Humans as Bottleneck
Pre-Gutenberg press, access to information was a bottleneck. Pre-internet, easy access to information was a bottleneck. Pre-LLMs, finding the right resources amid all that easily accessible information was a bottleneck. Now, with LLMs gushing out knowledge as tokens in milliseconds, I feel human comprehension will be the bottleneck. As a curious person, I would love to know about multiple topics, and thanks to LLMs, the diversity of topics I have become 'knowledgeable' about has probably increased.
Pre-LLMs, I would have had to search for the best sources to learn from, whereas now everything is a prompt away. Though it feels like I am getting more knowledgeable, to truly digest a field and draw novel insights from it, one needs to understand it deeply. That requires spending time thinking through the knowledge LLMs throw at us. The challenge isn't accessing information anymore; it's developing the cognitive frameworks to meaningfully connect and internalise what we encounter. I become sceptical of anyone who says they have 'quickly' learned n new topics through LLMs.
New knowledge creation requires discovery and invention, both of which require connecting different fields of thought and bringing new things to light. That is how humans have progressed over the centuries. To generate novel ideas, one cannot just know stuff. One has to live it: get truly engrossed in it and spend time with it. So, though LLMs give the dopamine hit of having 'learned' something by providing the key pointers of a field, truly grasping any body of knowledge requires spending enough time that our own internal neural network can etch that information in. Becoming proficient enough to solve novel problems will still require time and deliberate cognitive effort. But perhaps the opportunity lies not in using LLMs as knowledge delivery systems, but as thinking partners that help us ask better questions, test our reasoning, and create the kind of productive cognitive friction that leads to lasting understanding.
Update as of 31st Aug, 2025:
I chanced upon a recent paper by a team of Stanford researchers trying to tackle the above in a different manner. Their idea: since human bandwidth is the bottleneck, why not have the LLM present information in a form that reduces the bandwidth needed to comprehend it? They introduce the idea of Generative UI.
At the outset, Generative UI seems similar to OpenAI's Canvas or Claude Artifacts, but it differs in that the generated interfaces can be interacted with in real time. The team gives the example of asking an LLM how to play the piano. Today's LLMs start typing away in a chat format; Generative UI instead creates an interactive piano that guides you through basic scales and chords. So you move from a chat-first approach to a UI that reflects the question the user needs answered.
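To make the chat-first versus UI-first contrast concrete, here is a minimal, purely illustrative sketch (the type names, the callModel stand-in, and the rendering logic are my own assumptions, not anything from the paper): the model is prompted to emit a small declarative UI spec instead of prose, and the client picks a renderer that matches the question.

```typescript
// Hypothetical sketch of a "Generative UI" flow. Nothing here comes from
// the paper; the names and shapes are illustrative only.

type ChatResponse = { kind: "chat"; text: string };

type PianoLesson = {
  kind: "piano";
  // Keys the widget should highlight, in playing order.
  scale: string[];
  instruction: string;
};

type UiSpec = ChatResponse | PianoLesson;

// Stand-in for an LLM call that is prompted to return a UI spec as JSON
// rather than free-form chat text.
async function callModel(question: string): Promise<UiSpec> {
  if (question.toLowerCase().includes("piano")) {
    return {
      kind: "piano",
      scale: ["C", "D", "E", "F", "G", "A", "B", "C"],
      instruction: "Press each highlighted key in order to hear the C major scale.",
    };
  }
  return { kind: "chat", text: "Here is a written explanation instead." };
}

// The client chooses a renderer based on the spec, so the interface
// reflects the question instead of defaulting to a wall of chat text.
function render(spec: UiSpec): void {
  switch (spec.kind) {
    case "piano":
      console.log(spec.instruction);
      console.log("Interactive keys:", spec.scale.join(" -> "));
      break;
    case "chat":
      console.log(spec.text);
      break;
  }
}

callModel("How do I play a piano?").then(render);
```

In a real browser client the "piano" branch would mount an actual interactive keyboard component rather than printing to the console; the point is only that the model's output is a structured spec the interface is built from, not a transcript to be read.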
Twitter post by one of the authors of the paper here