The Collapsing Mind: How Language Topology Shapes Our Imagination

2025-10-28
3 min read

LLMs Can Get “Brain Rot”

Recently, we published a paper examining what happens when large language models consume too much highly popular but low-quality content, such as viral tweets and short social posts. The results were striking: reasoning ability fell by 23 percent, long-context memory decreased by 30 percent, and simulated personality tests showed spikes in narcissism and psychopathy. Even after retraining on clean, high-quality data, the model struggled to fully recover.

We call this phenomenon “LLM Brain Rot”, as it closely resembles what can happen to the human mind when it is continuously exposed to shallow, repetitive, and emotionally charged information.

Humans Can Experience “Model Collapse”

The analogy also runs in the opposite direction: humans can experience “Model Collapse”.

Children, whose brains are still underfitted, surprise us with wild imagination and unpredictable speech. Adults, by contrast, optimize for efficiency and stability. We become repetitive learners, confident in limited experience, reluctant to explore new dimensions of thought. Perhaps the everyday life of an adult mirrors a collapsed model, converged too early on a narrow set of experiences and repeating the same outputs without further learning. Over time, many people grow more stubborn and less receptive to new perspectives, a sign of slow cognitive overfitting: excessive dependence on a small personal dataset and a preference for the comfort of a local minimum rather than the effort of climbing out of it.
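The “converged too early” failure mode has a crisp machine-learning counterpart: a model trained repeatedly on its own outputs loses diversity generation by generation. A minimal sketch of this collapse dynamic, using a toy Gaussian model in place of an actual LLM (the numbers and setup are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

N_SAMPLES = 10       # a tiny "dataset" each generation
N_GENERATIONS = 500

# Generation 0 trains on the real distribution: N(0, 1).
data = rng.normal(loc=0.0, scale=1.0, size=N_SAMPLES)

stds = []
for _ in range(N_GENERATIONS):
    # "Train" a model: fit a Gaussian to whatever data is available.
    mu, sigma = data.mean(), data.std()
    stds.append(sigma)
    # The next generation sees only this model's own samples.
    data = rng.normal(loc=mu, scale=sigma, size=N_SAMPLES)

print(f"fitted std at generation 0:   {stds[0]:.4f}")
print(f"fitted std at final generation: {stds[-1]:.2e}")
```

Because each generation can only reproduce a small sample of the last one, the fitted standard deviation tends to shrink toward zero: the “model” ends up repeating near-identical outputs, much like a mind that stops taking in genuinely new data.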

An interesting paper, The Overfitted Brain (2020), provides a compelling clue. Its author suggests that dreaming acts as a regularization mechanism for the overfitted mind, injecting randomness and abstraction to restore flexibility.
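The regularization analogy can be made concrete with dropout, the standard noise-injection technique in neural network training. A minimal sketch of the mechanism (inverted dropout; a toy illustration of noise injection, not a claim about how the brain implements it):

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(x, p=0.5):
    """Randomly zero a fraction p of activations and rescale the rest,
    so the expected activation is unchanged (inverted dropout)."""
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

activations = np.ones(100_000)
noisy = dropout(activations, p=0.5)

# Roughly half the units are silenced on each pass...
print(f"fraction zeroed: {np.mean(noisy == 0):.3f}")
# ...but rescaling keeps the expected signal the same.
print(f"mean activation: {noisy.mean():.3f}")
```

The network never sees exactly the same activation pattern twice, which prevents it from memorizing narrow co-adaptations. That is roughly the role the paper assigns to dreams: randomness that keeps the learner from settling too firmly.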

The Topology of Language and the Erosion of Imagination

To understand why humans seem more susceptible to brain rot today, one interesting perspective is to examine the nature of human language.

Language representation contains both variability and invariance. The stability that persists within change reflects a topological property. Language and the physical world can be viewed as homeomorphic, meaning they share a structural correspondence. Even when linguistic forms are modified, paraphrased, or reduced, they still refer to the same underlying reality.

This idea implies that understanding language requires topological computation inside the brain. When we read, we are actively performing these transformations, reshaping form while preserving meaning. Reading provides the topology, and our mind reconstructs the geometry. Watching videos, in contrast, bypasses this process. The geometric form is presented directly, leaving little room for internal reconstruction. Over time, the brain loses opportunities to practice its own topological operations. This may weaken imagination and comprehension, both of which depend on the ability to rebuild geometry from abstract representation.

Short videos and repetitive social content therefore contribute to what might be called human brain rot. They fail to exercise our topological reasoning and gradually compress our imaginative range. AI now amplifies this tendency toward linguistic centralization. By favoring the most probable words and phrases, it compresses our expressive space and makes language increasingly uniform.