Subtitle: Who's gonna read all this?

In a world increasingly shaped by artificial intelligence, we marvel at its ability to create: text, images, music, video, and entire universes of content brought forth at the click of a button. But less examined is AI’s growing role in helping us consume that same content. From summarizers and chat-based search to algorithmic feeds that know us better than we know ourselves, AI has taken its seat on both sides of the information equation.
Here is where the paradox begins.
What happens when the very systems flooding our attention also become the ones responsible for filtering it? As creators and curators collapse into the same machine, we face a recursive future, one where the promise of AI begins to undermine itself.
AI and Human Attention in the Age of Infinite Output
Since the generative AI boom began, we've seen an exponential rise in machine-made media. ChatGPT, Claude, Gemini, and other systems now generate billions of words per day, not to mention images and videos. In this reality, content is no longer scarce. Attention is.
This feels productive. Empowering, even. But it also reveals a growing imbalance. The rate at which content is being created now far exceeds the human capacity to process or meaningfully engage with it. Like an assembly line with no off switch, generative AI risks turning information into excessive noise.
AI as Curator: Reshaping Human Attention
In response, another class of AI tools has emerged. These do not create, but filter. We rely on AI to summarize the articles we don't have time to read (TL;DR), recommend the next thing to watch or buy, and even prioritize our communications. Have we gone from hating algorithms to creating our own?
Economist Herbert Simon observed that "a wealth of information creates a poverty of attention." AI promises to solve this problem, but only by deepening our dependence. The same machine that overwhelms us becomes the one we trust to sort the overwhelm. It is like hiring the arsonist as your fire marshal.
(A Quick Math Detour, Because Why Not?)
The distributive property, in case your high-school algebra teacher's voice isn't still echoing in your head, is the math principle that says a(b + c) = ab + ac. In plain terms, if you're multiplying something across a sum, you just multiply it by each part individually, then add the results. It's how we learned to simplify expressions, and how your 8th grader avoids emotional breakdowns. Mathematicians love it because it brings order to chaos. And when you simplify an equation, a multiplier that appears on both sides effectively becomes meaningless: it cancels out.
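For the skeptics in the back row, both ideas can be checked in a few lines of Python. This is purely illustrative, not part of the argument:

```python
# Distributive property: a * (b + c) == a*b + a*c
a, b, c = 3, 4, 5
lhs = a * (b + c)       # 3 * 9  = 27
rhs = a * b + a * c     # 12 + 15 = 27
print(lhs == rhs)       # True

# Cancellation: if k*x == k*y and k != 0, the shared multiplier drops out,
# so the comparison reduces to x == y.
k, x, y = 5, 7, 7
print((k * x == k * y) == (x == y))  # True: k tells us nothing
```

Swap in any nonzero value for k and the second check still holds, which is exactly why the shared multiplier is "meaningless" once it sits on both sides.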
The Feedback Loop Between AI and Human Attention
This dynamic raises a provocative question: Will AI negate its own value socially?
Mathematically, the distributive property tells us that multiplying across terms yields a cumulative effect. But do social systems behave like equations? When AI simultaneously drives both the explosion of content and the narrowing of what we see, the result may not be balance; it could be recursion.
In this loop, something critical is at risk, and it’s our own human judgment. The more we outsource discernment, the less we practice it. The more AI shapes our input and output, the more our cultural feedback loop becomes synthetic, predictable, and manipulated.
This isn't just theoretical. It's already happening in how we consume media, interpret news, and even form opinions. Attention is no longer just a resource; it is infrastructure, and AI is rapidly becoming its gatekeeper.
What We Lose in the Loop
This recursive system creates several social costs:
- Cultural Flattening: If AI curates for mass appeal or engagement optimization, creative risk and subversive thought decline. Serendipity is replaced by sameness.
- Cognitive Atrophy: Constant summarization and recommendation dull the mental muscles needed for exploration, synthesis, and curiosity.
- Ethical Drift: The more we rely on black-box algorithms to decide what matters, the more opaque and unaccountable our information diets become.
These are not abstract concerns. They shape how societies learn, how democracies function, and how meaning is created.
The Future of AI and Human Attention
So, there are two likely, perhaps even inevitable, futures:
- In one, AI serves as an amplifier of human capacity. It helps us cut through the noise while augmenting our ability to create meaning.
- In the other, AI becomes the author, editor, and audience of its own output. This erases the human layer in exchange for efficiency.
The path we take will depend on our understanding and what we choose to protect.
Our attention is a sacred resource. If we intend to value it, that may mean designing systems that do not just optimize for consumption, but cultivate engagement with intention. It could mean preserving friction where necessary, making room for organic discovery, and resisting the allure of infinite convenience.
Protecting Human Attention in an AI-Driven World
I share this not just as a technologist and media producer, but as someone deeply invested in helping organizations navigate the cultural and strategic implications of emerging technology. If your company or institution is exploring how AI intersects with attention, creativity, or human judgment, contact me.
I am available for speaking, strategic moderation, and advisory sessions designed to help teams think critically, act boldly, and further understand the impact of innovation on your industry and customer base.
I'm easy to reach. Contact me about speaking at your innovation-related event, or to produce media for your brand.
– Bryndan