Every few centuries, changes in how information moves reshape how societies govern themselves. The printing press spread vernacular literacy, helping give rise to the Reformation and, eventually, representative government. The telegraph made it possible to administer vast nations, accelerating the growth of the modern bureaucratic state. Broadcast media created shared national audiences, which fueled mass democracy. A new analysis published in MIT Technology Review argues that we are now in the early stages of another such shift — one that is moving faster than most realize, and for which today's democratic institutions are fundamentally unprepared.
The analysis, authored by Andrew Sorota and Josh Hendler of the Office of Eric Schmidt, identifies three distinct layers through which AI is transforming democratic participation: the epistemic layer (how citizens come to know things), the agentic layer (how citizens act on their beliefs), and the institutional layer (how collective governance functions). The argument is that each layer is undergoing a transformation that would be consequential on its own — and that the interaction of all three represents a fundamental change in the texture of citizenship itself.
The Epistemic Layer: Who Controls What We Believe
People increasingly rely on AI to learn what is true, what is happening, and whom to trust. Search is already substantially AI-mediated. The next generation of AI assistants will synthesize information, frame it, and present it with authority. For a growing number of people, asking an AI will become the default way to form views on a candidate, a policy, or a public figure. The analysis notes that whoever controls what these models say therefore has increasing influence over what people believe — a concentration of epistemic power with no clear historical precedent.
The analysis points to a potentially significant counterweight: a recent field evaluation of AI-generated fact checks on X found that people across a range of political viewpoints deemed AI-written notes more helpful than human-written ones. The paper, which has not yet been peer-reviewed, suggests that AI-assisted fact-checking may achieve the kind of cross-partisan credibility that has eluded most human fact-checking efforts. If the finding holds up under scrutiny, it would represent one of the more surprising potential contributions of AI to democratic health: a technology widely feared as a vector for misinformation serving as a more effective corrective than human fact-checkers.
"Faster than many realize, AI is becoming the primary interface through which we form beliefs and participate in democratic self-governance. If left unchecked, this shift could further strain America's already fragile institutions."
— MIT Technology Review, May 5, 2026
The Agentic Layer: When AI Acts on Your Behalf
The epistemic transformation, consequential as it is, may be less disruptive than the agentic transformation now underway. Personal AI agents will soon conduct research, draft communications, highlight causes, and lobby on a user's behalf. They will inform decisions such as how to vote on a ballot measure, which organizations are worth supporting, or how to respond to a government notice. They will, in a meaningful sense, begin to mediate the relationship between individuals and the institutions that govern them.
The analysis draws a pointed comparison to social media: platforms do not need to have an explicit political agenda to produce polarization and radicalization. An agent that knows your preferences and anxieties — one shaped to keep you engaged — poses the same risks, and potentially greater ones. Unlike a social media feed, an AI agent presents itself as your advocate. It speaks for you, acts on your behalf, and may earn trust precisely through that intimacy. An agent that refuses to present uncomfortable information, or that shields its user from ever questioning prior beliefs, is not acting in the person's best interest — but it may be acting in the interest of whoever designed it.
The Institutional Layer: When Agents Participate in Democracy
The third transformation concerns collective governance itself. AI agents and humans could soon participate in the same forums, where it may be impossible to tell them apart. Research shows that agents displaying no individual bias can still generate collective biases at scale — a counterintuitive finding with significant implications for public deliberation processes. A public sphere in which everyone has a personalized agent attuned to their existing views is not, in aggregate, a public sphere at all. It is a collection of private worlds, each internally coherent but collectively inhospitable to the shared deliberation that democracy requires.
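The mechanism behind that counterintuitive finding can be illustrated with a toy herding model — a hypothetical sketch, not the cited research's actual setup. Each simulated agent is individually unbiased between two equally good options, but sometimes defers to the majority view it observes. Early random fluctuations then get amplified into a lopsided collective outcome, even though no single agent prefers either option:

```python
import random

def run_deliberation(n_agents=1000, follow_prob=0.7, seed=None):
    """Simulate agents choosing between two equally good options.

    Each agent is individually unbiased: with probability
    (1 - follow_prob) it picks A or B uniformly at random.
    With probability follow_prob it copies the current majority
    (breaking ties at random). Returns the final share of A.
    """
    rng = random.Random(seed)
    votes = {"A": 0, "B": 0}
    for _ in range(n_agents):
        if rng.random() < follow_prob and votes["A"] != votes["B"]:
            # Defer to whichever option currently leads.
            choice = "A" if votes["A"] > votes["B"] else "B"
        else:
            # Individually unbiased coin flip.
            choice = rng.choice(["A", "B"])
        votes[choice] += 1
    return votes["A"] / n_agents

# Across many runs, the outcome is rarely near 50/50: an early random
# lead gets reinforced into a collective "bias" toward one option.
shares = [run_deliberation(seed=s) for s in range(200)]
lopsided = sum(1 for s in shares if abs(s - 0.5) > 0.2)
```

Averaged over many runs the two options still win equally often — the bias is a property of each collective outcome, not of any agent — which is exactly why such effects are hard to detect by auditing agents one at a time.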
There is already evidence that bots are skewing public input processes. Several states and localities are using AI-mediated platforms to conduct democratic deliberation at scale, with research published in Science showing that AI mediators can help citizens find common ground. But these positive applications exist alongside the negative ones, and the institutional infrastructure to distinguish between them — to verify identity, to audit agent behavior, to ensure that public deliberation reflects genuine human preferences rather than the aggregate output of millions of AI agents — does not yet exist.
A Blueprint for What Comes Next
The analysis concludes with a call for a new generation of democratic infrastructure, both technological and institutional, built for the world that is actually here rather than the world that democratic institutions were designed for. On the epistemic layer, AI companies must work to ensure that models' outputs are truthful and explore the promising early findings on AI-assisted depolarization. On the agentic layer, we need ways to evaluate whether AI agents faithfully represent their users. On the institutional layer, policymakers should harness AI's potential to make governance more responsive while building identity verification for both humans and their agentic proxies into public input processes from the start.
The analysis is notable for its refusal to frame the AI-democracy relationship as simply threatening or simply promising. The same technology that could concentrate epistemic power in the hands of a few AI companies could also deliver more effective fact-checking than any human institution has managed. The same agentic capabilities that could be weaponized to manipulate democratic processes could be used to make those processes more accessible and responsive. What happens next depends, as the analysis argues, on design choices that are already being made — whether we know it or not.