How Could Superintelligence Go for the Average Person?

The development of artificial general intelligence (AGI), and superintelligence soon after, will be one of the most significant events in history, if not the most significant. The average person will likely have little to no effect on the trajectory of AI's development. But its effect on them will be great.

Philosophers and futurists who try to predict the shape of things to come often talk in grand terms about the fate of the cosmic endowment. But this zoomed-out perspective can sometimes miss what’s happening to the average person in the scenarios the philosophers debate. For someone like you or me, what could the future hold in store? Here’s our guess at the range of possible outcomes. The scenarios are ordered from worst to best in expectation, irrespective of how probable we think they are.

(1) Malicious AI Takeover. Extreme suffering. Superintelligence causes an s-risk, a risk of astronomical suffering. See Scott Alexander's description of hell in the book Unsong (warning: graphic) or Rational Animations' video on fates worse than extinction.

(2) Extinction. The first superintelligence is misaligned. It views us the way we view insects: not with malice, but with callous indifference. Our extinction might be a side effect of the superintelligence pursuing any number of goals, and it will probably be swift and mostly painless. There are other pathways too. AGI dramatically accelerates science, causing us to discover recipes for ruin faster than we can suppress or guard against them; some malicious or incautious actor eventually cooks up one of those recipes, killing us all. We could also go extinct if great power conflict in the lead-up to AGI turns to thermonuclear war, or worse.

(3) Malicious Human Takeover. A malevolent human dictator (or malevolent oligarchy) uses AGI to take over and create an indefinitely stable global regime. They punish their enemies. Unlike every other dictatorship in history, this regime is not limited by human administrative capacity. Stalin couldn't monitor every conversation in the Soviet Union, but with AI, he could have. The dictator doesn't run the world well. The average person's quality of life is dramatically lower than it was in the early 2020s.

(4) Repugnant Conclusion. Many digital persons are created, but because of Malthusian trap dynamics, the marginal person’s standard of living is just high enough for them to be militarily/economically useful.[1] This need not be anyone's intention. Overpopulation just follows naturally from the ease of creating digital persons (perhaps uploaded from original humans). If digital persons are allowed to replicate without limit, egalitarian systems will end up spreading scarce goods over a huge number of consumers. This dynamic can be regulated against, but it won't be easy.[2]
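
To make the Malthusian dynamic above concrete, here is a minimal toy simulation (our own illustration; the resource stock, subsistence level, and replication factor are arbitrary assumptions, not figures from the article or from Hanson):

```python
# Toy model of Malthusian dynamics among digital persons.
# All numbers are arbitrary assumptions chosen for illustration.

RESOURCES = 1_000_000.0  # total goods produced per generation (arbitrary units)
SUBSISTENCE = 1.0        # income at which a person is just economically viable
GROWTH = 2.0             # replication factor while life is better than subsistence

population = 1_000.0
for generation in range(30):
    # Egalitarian split: scarce goods are spread over every consumer.
    income = RESOURCES / population
    print(f"gen {generation:2d}: pop {population:>12,.0f}  income {income:10.2f}")
    if income <= SUBSISTENCE:
        break  # Malthusian equilibrium: the marginal person is at subsistence
    population *= GROWTH  # copying digital persons is cheap, so population doubles
```

In this toy setup, per-capita income halves every generation until it hits the subsistence floor, which is why any regulation would have to target the replication step itself.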

(5) Human Zoo. An autonomous AI system seizes absolute political power but does not kill everyone. If it preserves some humans for study (out of moral uncertainty, or for the sake of acausal trade with other AIs), the quality of life for those survivors will be high. The AI may take brain scans of the ones it does not choose to propagate through time. This scenario is essentially interchangeable with an autonomous AI coup.

(6) Gradual Disempowerment. See also the intelligence curse. Humans are slowly out-competed by AI in the economy of ideas, the literal economy, and the political arena. Automated corporations with AI workers soak up all demand. A full realization of Marx's prediction: capital fully replaces labor, workers become expendable, and human wages are permanently driven to subsistence levels.

(7) No AGI. AGI never arrives. Ever.

  • The governments of the world agree to collectively ban artificial superintelligence. They destroy all the advanced chip fabrication technology. Research on superintelligence becomes taboo within the scientific community, just as research on human germline gene editing is taboo today.[3]

  • The competitive pressures currently pushing toward AGI stop of their own accord because the economic benefits of AI are too hard to realize. Investment dries up. Another AI winter sets in, and this one never thaws. Moore's Law and METR’s Law stall indefinitely.

  • We enter into a stable mutual deterrence stalemate. Whenever a great power comes close to developing superintelligence, the other powers threaten it with force, and the renegade backs down.

This outcome would be much better were it not for the fact that we sit close on the tech tree to many scary technologies. Even without AGI, the baseline level of existential risk is high. Preventing recipes for ruin from being discovered and misused might require intrusive, AI-enabled civilizational restraint.[4]

(8) Benevolent Dictatorship. An enlightened human dictator or benevolent AI takes over. We get lucky, like Singapore did with Lee Kuan Yew. Whoever gets to steer the future cares about the wellbeing of the average human, and as such, the average person’s life improves along many dimensions. On the other hand, the vast majority of persons have little agency and next to no say over how the long-term future plays out.[5]

(9) Tool AIs. Capabilities keep rising, but AI remains more a tool or an oracle than an agent. We get a cure for cancer, but no digital minds right away. Society has enough time to adapt to technological change and to share the benefits of post-scarcity with all its members. The grand transhumanist vision is realized in the fullness of time, though it may come too late for this generation. Perhaps this happens because the world paused AI, or made a concerted decision to slow down and let the AIs do our alignment homework for us.

(10) Long Reflection. The future comes quickly, but it turns out well. We get lucky. The intelligence explosion doesn't spin out of control. Our institutions adapt, and the superintelligences remain under human control. We learn how to use them safely as advisors, and they prevent us from doing anything too stupid with our technological superpowers—anything we would later regret. We are wise and humble. And we win.

[1] For an explanation of why societies of digital persons will tend toward the Malthusian equilibrium, see chapter 12 of Robin Hanson's The Age of Em.
[2] Thanks to Samuel Ratnam and Adam Khoja.
[3] This also happens in the Dune novels after the Butlerian Jihad.
[4] Thanks also to Adam Khoja for pointing this out.
[5] Under a benevolent dictatorship, humans could still enjoy a good deal of personal choice. But they would lack power and political choice.

This article was co-written with Jason Hausenloy. We thank Samuel Ratnam and Adam Khoja for informative discussion and feedback.

CC BY-NC-SA 4.0 Miles Kodama.