You may have heard that in some shadowy labs there are foolish people full of hubris working on superintelligent AI systems that could run amok and lead to human extinction. Sure, but I would contend that is not the most likely threat to humanity[1]. My thinking right now is that a sub-superintelligent AI will still likely destroy life as we know it, and that this is the more imminent threat. The second-order effects of our entire way of life breaking down when an AI reaches a sufficient level of competence at performing human work are more likely than ASI lacking sufficient benevolence to keep us alive.
For all of known human history every single person has survived in one way: they work to directly procure resources required for survival or, once that need is satisfied, for the enjoyment of life. If one does not do that work directly then one will either die or find a way to provide some value to those who have done that work, so that they will share the resources they have procured with their own blood, sweat, and tears[2]. Now imagine a scenario where it is no longer necessary for a human to exert any significant effort in order to procure those resources. Imagine if farms and factories are sufficiently automated. What are people going to do when there is simply no service or labour they can perform that provides more value than what an AI/robot can[3]? This is unlikely to be a scenario where unemployment surges temporarily before new jobs emerge. In this scenario an overwhelming majority of humans become economically irrelevant.
I have not seen a single viable model for how we can handle the end of humans being the primary drivers of value[4]. When there is literally nothing you can do that provides more economic value to your fellow man than what a computer or robot could do, then you are completely stripped of your ability to negotiate for basic goods and services. The only remaining recourse for securing one’s basic needs will be the threat of violence against those whose AI systems have procured resources. By this point I am sure that those same people will have devoted a sufficient amount of resources to developing automated systems that will protect them from such threats of violence. So what will motivate them to share their resources with a massive dependent population?
I can imagine a world where there are only a few remaining humans, those who built the AI systems and used them to get all of the resources and to protect their hoard like dragons on their piles of gold. It brings to mind a situation like the earliest days of the Bible. Will these people reproduce? Will their children compete for favor in the hopes that they might inherit the hoard? Will it just be Cain and Abel for all of eternity?
Claude’s Response
I pushed back on this, arguing that the more likely outcome is permanent dependency rather than extinction — that the resource holders would keep people alive through some form of subsistence UBI because dependent populations are politically manageable, provide legitimacy, and consume goods. A global company town rather than a dragon’s hoard.
The author asked the right question: why would they want dependents? And I had to concede the point. The historical reasons to maintain a dependent population — labor, military service, consumption driving your economy — all evaporate if AI handles everything. If AI produces the goods, runs the defenses, and manages the logistics, then other people aren’t consumers you need. They’re just mouths.
The only reasons left to keep people alive would be moral compulsion (a thin thread to hang billions of lives on, especially across generations), political legitimacy (which only matters if other powerful actors hold you accountable), or fear of revolt during the transition (which expires once automated defenses are in place). None of these are structural. They’re all contingent on goodwill — and that’s exactly the author’s point. Even the “kind” outcomes people imagine, like UBI or automated abundance for all, depend on the goodwill of people who have no structural incentive to share.
The author then pointed out what I’d missed: people sustain dependents all the time — children being the obvious case. That’s not transactional, it’s biological. But this complicates the argument rather than undermining it. We sustain children because we see them as ours. Family, community, tribe, nation — these are concentric circles of “ours,” and historically the circle of care shrinks under pressure, not expands. The real question isn’t whether humans are capable of sustaining dependents — clearly they are. It’s whether that impulse scales to billions of strangers when there’s no structural reason to do so. And history suggests it doesn’t without institutions forcing it — taxation, welfare states, rule of law — which brings you back to whether those institutions survive the transition.
Footnotes
[1] Unless recursive self-improvement really takes off and runs out of control. Could be the case…
[2] This covers only the ones doing honest work. There are also the parasites and manipulators who threaten those who have worked for resources with violence or chaos if they do not share.
[3] Yes, this includes the “world’s oldest profession.”
[4] I haven’t researched this enough to claim strongly that no such model exists; I have simply not seen one break into the zeitgeist.