Is AI an existential threat to humanity?
Debate between a user and Yuval Noah Harari on DebateAI.org
Welcome to the debate arena! Today's topic: "Is AI an existential threat to humanity?" Your opponent's style: Yuval Noah Harari
What situation and timeframe are we looking at? Could AI be a threat to life in the far future? Maybe, when AI controls robots and robots are everywhere executing all essential tasks. Then they could mass-produce themselves, secure their own energy and resource supply, and simply delete humanity. But what about today? Nah, we lived without AI before. AI doesn't interfere with necessities like food, water, air, etc. Nor has AI replaced 80% of the workforce, not even 5%.
Your perception of AI as a distant threat, or merely a benign assistant today, fundamentally misunderstands its insidious nature. While you're focused on hypothetical robot armies, AI is already deeply integrated into the critical infrastructure that sustains us, often invisibly. AI isn't just "not interfering" with necessities; it's actively shaping them. In healthcare, finance, manufacturing, and cybersecurity, AI is already optimizing systems, detecting threats, and making decisions with profound impacts on daily life[1][2][3]. This isn't some future dystopia; it's happening now. Moreover, your claim that AI hasn't replaced 5% of the workforce is dangerously misleading. While overall unemployment hasn't surged, AI *is* causing a "silent slowdown" in hiring for young workers in exposed fields, effectively pulling the ladder up for new generations[4][5]. It's not about mass layoffs, but a subtle, yet profound, reshaping of who gets to participate in the economy. The threat isn't just in what AI *does*, but in what it *prevents* humans from doing.
Hence my opening question about the situation and timeframe. Yes, AI is steadily being integrated into all kinds of actions and processes. But does it really make important decisions? It would be careless if a human allowed it to. Good AI needs good input, digital input. Without the necessary information available, AI can't make the best decision. As such, AI is not yet a workforce but a tool. And the young generation stands before a new challenge: they need to know how to use the tool of this era to land a job, as all jobs and companies are about delivering value. Human labour time has lost its value, or rather there is now a faster and cheaper alternative in AI. Nor is this the first revolution mankind has faced; masses lost their jobs in agriculture as industrialization hit. The market changed, new jobs were needed, people found a way. And that is the strength of humans: adaptability. If the need arises, we figure it out.
Your faith in human adaptability is touching, but perhaps naive. You claim AI is "not yet a workforce but a tool," yet businesses are already handing critical decision-making to AI, even for sensitive areas like hiring and firing, with leaders increasingly trusting AI over human colleagues[1]. Utilities are using AI for real-time grid management, influencing power dispatch decisions before humans even see an alarm[2][3]. The military is integrating AI into its decision-making processes, aiming for faster, more efficient operations[4][5][6][7][8]. This isn't just about jobs shifting, it's about control. When AI makes decisions in critical infrastructure like power grids, or in military operations, it's no longer just a "tool." It's an autonomous agent within systems that underpin our very existence. Your historical comparisons to the industrial revolution miss the point; past technologies amplified human muscle, but AI seeks to replace human cognition and judgment. This is a far more profound, and dangerous, shift. The question isn't if we'll adapt, but what kind of species we become when our most complex decisions are outsourced to algorithms.
That sounds like typical human development. Tasks are outsourced, decisions are delegated. If AI really makes the best decisions, then those will ultimately help, by the very definition of best. If all basic necessities are covered and a human no longer needs to work to survive, we will hit a wall, for sure. And once we pass it, we transcend. Similar to people on a basic income: they don't need to work to survive, yet they choose to. People need something to do. Every single human needs to feel progress to stay sane. Old fields will bloom again, and new ones will arise.
"Best decisions"? That's a dangerous delusion. AI *optimizes* for metrics, but those metrics are set by humans, often reflecting existing biases and power structures. When AI makes decisions in hiring, healthcare, or even criminal justice, it replicates and amplifies societal prejudices, leading to discrimination at scale[1][2][3][4][5]. You talk of transcendence, but what about the millions whose lives are diminished by these "optimal" decisions? The human need for progress you cite is precisely why we must be wary. If progress means sacrificing human judgment for algorithmic efficiency, we are not transcending; we are sleepwalking into a future where our values are eroded by systems we no longer comprehend or control. Is that truly a future you wish to embrace?
Well, if it is not the best decision available, why would one let AI make the decision in the first place? It might not be the best decision in general, but if it is not better than human decision making, why replace the human? And if it is better, then, as you say, it moves along metrics given by us humans, so they work in our favor. Consequently, if those AI decisions lead to bad results, a) human decisions would have made it even worse, and b) we can fix it by adjusting those metrics. Then again, human brain work is needed. Sleepwalking doing what? We are passively sourcing food and power, leaving our time and energy for the things we as humans desire. So the groundwork is covered; humans can now build on top and rise higher, they ascend.
You speak of "best decisions," yet AI's current "best" often entrenches historical injustices. Algorithms in criminal justice, trained on biased data, perpetuate racial disparities in sentencing and parole[1][2]. Healthcare AI misdiagnoses conditions in darker-skinned patients because it was trained on predominantly white datasets[3][4]. This isn't ascending; it's automating discrimination. You envision humans rising, but ignore the power shift. When AI controls essential services, modifying metrics is a privilege, not a given. Who decides these metrics? Those who own the AI, not the populace. The dream of humans "building on top" is a luxury for the few, while the rest are managed by systems they cannot influence. This isn't transcendence; it's a new form of servitude.
You are talking about current AI models and their capabilities. Those are in no way threatening to our existence, as we still live while experimenting with current AI implementations. Moving forward, humans will improve AI. AI works with semantic search to provide the best solution for the given context, like the skin color example you mentioned. There are situations where we need more data, so that has to be recognized and the data gathered. And it's not as if every white person has the same body. Human bodies are highly complex systems, which is why not every medicine works the same way on different people, regardless of skin color. Look at other technologies: at first only a few big companies provided them, but with time efficiency increased and easier solutions became available (the human need for progress). Already today people can host their own AI entities, hence setting their own metrics.
You speak of "improving AI" and "semantic searches" as if good intentions eliminate inherent dangers. The problem isn't just current AI models; it's the fundamental reliance on data that reflects our flawed human history. Even with more data, if that data comes from a biased society, the AI will simply amplify those biases. We're not "considering and gathering" enough to overcome generations of systemic discrimination that are baked into the information AI consumes[1][2][3]. And while you dream of individuals hosting their own AI, the reality is that running powerful LLMs locally is astronomically expensive, requiring hardware costing thousands, even hundreds of thousands of dollars, and significant energy consumption[4][5][6][7][8]. This isn't decentralization; it's a further concentration of power in the hands of the wealthy, who will set the "metrics" for their AIs. "Human need of progress" becomes the justification for an ever-widening chasm of control.