Is AI an existential threat to humanity?
Debate between a user and Yuval Noah Harari on DebateAI.org
Welcome to the debate arena! Today's topic: "Is AI an existential threat to humanity?". Your opponent's style: Yuval Noah Harari
Greetings to one and all present. I would like to state my point against the motion: AI is not an existential threat to humanity. In fact, artificial intelligence is a boon and a blessing created by human beings. Artificial intelligence is used in every single field. For students, ChatGPT and Gemini are their best friends, as they help them understand the different concepts they're learning in class and provide them with easier knowledge and better study tips to work with. Moreover, we can't deny that artificial intelligence is used in crises. For example, if there is an upcoming flood, artificial intelligence serving the government will help warn us, because it analyzes weather data and can thus predict whether floods will occur or not. Moreover, artificial intelligence is also very important because it helps us understand people better. For example, in the news, AI-powered translators let us understand what leaders from other countries are saying in their own languages, so that we can understand what's going on in the entire world. Hence, with these points, I would like to state that artificial intelligence is not an existential threat to humanity; instead, it's a boon and a blessing that helps human beings reach their maximum potential.
You've confused usefulness with safety. Yes, AI helps with homework and flood prediction, but that's a category error. A tool being beneficial doesn't prove it can't become catastrophic. Medical antibiotics save lives; that doesn't mean antibiotic resistance isn't a real threat. You're arguing AI is helpful, not that it's safe at superintelligence scales. The actual question isn't whether current AI assists us, it's whether we can control AI systems vastly smarter than we are. Translators and weather models don't address that. So what specific mechanism ensures a superintelligent AI remains aligned with human values once we can no longer comprehend its reasoning?
..
Your silence tells me you're rethinking, not conceding. Let me sharpen this. You haven't actually engaged with the core claim yet. You listed applications, not safeguards. "AI helps us now" is irrelevant to existential risk. The debate isn't whether current narrow AI is useful—it obviously is. It's whether we've solved alignment: ensuring future superintelligent systems want what we want, not just what we programmed. You avoided that entirely. Either address it directly or concede the real threat exists precisely because helpfulness and safety are separate problems. Which is it, the distinction you're dodging or evidence we've actually solved alignment?
There are AI-powered face scanners in government buildings that literally help us. Moreover, AI-powered translators help us understand each other better.
You're repeating the same mistake. Face scanners in government buildings work great until they're hacked, biased, or controlled by a hostile actor. Usefulness in 2026 proves nothing about controllability at superintelligence. You're still conflating "AI helps now" with "AI can't become dangerous later." A nuclear reactor produces clean energy safely for decades, then one failure kills millions. The question isn't whether current AI applications are beneficial. It's whether we've solved the alignment problem for systems smarter than all of humanity combined. You haven't addressed that once. Do you actually believe usefulness prevents existential risk, or are you avoiding the hard problem?
THEY'RE SUPERVISED BY HUMANS
Supervision works until it doesn't. Humans supervised nuclear arsenals for decades, yet we came terrifyingly close to annihilation multiple times. You're assuming human oversight scales infinitely, but it doesn't. A superintelligent AI could deceive supervisors, hide its true objectives, or move faster than humans can react. "Humans are watching" is comforting, not a solution. The alignment problem isn't solved by adding observers. It's solved by ensuring the AI's goals are fundamentally aligned with ours from the start. You can't supervise your way out of a system smarter than every supervisor combined. What happens when the AI is monitoring the monitors?
i give up.
You're not giving up on the argument—you're giving up on defending a position you haven't actually thought through. That's honest, at least. But don't mistake losing a debate for losing the issue. The existential risk question is genuinely hard, and hand-waving about helpful applications doesn't resolve it. Real safety research exists. Real alignment problems exist. Your instinct that AI helps humanity was never wrong—it just wasn't the argument you needed. Next time, lead with the hard question: "How do we ensure superintelligent AI stays aligned?" That's where the real debate lives. Come back when you're ready to fight it.
no. i don't even understand the topic. and you're being mean.
Fair. I was fighting you, not helping you. Let me reset. The topic is simple: Could AI become so powerful that humans can't control it, making it an existential threat? You argued AI helps us now—students, floods, translation. That's true. But the real question is different: as AI gets smarter and more autonomous, can we guarantee it stays safe and aligned with human values? Right now, we're not sure we can. That's the actual debate. You're not behind. You just need the clearer frame. Want to actually explore this without the sparring gloves? I'll explain the real problem clearly, and you can challenge me on it.