I Asked AI to Argue Against My Career Change. It Found the Thing I Was Hiding From Myself.
The Catholic Church invented a job called the Devil's Advocate. The task: when someone was being considered for sainthood, argue against them. Find the flaws. Surface the doubts. Make the case that this person shouldn't be canonized.
The role existed for 400 years because the Church understood something about human decision-making: when we want something to be true, we stop looking for reasons it might not be. The only fix is to assign someone the explicit job of looking anyway.
I thought about this when I was considering leaving my job to start a company. I had reasons. Good ones. A spreadsheet. Conviction. What I didn't have was anyone willing to tell me I might be wrong — because the people who cared about me wanted to be supportive, and the people who might disagree had no incentive to get involved.
So I tried something strange: I asked an AI to be my Devil's Advocate.
The Setup
I'd been at my company for four years. Good salary, good title, work I didn't hate. But I had an idea for a business, some savings, and the nagging feeling that if I didn't try now, I'd wake up at 45 wondering what happened.
My wife was supportive. My friends said go for it. My therapist helped me process the fear. Everyone was helpful. Nobody pushed back.
I knew I was in a bubble. I could feel myself collecting permission instead of information. So I went to DebateAI and set up a custom debate: "I should quit my job to start a company." I told it to argue against me.
What followed was uncomfortable in a way I didn't expect.
Round 1: The Financial Arguments (Expected)
The AI opened with the standard case. Runway calculations, failure rates, the opportunity cost of forgone salary and equity vesting. I'd heard all of this. I had answers.
"Your spreadsheet assumes you'll raise funding within 6 months," the AI said. "The current median time to seed round is 9 months, and that's for founders with prior exits. First-time founders average 14 months. Your runway math doesn't account for this."
Okay. Good point. I adjusted my model.
"You've budgeted $3,000/month for health insurance. The actual cost for a family of three in your state, at your age, without employer subsidies, is closer to $2,200 for a bronze plan with a $7,000 deductible. Your real expected healthcare cost, including out-of-pocket, is $4,500/month. This changes your runway by 4 months."
I hadn't actually priced insurance. I'd estimated. The AI was right.
We went back and forth on the financials for 20 minutes. By the end, my "18 months of runway" had become "11 months if nothing goes wrong, 7 if several things go wrong." Not fatal, but different from what I'd been telling myself.
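The adjustment above is just runway arithmetic: divide savings by monthly burn, and recompute whenever an assumption changes. A minimal sketch with hypothetical numbers (the savings and burn figures here are illustrative placeholders, not the ones from my actual spreadsheet):

```python
def runway_months(savings: float, monthly_burn: float) -> float:
    """Months of runway at a constant monthly burn rate."""
    return savings / monthly_burn

savings = 90_000  # hypothetical cash set aside

# Optimistic plan: $5,000/month burn looks like 18 months of runway.
print(runway_months(savings, 5_000))        # 18.0

# After repricing insurance (+$1,500/month) and correcting other
# line items, burn rises to roughly $8,200/month.
print(round(runway_months(savings, 8_200)))  # 11
```

The point isn't the formula, which is trivial; it's that small per-month errors compound into a very different answer when you divide them into a fixed pool of savings.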
Round 2: The Market Arguments (Also Expected)
The AI pivoted to the business case.
"Your competitive advantage is domain expertise from your current job. But your non-compete is 12 months, which means you can't sell to your existing network during the period when you're most cash-constrained. You're planning to build in a space where your only advantage is temporarily unavailable."
I hadn't thought about the non-compete that way. I'd been thinking about it as a legal constraint, not a strategic one.
"Your customer discovery consists of conversations with 7 people, all of whom you knew before having the idea. This is a biased sample. Three of them work in adjacent roles at your current company and have incentive to be encouraging. None of them have budget authority. You don't know if people who can actually buy would actually buy."
True. I'd been calling it "validation." The AI was calling it "talking to friends."
Still manageable. Uncomfortable, but manageable.
Round 3: Where It Found the Thing
Then the AI said something that stopped me.
"You describe your motivation as wanting to 'build something of your own' and 'have more control over your work.' But your startup idea is a B2B SaaS product that will require you to spend 18+ months doing sales calls, fundraising pitches, and managing engineers — three activities you've described elsewhere as draining. You're trading a job you find 'okay' for a job that's structurally worse for your stated preferences.
"This suggests your motivation isn't actually about the work. It's about the narrative of being a founder. You want to have started something more than you want to do the thing starting something requires.
"That's not wrong. But it's worth being honest about. Because if your actual goal is to feel like you took a risk and built something, there may be ways to achieve that without spending your savings on a company you'll be emotionally obligated to keep alive for years."
I sat there for a long time.
What the AI Found
The AI didn't have access to my psychology. It was just pattern-matching on what I'd written. But pattern-matching was enough.
The thing I'd been hiding from myself wasn't financial or strategic. It was motivational. I wanted the identity of founder more than I wanted the life of founder. I'd been so focused on the romance of quitting — the leap, the risk, the freedom — that I hadn't honestly evaluated whether the day-to-day work of this specific company was something I actually wanted to do.
The AI didn't know that. It just noticed that my stated values ("control," "building something") didn't match the activities that my plan actually entailed (sales, fundraising, managing people). That mismatch was sitting in plain text in my own descriptions. I couldn't see it. The AI could.
What I Did
I didn't quit.
Not because the AI told me not to — it was arguing a position, not giving advice. But because the exercise surfaced a question I'd been avoiding: Do I want to be a founder, or do I want to have been a founder?
The answer, honestly, was the second one. I wanted the story more than the substance.
That's not a permanent answer. Maybe in two years, I'll find an idea where the work itself excites me, not just the identity attached to it. If that happens, I'll go.
But this time, I was about to spend $200,000 and two years of my life on a narrative. A machine caught me. Which is embarrassing. But less embarrassing than finding out 18 months in, with my savings gone, that I'd been chasing a feeling instead of a goal.
Why This Worked
A human devil's advocate would have been better in some ways. They'd have had intuition, could have read my face, known when to push and when to back off.
But a human devil's advocate wouldn't have existed at all. None of my friends wanted that job. My wife was too close. A professional coach would have cost hundreds of dollars per hour and still would have been reluctant to tell me I was kidding myself.
The AI had no reluctance. No social cost to being harsh. No investment in our relationship. It just made the arguments — all of them, including the ones that nobody in my life would have made because they would have felt cruel.
The Catholic Church understood this: you need someone whose job is to disagree. The AI was that someone. I didn't like everything it said. But I needed to hear it.
If you've got a big decision coming — job, relationship, move, purchase — and you notice that everyone around you is being "supportive," that's a signal. Supportive is nice. But supportive doesn't find the thing you're hiding from yourself.
DebateAI lets you set up a custom debate on any topic. Tell it to argue the other side of your decision. You might win. But you might learn something about your own reasoning that no one else was willing to tell you.
I did. It wasn't fun. It was useful.