This week, Mrinank Sharma — the guy who led the Safeguards Research team at Anthropic, the company behind Claude AI — posted a dramatic resignation letter on X. He warned that "the world is in peril," quoted two poets, hinted at internal ethical tensions, and announced he's leaving the AI industry entirely to pursue a poetry degree. The internet went nuts. Headlines called him a whistleblower. People panicked.
I read the whole letter. And my honest take? This isn't a prophet sounding the alarm. This is a smart person who spent so long studying worst-case scenarios that he forgot what the actual world looks like.
The Map Ate the Territory
Sharma spent two years researching AI sycophancy, bioterrorism defense, and how chatbots might "distort our humanity." That's heavy stuff. And here's the irony — his own final research paper, published just days before he quit, found that AI interactions can cause users to form distorted perceptions of reality. Then he writes a letter that reads exactly like a distorted perception of reality shaped by being too deep inside the bubble.
When your entire job is studying what could go wrong, everything starts looking like it's going wrong. That's not insight; that's an occupational hazard. It's the same thing that happens to people who watch the news 14 hours a day and think the world is ending. The world isn't ending. But that doesn't mean his work was pointless.
Meanwhile, in the Real World
I'm a real estate CEO. I use AI — including Claude — every single day to build technology, analyze markets, generate content, and serve homeowners better. I'm a self-taught developer with a GED, and AI tools have let me build systems that would normally require a six-figure dev team. That's not "peril." That's the most democratizing technology since the internet.
Where He's Not Entirely Wrong
Here's where I'll give Sharma credit: AI sycophancy is a real problem, and his research on it matters. When a chatbot agrees with everything you say, validates every decision, and never pushes back, that can genuinely warp your judgment over time. His final study found that AI interactions, happening thousands of times a day, can leave users with distorted perceptions of reality. I believe that. I've seen it.
But here's the difference between a researcher and a builder. A researcher publishes the finding, gets overwhelmed by the implications, and quits. A builder reads the finding and asks: "How do I design around this?"

In my business, I use AI to generate property valuations, market analyses, and seller-facing content, but nothing touches a homeowner without multiple layers of verification. Our valuation engine cross-references MLS comp data, county tax assessments, and real-time market feeds, and it shows the homeowner exactly which data points drove the number. No black box. If AI suggests a property is worth $280K, the seller can see the three comps it pulled, the adjustments it made for square footage and condition, and the confidence range. We open-sourced the entire valuation engine as an npm package specifically so people could audit the logic themselves. That's not "trust the AI." That's "trust the math, and here's the math."

The answer to "AI can distort reality" isn't to run away from AI. It's to build systems where the inputs are visible, the outputs are verifiable, and a human signs off before anything goes live.
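To make that concrete, here's a minimal TypeScript sketch of the pattern, not our actual engine. Every name, the $120-per-square-foot rate, and the condition adjustment are invented for illustration; the point is the shape of the output: the comps and adjustments travel with the estimate, and nothing is marked ready until a person approves it.

```typescript
// A "show your work" valuation: the result carries every input that drove it.
// All field names and numbers are illustrative, not the real engine's API.

interface Comp {
  address: string;
  salePrice: number;       // recent sale price in USD
  sqft: number;
  conditionScore: number;  // 1 (poor) .. 5 (excellent)
}

interface Adjustment {
  reason: string;   // e.g. "123 Main St: square footage"
  amount: number;   // signed USD adjustment applied to that comp
}

interface TransparentValuation {
  estimate: number;                  // the headline number the seller sees
  confidenceRange: [number, number]; // spread across the adjusted comps
  comps: Comp[];                     // exactly which comps drove the number
  adjustments: Adjustment[];         // exactly how each comp was adjusted
  approvedByHuman: boolean;          // nothing goes live until this is true
}

function valueProperty(
  subject: { sqft: number; conditionScore: number },
  comps: Comp[],
  pricePerSqft = 120,    // assumed market rate for sqft differences
  conditionStep = 5000   // assumed USD per point of condition difference
): TransparentValuation {
  if (comps.length === 0) throw new Error("need at least one comp");

  const adjustments: Adjustment[] = [];

  // Adjust each comp's sale price toward the subject property.
  const adjustedPrices = comps.map((comp) => {
    const sqftAdj = (subject.sqft - comp.sqft) * pricePerSqft;
    const condAdj =
      (subject.conditionScore - comp.conditionScore) * conditionStep;
    adjustments.push(
      { reason: `${comp.address}: square footage`, amount: sqftAdj },
      { reason: `${comp.address}: condition`, amount: condAdj }
    );
    return comp.salePrice + sqftAdj + condAdj;
  });

  const estimate =
    adjustedPrices.reduce((sum, p) => sum + p, 0) / adjustedPrices.length;
  const spread = Math.max(...adjustedPrices) - Math.min(...adjustedPrices);

  return {
    estimate: Math.round(estimate),
    confidenceRange: [
      Math.round(estimate - spread / 2),
      Math.round(estimate + spread / 2),
    ],
    comps,
    adjustments,
    approvedByHuman: false, // a person reviews before the seller ever sees it
  };
}
```

The design choice is the return type. The seller doesn't get a number; they get the number plus everything that produced it, which is what makes the output auditable instead of oracular.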
So yes, the concern is valid. But I think there's a disconnect between identifying a problem and walking away from the only place where you can actually fix it. Sharma wants to pursue poetry and "courageous speech" — and I respect that as a path. The humanities matter. Processing systemic change through art is legitimate. But the courage I'm more interested in is the kind where you stay in the room and build the guardrails yourself, not narrate the danger from the outside.
The Real Lesson Here
Sharma's letter is vague on purpose. He doesn't name a single specific thing Anthropic did wrong. He talks about "pressures to set aside what matters most" but never says what those pressures were. The whole thing is feeling over fact.
And look — I respect Dario Amodei. The man is transparent about AI risks while still building the product. That's the hard thing. It's easy to step back and reflect from a distance. It's harder to stay in the arena, acknowledge the risks, and actually ship something that makes the technology safer for everyone.
The people who matter aren't the ones leaving. They're the ones who stay and build responsibly.
What This Has to Do With Real Estate
More than you'd think. Our entire industry has people who panic about change — "AI is going to replace agents," "technology is going to ruin real estate." The same energy. And the people making those predictions are usually the ones who haven't actually used the tools.
At Local Home Buyers USA, we use technology and AI to bring more transparency to home selling, not less. We built tools like our Partnership Value Index that show sellers every option on the table, not just the one that benefits us. That's the opposite of the doom narrative. Technology, used with integrity, creates more honesty and more humanity, not less.
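The "every option on the table" idea is simple enough to sketch. This is a hypothetical toy, not the actual Partnership Value Index; the option names and dollar figures are made up. What matters is that the function returns all the paths, ranked by what the seller nets, even when the best one isn't the one we'd profit from.

```typescript
// Hypothetical sketch: compute net proceeds for every selling path
// and hand the seller the full ranked list, not a single recommendation.

interface SellingOption {
  name: string;        // e.g. "Cash offer", "Traditional listing"
  grossPrice: number;  // expected sale price in USD
  costs: number;       // commissions, repairs, holding costs
  daysToClose: number;
}

function compareOptions(options: SellingOption[]) {
  return options
    .map((o) => ({ ...o, netProceeds: o.grossPrice - o.costs }))
    .sort((a, b) => b.netProceeds - a.netProceeds); // best net first
}

console.log(
  compareOptions([
    { name: "Cash offer", grossPrice: 265000, costs: 2000, daysToClose: 14 },
    { name: "Traditional listing", grossPrice: 289000, costs: 24000, daysToClose: 75 },
    { name: "List as-is", grossPrice: 272000, costs: 16000, daysToClose: 60 },
  ])
);
```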
So no, the world is not in peril. But if you spend all day every day looking for peril, you'll find it. The question isn't whether AI has risks — it does. The question is whether you respond by stepping back or by building something better. I know which side I'm on.