You Can’t Have It Both Ways

There’s a highlighted, dog-eared photo on my phone from Brené Brown’s The Gifts of Imperfection — taking up space in my cloud from back when I used actual books instead of reading from screens:
"The real questions for parents should be: 'Are you engaged? Are you paying attention?' If so, plan to make lots of mistakes and bad decisions. Imperfect parenting moments turn into gifts as our children watch us try to figure out what went wrong and how we can do better next time."
She was talking about raising kids. But today—on the day Anthropic's deadline with the Pentagon expires—this feels like the leadership question of the AI era.
If you haven't been following: the Department of Defense gave Anthropic until 5:01 PM today to agree to let the military use its AI model Claude for "all lawful purposes" without restriction. Anthropic said no. Not no to defense work—they signed a $200 million contract last year and were the first AI company on classified networks. But no to removing two specific guardrails: no fully autonomous weapons, and no mass surveillance of Americans.
The Pentagon's response? Threats to invoke the Defense Production Act. A potential "supply chain risk" designation—something typically reserved for foreign adversaries like Huawei. A senior defense official publicly called Anthropic's CEO a "liar with a God-complex."
And Anthropic held the line.
Now, I'm not saying the DOD is wrong. Their argument that the military should operate under the law, not a company's usage policy, isn't unreasonable. Maybe they're right. Who's really to say. But giving a company three days to make a decision with this level of consequence for national security, civil liberties, and the future of AI in defense? That's not leadership either. That's a pressure play. Principled leadership—on both sides of a negotiation—requires the space to actually be principled.
But here's where it gets uncomfortable for the rest of us.
Because while this plays out at the highest levels of AI and national security, most of the public discourse around technology is—and I say this with love—cosplay.
People swear off AI because "it's killing the planet" and text you that take from a phone tracking their location 24/7 with privacy settings they've never once opened. People rage about data privacy but were back on Instagram the morning after Cambridge Analytica broke. People want ethical AI but also want it fast, cheap, and frictionless.
You cannot have it both ways.
I used to say the same thing after the BP oil spill (former client, by the way—I've been in these rooms). Americans had a choice: accept that we drill domestically and that accidents sometimes happen, keep sending young men and women overseas to fight and die in wars over oil, or park the SUV and ride a bike. What you can't do is refuse to choose and then be furious when reality picks for you.
The same logic applies to AI in 2026.
We are living through the most consequential technology shift in human history, and most of the conversation around it is selective outrage dressed up as moral conviction. Red vs. blue theatre. Sweeping declarations that require zero actual sacrifice. Bread and circuses for the algorithm while the real decisions—the ones that shape our civil liberties, our jobs, our futures—get made in rooms most people don't know exist.
If you've watched Battlestar Galactica (and if you haven't, stop reading and go fix that), this tension is the entire show. Humans build powerful technology. Technology becomes powerful enough to challenge its creators. And then the real question isn't about the technology at all—it's about us. What are we willing to sacrifice for safety? When does "protecting humanity" become the excuse for dismantling the things that make humanity worth protecting? Commander Adama would have gone down with Anthropic's ship today, and he would have been right.
The post-9/11 era already taught us what happens when "all lawful purposes" becomes the standard and the definition of "lawful" keeps quietly expanding. We broadly regretted that trade-off. The cultural memory is still there—even if the muscle memory isn't.
Principled leadership isn't about getting it right every time. It's about staying engaged when it gets messy. It's about holding the line when it costs you something—not just when it's convenient or gets likes.
The messy middle is where the actual work happens. It's not black and white. It never was.
So the question Brené Brown is really asking—the one that applies to parenting, to leadership, to how we show up in the age of AI—is pretty simple:
Are you engaged? Or are you just performing?
Thank you for reading Soul Meet System. If this sparked something, share it, or subscribe below.
We don’t write to fill inboxes. We write to clear the noise.