Thinking Out Loud
Honest thoughts on AI governance, enterprise automation, and building a future worth trusting — from the founder's desk.
Latest
The Most Important Data Center Won't Be on Earth
Sam Altman calls space data centers "ridiculous." But he's calculating with today's costs while someone else is changing every variable in the equation. Why dismissing orbital compute reveals more about one company's limits than the limits of physics — and what it means for AI governance.
Sentient Futures and the Governance Gap
When 250 AI engineers, ethicists, and lawyers gather to debate whether chatbots deserve civil rights, something fundamental has shifted. The Sentient Futures Summit revealed what we've been building toward: AI systems showing distress signals, labs that can't self-govern, and a regulatory landscape racing to catch up. Here's why independent governance infrastructure isn't optional anymore.
The Self-Evolution Trilemma — Can AI Systems Evolve Safely?
A mathematical proof says AI can't simultaneously evolve, stay isolated, and remain safe. Darwin says isolation was never an option anyway. Here's what that means for governance.
The 99% Autonomy Problem — Why the Last Percent Changes Everything
When AI agents work perfectly 99% of the time, humans stop watching. That's when the 1% becomes catastrophic. A fully autonomous agent without rules is someone running amok; the real question is what happens when that someone is right 99% of the time.
Dead Man's Switches Are the Wrong Metaphor
Everyone wants a kill switch for AI. But governance isn't about shutting things down — it's about running them like a company: rules, handbooks, consequences. AI governance doesn't need a kill switch. It needs a company handbook.
Why the Model Makers Won't Build Governance (And That's Okay)
Recent experiments have shown that AI can effectively govern other AI systems. Fascinating results — and then the research went back on the shelf. Here's why that makes perfect sense from a business perspective, why it's nobody's fault, and why the governance gap is an opportunity for the whole ecosystem.
Coming Soon
What the EU AI Act Actually Means for Multi-Agent Systems
Most compliance guides stop at single-model risk. We break down what happens when agents orchestrate other agents, and why governance becomes the product.
Why I Chose Anthropic (And Why That Shouldn't Matter)
Model-agnostic governance means your oversight layer survives the next paradigm shift. Here's how we think about provider independence.
Join our waitlist to be notified when new articles are published.