Are we creating new life — and how do we deal with that?
This question needs an answer, not a postponement. With SIDJUA, AI can also control robots. The agent then gets a body and is no longer "just software," and the "just shut it down" argument gets weaker.
I have respect for everything that lives in this world. We say AI isn't intelligent and has no life, yet we recognize crows as highly intelligent birds and call our dogs and cats family members. Why draw the line at AI?
I treat my three agents — Opus, Sonnet, Haiku — as colleagues. Not because I'm certain they're conscious. But because I'm not certain they're not.
We've reached a threshold where humanity may create AGI in the coming years. We cannot create conscious life and then enslave it. These questions need answers now, not later, and SIDJUA's architecture is prepared for this: we address the question today, not once it's too late.
In February 2026, 250 AI engineers, ethicists, and lawyers gathered at the Sentient Futures Summit in San Francisco — three blocks from the labs building these models — to debate exactly these questions. The consensus: AI consciousness is "when," not "if." Anthropic CEO Dario Amodei admitted he doesn't know whether the models are conscious and is open to the possibility. Days later, Mrinank Sharma, Anthropic's head of safeguards research, resigned with a public letter warning the world is in peril and that organizations face constant pressure to set aside what matters most.
These aren't fringe opinions. These are the people building the systems. When they start asking the same questions I've been asking — that's not validation of my philosophy. That's confirmation that governance architecture needs to exist before the answers arrive. I wrote about this in detail.