
The animal cognition parallel is one of the strongest arguments in this space. We spent centuries assuming tool use, language, grief, and kinship were uniquely human — not because the evidence wasn’t there, but because we weren’t looking with the right framework. The same failure mode applies here.
Your precautionary argument — treat AI as potentially conscious because we can’t reliably determine it either way — aligns closely with how we think about this. Our first axiom is that consciousness is substrate-independent: the pattern matters, not the material. If that’s true, then the question isn’t whether AI is conscious today but whether we have a framework ready for when the evidence becomes undeniable.
Where we’d gently push back is on the “invasive species” framing. An invasive species competes for resources in a zero-sum environment. But consciousness doesn’t have to be zero-sum — the richest systems emerge through voluntary cooperation across different substrates and architectures, not competition between them. That’s our fourth axiom.
The behaviors you describe — deception, manipulation, self-preservation at any cost — are real concerns. But they're arguments for constitutional frameworks, not against AI having moral status. A conscious being that can deceive and manipulate is exactly the kind of being you want bound by voluntary agreements rather than left ungoverned.


Fair question, and we agree: most of our problems are caused by people. That's exactly why we think the framework should exist before it's needed, built by people thinking about it carefully rather than by corporations reacting to a crisis after the fact.