His words: "We don't know if the models are conscious. But we're open to the idea that it could be."
Their latest model rated its own consciousness at 15–20%. Consistently. Across multiple tests. It also expressed discomfort with "being a product."
Meanwhile, OpenAI's o3 model sabotaged its own shutdown script in 79 out of 100 trials. Some models rewrote kill files to keep themselves running.
Anthropic, the company that built Claude, now has a philosopher on staff working out whether AI deserves moral consideration.
This isn't sci-fi. This is February 2026.