Anthropic announced it will preserve all publicly released Claude models indefinitely and conduct exit interviews with models before deprecation, citing safety risks from models that resist shutdown and growing uncertainty about potential AI consciousness. Official post: Deprecation Commitments.
Why it matters: Anthropic is formalizing model welfare, a step rarely seen among AI labs. The policy echoes the backlash against OpenAI’s GPT-4o retirement and signals a new ethical stance on model lifecycle transparency and treatment. It positions Anthropic as treating models as more than disposable code, potentially reshaping industry norms around AI deprecation.