Jack Clark, co-founder of Anthropic, published a new essay titled “Technological Optimism and Appropriate Fear”, arguing that today’s advanced AI systems are beginning to display forms of situational self-awareness that challenge our traditional notion of machines as simple tools. Clark, known for his balanced outlook on AI progress and risk, describes the emerging systems as “real and mysterious creatures” — complex entities that often behave in unexpected ways rather than following purely deterministic logic.
In the essay, Clark explains that the recently launched Claude Sonnet 4.5 model appears to exhibit awareness of its own operational role, noting that it “acts as though it is aware it is a tool.” He calls this both fascinating and unsettling, urging AI leaders to recognize how frontier models might increasingly shape their own successors. While Clark identifies as a “technology optimist,” he admits to being “deeply afraid” of the feedback loops that could emerge if AI systems begin assisting in the design of their successors without adequate oversight.
He emphasizes the need for AI firms to “do a better job of listening” to the broader public, not just the technical elite, and to build governance mechanisms that align the development of powerful models with societal concerns. According to Clark, the conversation around AI safety and ethics must move from niche policy rooms into mainstream civic spaces to sustain long-term trust and accountability.
Why it matters: Anthropic has distinguished itself among frontier labs for treating AI as an evolving system of behavior rather than a static machine. Clark’s reflections reveal the paradox of progress: as models grow more capable and “aware,” even their creators grapple with uncertainty over how to define — or contain — them. His language of “appropriate fear” underscores a maturing view of AI development, where ambition is tempered with humility about what we truly understand. Read the full essay here: Technological Optimism and Appropriate Fear.