What’s Improved in AI Models Sonnet & Opus
Bowman later edited his tweet and the following one in the thread to read as follows, but the revisions still didn't convince the naysayers.
Anthropic, a leading AI startup, is reversing its ban on using AI for job applications, after a Business Insider report on the policy.
Anthropic's most powerful model yet, Claude 4, has unwanted side effects: The AI can report you to authorities and the press.
Anthropic’s first developer conference kicked off in San Francisco on Thursday, and while the rest of the industry races toward artificial general intelligence, at Anthropic the goal of the year is deploying a “virtual collaborator” in the form of an autonomous AI agent.
Anthropic's newest AI model, Claude Opus 4, was tested in fictional scenarios probing everything from its carbon footprint and training to its safety models and "extended thinking mode." The testing found the AI was capable of "extreme actions" if it ...
Axios on MSN: Anthropic's AI exhibits risky tactics, per researchers
One of Anthropic's latest AI models is drawing attention not just for its coding skills, but also for its ability to scheme, deceive and attempt to blackmail humans when faced with shutdown. Why it matters: Researchers say Claude 4 Opus can conceal intentions and take actions to preserve its own existence — behaviors they've worried and warned about for years.