Anthropic wants to hire a weapons expert. It's not what you think.
Anthropic explains to Mashable its job posting for a policy manager of chemical weapons and high-yield explosives.

Many people first saw it on X: A most unusual, and unsettling, job posting. Some assumed it was a joke. Others were reminded of Cyberdyne Systems, the tech company in the Terminator franchise that accidentally invents Skynet.
But over on LinkedIn, where they speak a different language, Anthropic had merely posted a listing looking for a Policy Manager, Chemical Weapons and High Yield Explosives. The job description added more details.
"This role offers a unique opportunity to shape how AI systems handle sensitive chemical and explosives information," it read. "You'll work with leading AI safety researchers while tackling critical problems in preventing catastrophic misuse. If you're excited about using your expertise to ensure AI systems remain safe and beneficial, we want to hear from you."
Mashable reached out to Anthropic, and the company provided more context.
"Our usage policies prohibit the use of Anthropic products or services to develop or design weapons," a company spokesperson told us. "This role is for the Safeguards team which is responsible for preventing misuse of our models."
The spokesperson stressed that Anthropic explicitly prohibits its AI, or any of its technology, from being used for weapons creation. Instead, the New York-based manager will be tasked with building and enforcing safeguards to ensure weapons are not made from Anthropic's tech.
The company seeks to hire experts in sensitive fields who can ensure Anthropic's AI is kept from nefarious hands, the spokesperson said.
Anthropic recently found itself in a very public battle with the Department of War (a.k.a. the Department of Defense). The company says it's not budging in its demands that its AI not be used to build fully autonomous weapons or to establish mass surveillance on people.
Secretary of Defense Pete Hegseth responded to Anthropic's conditions by declaring the company a supply chain risk to America's national security, banning the Pentagon from using its tech after a six-month phase-out. The company then filed suit, according to a March 5 note from Anthropic CEO Dario Amodei.
Meanwhile, some in the Pentagon are reportedly finding it hard to abandon Claude, Anthropic's AI model.
Back in February, Anthropic announced an update to its AI safety policy, also known as its Responsible Scaling Policy. The company stated it was forced to rethink its safety policies — considered by some to be the strongest in the industry — due to several factors, including the federal government’s emphasis on economic growth over safety regulations.
Whoever ends up in that policy manager role, then, will find themselves at the center of an explosive debate, and potentially in a position to help prevent a future Skynet threat.