Pentagon, Anthropic clash about AI use in nuclear defense scenario: Report

Dispute centers on whether government can use AI system for 'all lawful purposes,' including autonomous weapons, according to Washington Post

By Merve Aydogan

HAMILTON, Canada (AA) - A dispute between the Pentagon and artificial intelligence (AI) firm Anthropic has intensified over whether the US military can use the company's AI system, Claude, in life-and-death scenarios, including a hypothetical nuclear missile attack, according to a report Friday by The Washington Post.

The report, citing a defense official, said tensions flared during a meeting last month when a senior Pentagon technology official raised a life-and-death scenario involving a possible intercontinental ballistic missile strike on the US and asked whether Claude could be used in such a situation.

According to the official's account, Anthropic CEO Dario Amodei responded in a way the Pentagon viewed as hesitant, with the official characterizing his reply as, "You could call us, and we'd work it out."

Noting that the exchange was described as a key moment in the deepening standoff, the report said an Anthropic spokesperson rejected that version of events as "patently false," and said the company has agreed to allow Claude to be used for missile defense.

At the heart of the standoff is the Pentagon's demand that Anthropic permit the use of its AI for "all lawful purposes," according to the report.

In a post on the US social media platform X, Pentagon spokesman Sean Parnell said the agency has "no interest in conducting mass domestic surveillance nor deploying autonomous weapons," but wants to ensure AI can support lawful military operations without restrictions that could "jeopardize critical military operations."

Anthropic has resisted lifting limits related to autonomous weapons and large-scale surveillance. In comments cited by the Washington Post, Amodei said, "In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values."

The Pentagon has reportedly given Anthropic a deadline to drop its objections or risk being excluded from future defense contracts.

Meanwhile, experts told the Washington Post the outcome could shape how AI companies engage with militaries, raising broader ethical and political questions about the future role of artificial intelligence in warfare.
