Despite Trump’s Ban, the US Uses AI in Iran Airstrikes

The Wall Street Journal (WSJ) and Axios, among other American media outlets, reported on the 1st that the United States recently used the artificial intelligence (AI) model “Claude” in its airstrikes against Iran. This came just hours after President Donald Trump ordered all federal agencies to halt use of technology from Anthropic, the developer of Claude. Critics say the episode demonstrates how deeply AI tools like Claude are already intertwined with military operations, and it is seen as the reason President Trump allowed a six-month phase-out period rather than an immediate cutoff.

The WSJ reported that officials confirmed that the US Central Command (CENTCOM), as well as several other commands around the world, are using Anthropic’s Claude. According to the report, CENTCOM is using Claude for intelligence assessments, target identification, and battlefield simulations, even amid heightened tensions between Anthropic and the Pentagon. Claude is currently the only AI model cleared for use in classified US military systems, and the US also used it in the arrest of Venezuelan President Nicolas Maduro in January.

However, the Pentagon and Anthropic have been at odds over how Claude may be used. The Pentagon has demanded that the military be given unrestricted use of the AI, but Anthropic has maintained that its technology must not be used for mass surveillance or the development of fully autonomous lethal weapons. In response, President Trump ordered federal agencies to halt the use of Anthropic’s technology.

On the 27th of last month, he called Anthropic a “radical left-wing woke company,” saying, “Their selfishness has endangered the lives of the American people and jeopardized our military and national security.” Still, he allowed a six-month phase-out period because Anthropic’s products are currently in use at the Department of Defense and elsewhere. Meanwhile, OpenAI, the Anthropic competitor that filled the vacancy left by Claude’s withdrawal, claimed it had stronger safety measures than Anthropic when it signed a contract to provide AI models to the US Department of Defense.

Whereas Anthropic demanded only that its models not be used for large-scale domestic surveillance or autonomous weapons, OpenAI went further, pledging that its models would not be used for high-risk automated decisions in areas such as social credit scoring. OpenAI also emphasized that its models are deployed in a cloud-based format rather than an “edge” format running solely on Department of Defense internal devices, allowing security-cleared personnel to continuously monitor safety-related requirements. The company said it “does not know” why Anthropic, which had similar requirements, failed to reach an agreement with the Department of Defense, and expressed hope that other AI companies would consider a similar contracting approach.

Previously, amid the conflict between the Department of Defense and Anthropic, OpenAI CEO Sam Altman had expressed sympathy for Anthropic, telling employees that he would negotiate with the Department of Defense while maintaining principles similar to Anthropic’s and pave the way for other AI companies to follow. Altman stated on X (formerly Twitter) that the negotiations were “clearly rushed and not pretty.” However, he emphasized, “If our judgment is correct and this eases the conflict between the Department of Defense and industry, we will be seen as a genius and a company that has taken a lot of pain for the industry.”

Meanwhile, Claude’s popularity has risen outside the US government. Since the Trump administration’s decision to drop Claude, it has surpassed ChatGPT to become the number one free app on the Apple App Store for the first time. An Anthropic spokesperson told CNBC that new subscriptions this week reached an all-time high, with free users up more than 60% since January and paid subscribers more than doubling since the beginning of the year.