
OpenAI Partners with Pentagon for AI Deployment Amid Disputes with Anthropic

In a significant move, OpenAI has reached an agreement with the U.S. Department of Defense to deploy its AI models within a classified network. The partnership comes as the Pentagon distances itself from Anthropic over differing views on AI safety and military applications. CEO Sam Altman emphasized the importance of safety and human accountability in AI use, saying the principles of non-surveillance and responsible force application remain intact. The Pentagon's decision reflects a push for comprehensive military use of AI tools amid ongoing debates about the implications of autonomous weapons and surveillance. The collaboration also includes technical safeguards intended to ensure the safe operation of AI systems.
 

OpenAI's New Collaboration with the Pentagon


New Delhi: The U.S. Department of Defense has opted to deploy OpenAI's artificial intelligence systems within its classified network. The decision comes as the department distances itself from Anthropic over differing views on AI safety and military applications, OpenAI CEO Sam Altman confirmed on Saturday.


Altman announced that an agreement has been finalized with the Pentagon to proceed with the deployment of their AI models.


In a message shared on X, he emphasized that OpenAI's negotiations with the Department of Defense demonstrated a strong commitment to safety and a mutual objective of achieving optimal results.


Referring to the department as the 'Department of War' (DoW), Altman reiterated OpenAI's dedication to benefiting humanity while recognizing the complexities and dangers of the current global landscape.


"We have reached an agreement with the Department of War to integrate our models into their classified network. Throughout our discussions, the DoW exhibited a profound respect for safety and a willingness to collaborate for the best outcomes," Altman stated.


He further highlighted that OpenAI prioritizes AI safety and equitable benefit distribution. Two of the company's fundamental safety tenets include prohibiting domestic mass surveillance and ensuring human accountability in the use of force, particularly concerning autonomous weapon systems.


"AI safety and equitable benefit distribution are central to our mission. Our key safety principles prohibit domestic mass surveillance and affirm human responsibility for the use of force, including in autonomous weapon systems," he noted.


Altman assured that these principles remain intact in the agreement with the Pentagon, which aligns with the department's laws and policies.


"The DoW endorses these principles, incorporates them into law and policy, and they are part of our agreement," he added.


"We are committed to serving humanity to the best of our abilities. The world is indeed a complex, messy, and at times perilous place," he remarked.


As part of this collaboration, OpenAI will implement technical safeguards to ensure the proper functioning of its models.


Additionally, the company will assign field deployment engineers to oversee the models and ensure their safe operation. Altman noted that the models will be used only on secure cloud networks.


This decision by the Pentagon arises amidst a public disagreement with Anthropic, the creator of the Claude AI model.


Reports indicate that the Defense Department advocated for the comprehensive military application of AI tools for all lawful purposes, including sensitive areas like weapons development, intelligence gathering, and battlefield operations.


Conversely, Anthropic reportedly sought to impose restrictions, particularly concerning fully autonomous weapons and the mass surveillance of American citizens.