OpenAI has quietly lifted its ban on military use of ChatGPT and its other AI tools. The reversal comes as OpenAI begins collaborating with the U.S. Department of Defense on AI projects, including open-source cybersecurity tools.
OpenAI Reverses Military Ban on ChatGPT
Until at least Wednesday, OpenAI's policies page explicitly prohibited the use of its models for activities carrying a high risk of physical harm, such as weapons development or military and warfare applications.
OpenAI has now removed the explicit mention of the military from that policy. While the specific reference is gone, the policy still states that users may not use the service to harm themselves or others, including by developing or using weapons.
Anna Makanju, OpenAI's vice president of global affairs, explained: "Since we used to have a broad ban on military use, many people believed it would restrict numerous cases that align with our desired outcomes in the world."
The change follows years of controversy over tech companies building technology for the military, with much of the pushback coming from tech workers involved in AI. Employees at nearly every major tech company with military contracts have voiced objections. Most notably, thousands of Google employees protested Project Maven, a Pentagon initiative that would have used Google AI to analyze drone surveillance footage.
Microsoft workers objected to a $480 million Army contract to supply soldiers with augmented-reality headsets. And more than 1,500 Amazon and Google employees signed a letter protesting Project Nimbus, a multiyear, $1.2 billion contract under which the two companies provide cloud computing services, AI tools, and data centers to the Israeli government and military.
In short, OpenAI's decision to open its tools to military use, tied to its new work with the U.S. Department of Defense, has raised concerns about potential harm even though the policy's general prohibition on weapons remains. The shift revives an industry-wide debate over AI in military projects, echoing the earlier employee protests at Google, Microsoft, and Amazon.