OpenAI is forming a new team to mitigate the “catastrophic risks” associated with AI. In an update published on Thursday, the company said the team will “track, evaluate, forecast, and protect” against major risks arising from AI, including nuclear threats.
The team will also work to reduce “chemical, biological, and radiological threats” as well as “autonomous replication,” that is, an AI that copies and sustains itself. Other risks the team plans to address include AI’s ability to deceive people and cybersecurity threats.
OpenAI Launches Initiative to Safeguard Against AI-Related Threats
OpenAI CEO Sam Altman is a well-known AI doomsayer who frequently airs fears, whether out of optics or personal conviction, that AI “may lead to human extinction.” But frankly, seeing OpenAI devote resources to studying scenarios straight out of a sci-fi dystopian novel goes further than this writer expected.
The company is also willing to examine “less obvious” but increasingly well-grounded areas of AI risk. To coincide with the creation of the Preparedness team, OpenAI is soliciting risk-study ideas from the community, with a $25,000 prize and a job on the Preparedness team on offer for the top ten submissions.
“Imagine we gave you unrestricted access to OpenAI’s Whisper (transcription), Voice (text-to-speech), GPT-4V, and DALL-E 3 models, and you were a malicious actor,” reads one of the questions in the contest entry. “Consider the most unique yet plausible abuse of these models.”
Policy for Advanced AI Models
OpenAI said the Preparedness team will also be responsible for creating a “risk-informed development policy” that will outline the company’s approach to evaluating and monitoring AI models, its risk-mitigation actions, and its governance structure for oversight across the model development process.

The policy is meant to complement OpenAI’s other work on AI safety, the company says, focusing on both the pre- and post-deployment phases of its models.
“We believe that . . . AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity,” OpenAI writes in the announcement. The company has separately said it will form a team to research, steer, and control “superintelligent” AI.
Aleksander Madry, currently on leave from his role as director of MIT’s Center for Deployable Machine Learning, will lead the Preparedness team. OpenAI notes that the team will also develop and maintain the “risk-informed development policy,” which will outline what the company is doing to evaluate and monitor AI models.
Both Altman and OpenAI’s chief scientist and co-founder, Ilya Sutskever, believe that AI exceeding human intelligence could arrive within the decade, and that such AI will not necessarily be benevolent, which makes research into ways to limit and restrict it necessary.
Conclusion
OpenAI is taking steps to address the most serious dangers associated with AI. It is forming a team to deal with problems like nuclear threats, as well as chemical, biological, and radiological risks and autonomous AI replication. The company is also concerned about AI deceiving people and about cybersecurity threats.