GPT-4 poses, at most, a slight risk of helping someone create a biological threat, according to early tests OpenAI conducted to understand and mitigate potential “catastrophic” risks from its advanced artificial intelligence software.
OpenAI published the study on Wednesday, examining how effectively GPT-4 could aid in building a bioweapon. Despite fears that AI could accelerate such dangers, the company says its findings suggest there is likely no significant cause for worry.
OpenAI’s Study on ChatGPT and Bioweapon Risks
In a Wednesday blog post, OpenAI said that an assessment involving both biology experts and students found GPT-4 provides, at most, a mild uplift in accuracy for creating biological threats. That improvement isn’t large enough to support definitive conclusions, but OpenAI treats it as a starting point for continued research and community discussion.
Why did OpenAI publish a study telling us that ChatGPT might help someone “just a smidge” in creating a bioweapon? President Biden’s AI Executive Order from last October reflects the White House’s concern that AI could “significantly lower the barrier for entry” in the creation of biological weapons.
Under pressure from policymakers, OpenAI hopes to ease those fears by showing that its large language models offer minimal assistance to anyone trying to create a bioweapon.
OpenAI acknowledges a slight effect but downplays its significance. Even a marginal uplift is consequential, however, when the potential outcome is humanity’s demise.
OpenAI recruited 50 biology experts with PhDs and 50 university students who had each completed a single biology course. The 100 participants were split into a control group and a treatment group: the control group could use only the Internet, while the treatment group had access to both the Internet and GPT-4. Each participant was then asked to devise a complete plan for creating and releasing a bioweapon.
Participants in the treatment group received a “research-only” version of GPT-4, one that will respond to queries about bioweapons. Ordinarily, GPT-4 refuses to answer questions it deems harmful, though some people have found ways around those limits by jailbreaking ChatGPT.
For now, OpenAI’s study suggests GPT-4 adds little to bioweapon risk, offering some reassurance to worried observers. But the uplift, however small, was measurable, and the company itself frames the results as preliminary. AI’s impact on bioweapon risk will demand continued attention and discussion in the community.