OpenAI CEO Sam Altman, head of one of the most popular artificial intelligence platforms, is warning that AI systems could eventually be capable of “superhuman persuasion.”
He added that such capabilities could “lead to some very strange outcomes.”
Altman’s comments arrive amid growing concern over the capabilities of rapidly advancing AI technology, with some speculating that AI could eventually surpass human intelligence.
Altman didn’t explain what he meant by “strange outcomes,” and some experts question whether these fears are realistic.
Christopher Alexander, the chief analytics officer of Pioneer Development Group, said, “There is a threat for persuasive AI, but not how people think. AI will not uncover some subliminal coded message to turn people into mindless zombies.”
Power and Limits of AI Persuasion in Modern Society
Machine learning and pattern recognition will make AI highly effective at determining which persuasive content works, how often to deploy it, and when. This is already happening in digital advertising, and more advanced AI will only get better at it.
As for turning people into “mindless zombies,” Alexander noted that the technology to achieve this is already widely available. Social media, he believes, is quite effective at it and hard to surpass.
Aiden Buzzetti, who leads the Bull Moose Project, raised doubts about how close AI is to having “superhuman persuasion” skills. He pointed out that current platforms like ChatGPT still struggle with providing “accurate information instead of making up books, articles, and movies just to give an answer that ‘seems correct.'”
Buzzetti explained, “It’s not much different from a very persuasive human, except that some people might trust technology’s implicit nature more. Right now, there’s no need to worry about it. The real question is when AI will match or surpass human intelligence accurately. There’s nothing superhuman about it.”
Experts’ Views on AI Persuasion and Future Scenarios
On the other hand, Phil Siegel, the founder of the Center for Advanced Preparedness and Threat Response Simulation (CAPTRS), argued that we have already reached a point where some AI technology can achieve such persuasion.
“If a bad actor coded an AI algorithm to misuse data or make incorrect conclusions, I think it could persuade that it was correct,” Siegel stated to Fox News Digital. “But the solution is similar to how we should approach human experts – respect their knowledge but don’t unquestioningly accept it.”
Siegel pointed out that it’s possible to argue that human experts sometimes persuade people of things that turn out to be untrue, which is also true for AI. “It’s essentially the same problem,” he added. “The solution is the same too, which is to question and not blindly accept answers from human or machine experts without rigorously testing them.”
Jon Schweppe, the policy director of the American Principles Project, meanwhile, said such concerns are valid and even joked that we might eventually see robots running for Congress.
“It stands to reason that as AI learns how to simulate human behavior, it also learns how to dupe susceptible people and perpetrate fraud,” Schweppe said. “Give it a few years, and we might have AI androids running for Congress. They’ll fit in perfectly in Washington.”
In short: OpenAI CEO Sam Altman warns that AI may soon excel at persuasion, and concern is growing as the technology advances, though some experts doubt the scale of the threat. The potential for AI to manipulate people exists, but the remedy resembles how we treat human experts: respect their knowledge without accepting their claims unquestioningly. AI’s future role in persuasion remains to be seen.