Yann LeCun, Meta’s chief AI scientist, has dismissed the idea of AI destroying humanity as absurd. He argues that the fear comes from science fiction, films like “The Terminator,” which condition us to expect that superintelligent AI would turn against us.
In reality, he says, there is no good reason to believe intelligent machines would want to challenge humans: being intelligent does not imply a desire to take control, a point that holds for humans as well.
He also said, “If it were true that the smartest humans wanted to dominate others, then Albert Einstein and other scientists would have been both rich and powerful, and they were neither.”
The rapid rise of generative AI tools such as ChatGPT has recently heightened fears about the potential dangers of superintelligent AI.
Debate Over AI’s Potential Risks
In May, OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei signed a public statement warning that AI could pose an extinction-level risk to humanity on a par with nuclear war.
A heated debate continues over how close today’s AI models are to “Artificial General Intelligence” (AGI). Earlier this year, Microsoft researchers published a study suggesting that OpenAI’s GPT-4 displayed AGI-like traits in how it approached problems, at times resembling human reasoning.

LeCun’s Perspective on AI Progress
Speaking to the Financial Times, however, LeCun argued that many AI companies have been consistently over-optimistic about how close current generative models are to AGI, and that these inflated expectations have in turn exaggerated fears of AI-driven extinction.
“They [the AI models] simply lack the ability to grasp how the world functions. They can’t plan or engage in genuine reasoning,” he explained.
“The conversation about the risk of AI causing extinction is premature until we have a system that can learn as well as a cat, which we don’t have at the moment,” he added.
LeCun also said that reaching human-level intelligence would require “several significant breakthroughs in understanding.” Even then, he suggested, such systems need not be a threat, because they could be designed with a “moral character” that keeps them from behaving harmfully.
Conclusion
Yann LeCun, Meta’s chief AI scientist, regards the idea of AI wiping out humanity as science fiction. Intelligence, he argues, does not come with a drive to dominate, in machines or in humans. While the debate continues over AI’s risks and how close we are to AGI, LeCun maintains that today’s systems are far from human-level intelligence, and that the priority should be making AI ethical and safe as it develops.