AMD wants people to know that Nvidia isn’t the only company selling AI chips. The company has introduced new accelerators and processors designed specifically for running large language models (LLMs).
AMD unveiled the Instinct MI300X accelerator and the Instinct MI300A accelerated processing unit (APU), both aimed at training and running LLMs. According to AMD, the MI300X offers 1.5 times the memory capacity of the previous-generation MI250X. Both new products improve on their predecessors’ memory capacity and energy efficiency.
AMD’s CEO Unveils the Power of MI300X
Lisa Su, the CEO of AMD, mentioned that as LLMs become larger and more complex, they require a lot of memory and computing power. She emphasized that the availability of GPUs (graphics processing units) is crucial for the adoption of AI.
In a presentation, Su called the MI300X the most powerful accelerator in the world. She asserted that it is comparable to Nvidia’s H100 chips for training LLMs but excels at inference, performing 1.4 times better than the H100 on Meta’s Llama 2, a 70-billion-parameter model.
AMD collaborated with Microsoft to bring the MI300X to its Azure virtual machines. Microsoft CTO Kevin Scott, who joined Su on stage, announced that the Azure ND MI300X virtual machines first revealed in November are now available in preview. Meta also said it plans to deploy MI300 processors in its data centers.
The MI300A will be the driving force behind the El Capitan supercomputer, constructed by Hewlett Packard Enterprise at the Lawrence Livermore National Laboratory. El Capitan stands out as one of the most potent supercomputers, with an anticipated performance exceeding two exaflops.
AMD says the MI300A APU “is currently in production and is being integrated into data centers.”
Details regarding pricing are not yet available.
Su provided a sneak peek of the MI300 chips at the Code Conference, expressing AMD’s enthusiasm for reaching a broader audience of chip users, extending beyond cloud providers to include enterprises and startups.
AMD Reveals Ryzen 8040
In addition, AMD unveiled its newest Ryzen processor, the Ryzen 8040, designed to incorporate enhanced AI capabilities into mobile devices. The 8040 series boasts 1.6 times greater AI processing performance compared to earlier models and features integrated neural processing units (NPUs).
According to the company, the Ryzen 8040 won’t be confined to AI processing alone. They assert that video editing will be 65 percent faster, and gaming will see a 77 percent speed boost compared to rival products such as Intel’s chips.
AMD anticipates that manufacturers like Acer, Asus, Dell, HP, Lenovo, and Razer will launch products featuring the Ryzen 8040 chips in the initial quarter of 2024.
Su mentioned that the next generation of NPUs, arriving in the company’s Strix Point chips, is scheduled for release in 2024.
AMD has also made the Ryzen AI Software Platform widely available. The platform lets developers building AI models on Ryzen-powered laptops move those models onto the neural processing unit (NPU), freeing up the CPU and reducing power consumption. It supports foundational models such as the speech-recognition model Whisper and LLMs like Llama 2.
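In practice, moving a model between the CPU and an accelerator like an NPU usually comes down to selecting an execution provider, with a fallback to the CPU when the accelerator is unavailable. The sketch below illustrates that selection pattern in plain Python; the provider names are hypothetical placeholders, not the actual Ryzen AI SDK API.

```python
# Illustrative sketch of execution-provider fallback, a pattern common in
# runtimes that target accelerators. Provider names here are hypothetical,
# not AMD's real API.

def pick_provider(available, preferred=("NPUExecutionProvider", "CPUExecutionProvider")):
    """Return the first preferred provider that the machine reports as available."""
    for provider in preferred:
        if provider in available:
            return provider
    raise RuntimeError("no supported execution provider found")

# On a laptop with an NPU, the model would be offloaded there:
print(pick_provider(["NPUExecutionProvider", "CPUExecutionProvider"]))  # NPUExecutionProvider

# On older hardware, the same code falls back to the CPU:
print(pick_provider(["CPUExecutionProvider"]))  # CPUExecutionProvider
```

The design point is that application code stays the same across machines; only the discovered provider list changes, which is what lets a platform like this lighten the CPU's load when an NPU is present.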
In the race to power AI models, and capitalizing on the current excitement surrounding the technology, companies like AMD, Nvidia, and Intel are fiercely competing in what can be described as an AI chip arms race. Up to this point, Nvidia has claimed the largest market share, particularly with its highly sought-after H100 GPUs used for training models like OpenAI’s GPT.
AMD is challenging Nvidia’s dominance in AI chips, unveiling the powerful MI300X and MI300A for large language models. The Ryzen 8040 extends AMD’s AI push to mobile devices while promising faster performance beyond AI workloads. With a sneak peek at the Code Conference and strategic partnerships, AMD is positioning itself for a competitive future in AI and computing.