Amazon unveils new chips for training and running AI models


    There’s a shortage of GPUs as demand for generative AI, which is typically trained and run on GPUs, grows. Nvidia’s best-performing chips are reportedly sold out until 2024. The CEO of chipmaker TSMC was less optimistic still, recently suggesting that the shortage of GPUs from Nvidia, as well as from Nvidia’s rivals, could extend into 2025.

    To lessen their reliance on GPUs, firms that can afford it (that is, tech giants) are developing — and in some cases making available to customers — custom chips tailored for creating, iterating and productizing AI models. One of those firms is Amazon, which today at its annual re:Invent conference unveiled the latest generation of its chips for model training and inferencing (i.e. running trained models).

    The first of the two, AWS Trainium2, is designed to deliver up to 4x better performance and 2x better energy efficiency than the first-generation Trainium, unveiled in December 2020, Amazon says. Set to be available in EC2 Trn2 instances in clusters of 16 chips in the AWS cloud, Trainium2 can scale up to 100,000 chips in AWS’ EC2 UltraCluster product.

    One hundred thousand Trainium2 chips deliver 65 exaflops of compute, Amazon says, which works out to 650 teraflops per chip. (“Exaflops” and “teraflops” measure how many compute operations per second a chip can perform.) There are likely complicating factors that make that back-of-the-napkin math less than exact. But assuming a single Trainium2 chip can indeed deliver somewhere around 650 teraflops of performance, that puts it well above the capacity of Google’s custom AI training chips circa 2017.
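    Spelled out, that napkin math looks like this, as a quick Python sketch using only the figures from Amazon’s announcement (real-world throughput would depend on numeric precision, interconnect and workload):

        # Rough sanity check of Amazon's cluster figures. These are marketing
        # numbers, not measured benchmarks.
        chips = 100_000
        total_exaflops = 65
        teraflops_per_exaflop = 1_000_000  # 1 exaflop = 1,000,000 teraflops

        per_chip_teraflops = total_exaflops * teraflops_per_exaflop / chips
        print(per_chip_teraflops)  # 650.0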

    Amazon says that a cluster of 100,000 Trainium2 chips can train a 300-billion-parameter AI large language model in weeks rather than months. (“Parameters” are the parts of a model learned from training data; they essentially define the skill of the model on a problem, like generating text or code.) That’s roughly 1.7 times the size of OpenAI’s GPT-3, the 175-billion-parameter predecessor to the text-generating GPT-4.
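    That comparison is simple division, with GPT-3’s widely reported parameter count as the baseline:

        # Size of the model in Amazon's claim relative to GPT-3.
        cited_params = 300e9  # 300 billion parameters, per Amazon's example
        gpt3_params = 175e9   # GPT-3's widely reported size
        print(round(cited_params / gpt3_params, 2))  # 1.71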

    “Silicon underpins every customer workload, making it a critical area of innovation for AWS,” AWS compute and networking VP David Brown said in a press release. “[W]ith the surge of interest in generative AI, Trainium2 will help customers train their ML models faster, at a lower cost, and with better energy efficiency.”

    Amazon didn’t say when Trainium2 instances will become available to AWS customers beyond “sometime next year.” Rest assured we’ll keep our eyes peeled for more information.

    The second chip Amazon announced this morning, the Arm-based Graviton4, is a general-purpose processor that Amazon is pitching for inferencing, among other workloads. The fourth generation in Amazon’s Graviton chip family (as the “4” appended to “Graviton” implies), it’s distinct from Amazon’s dedicated AI inferencing chip, Inferentia.

    Amazon claims Graviton4 provides up to 30% better compute performance, 50% more cores and 75% more memory bandwidth than the previous-generation Graviton3 processor (though not the more recent Graviton3E) running on Amazon EC2. In another upgrade over Graviton3, all of Graviton4’s physical hardware interfaces are “encrypted,” Amazon says, ostensibly better securing AI training workloads and data for customers with heightened encryption requirements. (We’ve asked Amazon exactly what “encrypted” implies here, and we’ll update this piece once we hear back.)

    “Graviton4 marks the fourth generation we’ve delivered in just five years and is the most powerful and energy-efficient chip we have ever built for a broad range of workloads,” Brown continued in a statement. “By focusing our chip designs on real workloads that matter to customers, we’re able to deliver the most advanced cloud infrastructure to them.”

    Graviton4 will be available in Amazon EC2 R8g instances, which are available in preview today with general availability planned in the coming months.

    Read more about AWS re:Invent 2023 on TechCrunch


