
Meta launches new AI chip to build generative AI products and services


Meta on Wednesday launched its next-generation Meta Training and Inference Accelerator (MTIA), a family of custom chips targeted at artificial intelligence (AI) workloads. The upgrade comes nearly a year after the company launched its first AI chip. These inference accelerators will power AI features across the tech giant’s existing and future products, services, and social media platforms. Meta specifically emphasized that the chip will be used to serve its ranking and recommendation models.

Announcing the launch in a blog post, Meta said, “Meta’s next-generation large-scale infrastructure is being built with AI in mind, including support for new generative AI (GenAI) products and services, recommendation systems, and advanced AI research. We expect this investment to grow in the coming years, as the computational demands of AI models increase along with model complexity.”

According to Meta, the new AI chip delivers significant gains in performance and power efficiency thanks to improvements in its architecture. The next-generation MTIA offers twice the compute and memory bandwidth of its predecessor. It also powers Meta’s recommendation models, which the company uses to serve personalized content to users on its social media platforms.

On the hardware side, Meta said the system uses a rack-based design that accommodates up to 72 accelerators: each rack holds three chassis, each chassis contains 12 boards, and each board houses two accelerators. The processor is clocked at 1.35GHz, well above its predecessor’s 800MHz, and can run at a higher power envelope of 90W. The fabric connecting the accelerators to the host has also been upgraded to PCIe Gen5.
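As a quick sanity check on the rack configuration described above, the per-rack accelerator count follows directly from the chassis, board, and accelerator figures (the constant names below are illustrative, not from Meta’s announcement):

```python
# Rack capacity per the article: 3 chassis per rack,
# 12 boards per chassis, 2 accelerators per board.
CHASSIS_PER_RACK = 3
BOARDS_PER_CHASSIS = 12
ACCELERATORS_PER_BOARD = 2

accelerators_per_rack = (
    CHASSIS_PER_RACK * BOARDS_PER_CHASSIS * ACCELERATORS_PER_BOARD
)
print(accelerators_per_rack)  # 72, matching the figure Meta quotes
```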

The software stack is where the company has made significant improvements. The chipset is designed to be fully integrated with PyTorch 2.0 and related features. “MTIA’s lower-level compiler takes the output of the front end and generates efficient and device-specific code,” the company explains.
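The integration described here follows the standard PyTorch 2.0 compilation flow: a frontend captures the model’s graph, and a backend compiler lowers it to device-specific code. A minimal sketch of that flow is below, using the built-in `"eager"` backend as a stand-in so it runs anywhere; the actual MTIA backend name and device setup are not given in the article and are not shown here.

```python
import torch

# A toy ranking-style model: the PyTorch 2.0 frontend captures its
# graph, and the chosen backend lowers that graph to device code.
class TinyRanker(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 1)

    def forward(self, x):
        return torch.sigmoid(self.linear(x))

model = TinyRanker()

# On MTIA hardware, a device-specific backend would generate accelerator
# code at this step; "eager" is used only so the sketch runs anywhere.
compiled = torch.compile(model, backend="eager")

scores = compiled(torch.randn(4, 8))
print(scores.shape)  # torch.Size([4, 1])
```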

Results so far show that the MTIA chip can handle both the low-complexity (LC) and high-complexity (HC) ranking and recommendation models that are components of Meta’s products. “In these models, there can be a ~10x to 100x difference in model size and computational effort per input sample. Because we control the entire stack, we can achieve higher efficiencies compared to commodity GPUs. Achieving these gains requires ongoing effort, and we will continue to improve performance per watt as we build and deploy MTIA chips in systems,” the company said.

With the rise of artificial intelligence, many technology companies are now focusing on building custom AI chips to meet their specific needs. Deployed in servers, these processors supply the compute needed to deliver products such as general-purpose AI chatbots and task-specific AI tools.




Surja, a dedicated blog writer and explorer of diverse topics, holds a Bachelor's degree in Science. Her writing journey unfolds as a fascinating exploration of knowledge and creativity. With a background in B.Sc., Surja brings a unique perspective to the world of blogging. Her articles delve into a wide array of subjects, showcasing her versatility and passion for learning. Whether she's decoding scientific phenomena or sharing insights from her explorations, Surja's blogs reflect a commitment to making complex ideas accessible.