Meta launches Llama 4: All about the latest open-source AI models

In a strategic push to establish its foothold in the generative AI space, Meta on Saturday released the first models of its latest open-source AI suite, Llama 4.

In an Instagram video, Meta CEO Mark Zuckerberg shared the company’s bold AI ambitions: “Our goal is to build the world’s leading AI, open source it, and make it universally accessible … I’ve said for some time that open-source AI will lead, and with Llama 4, that is starting to happen.”

Two early models, Llama 4 Scout and Llama 4 Maverick, are available for download through the Llama website and Hugging Face. These models also serve as the foundation of Meta AI, the company’s virtual assistant now unified across WhatsApp, Instagram, Messenger and the web.

Additionally, Meta introduced Llama 4 Behemoth, described as one of the largest large language models (LLMs) ever and the most powerful version the company has developed. It is intended to help train and steer future models.

The rollout also marks Meta’s first use of a mixture-of-experts (MoE) framework. MoE divides the model into specialized components, each focusing on areas such as physics, poetry, biology or programming. During any given task, only the most relevant expert modules are active, increasing efficiency and reducing the cost of training and inference.
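The routing idea described above can be illustrated with a minimal sketch, not Meta’s actual implementation: a small gating network scores every expert for a given token, and only the top-k highest-scoring experts run, so most of the model’s parameters stay idle on each step. All sizes and weights below are toy values chosen for illustration.

```python
# Minimal mixture-of-experts (MoE) routing sketch (illustrative only).
# A gate scores each expert per token; only the top-k experts execute.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # Llama 4 Scout reportedly uses 16, Maverick 128
TOP_K = 2         # experts activated per token (assumed value)
DIM = 16          # toy hidden size

# Each "expert" here is just a random linear layer standing in for a
# specialized feed-forward block.
expert_weights = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
gate_weights = rng.standard_normal((DIM, NUM_EXPERTS))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a single token vector through the top-k experts only."""
    scores = x @ gate_weights                 # one score per expert
    top = np.argsort(scores)[-TOP_K:]         # indices of the best experts
    # Softmax over just the selected experts' scores.
    exp_scores = np.exp(scores[top] - scores[top].max())
    probs = exp_scores / exp_scores.sum()
    # Weighted sum of only the selected experts' outputs; the other
    # NUM_EXPERTS - TOP_K experts never run for this token.
    return sum(p * (x @ expert_weights[i]) for p, i in zip(probs, top))

token = rng.standard_normal(DIM)
out = moe_forward(token)
print(out.shape)  # (16,)
```

Because only `TOP_K` of the `NUM_EXPERTS` expert layers execute per token, compute per step scales with the active experts rather than the full parameter count, which is the efficiency argument the article summarizes.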

Model highlights

Llama 4 Scout is built with 17 billion active parameters and 16 experts, offering a 10-million-token context window. Designed to run on a single GPU, it reflects a trend toward high-performance single-GPU models similar to Google’s recent Gemma 3.


Llama 4 Maverick, also built on 17 billion active parameters but with 128 experts, is a general-purpose model for a wide range of assistant-style use cases. Meta describes it as a reliable “workhorse” capable of handling chat, reasoning and other digital tasks.

Meta claims that Maverick outperforms OpenAI’s GPT-4o and Google’s Gemini 2.0 in several benchmarks, including code generation, reasoning, multilingual understanding, image analysis and long-context handling. It rivals the much larger DeepSeek V3.1 on coding and logic-based tasks.

Earlier this year, DeepSeek claimed that its model could match those of American firms, raising concerns over growing global competition in AI. Officials at Meta and Google, however, have played down its impact.

What’s next: Behemoth and beyond

Still in development, the Llama 4 Behemoth model features 288 billion active parameters, 16 experts and about 2 trillion total parameters. Meta says it outperforms GPT-4.5, Claude Sonnet 3.7 and Gemini 2.0 Pro on STEM-related tests.

Zuckerberg also teased Llama 4 Reasoning, a model focused specifically on complex problem-solving and analytical tasks. More details are expected in the coming weeks.

“This is just the beginning of the Llama 4 lineup,” Meta wrote in a blog post. “We believe the most advanced AI systems should be able to take generalized action, engage in natural conversation, and tackle problems they have never encountered before.”

Scaling up

Meta revealed that downloads of its Llama models crossed one billion just two weeks before the launch, up from 650 million in December 2024.


Back in January, Zuckerberg announced the company’s planned AI infrastructure spending for 2025, projecting between $60 billion and $65 billion to cover investments in servers, data centers and other resources required to support its AI efforts.
