Qualcomm demonstrates on-device artificial intelligence capabilities for smartphones at MWC 2024

By Surja

Qualcomm has demonstrated a range of new generative artificial intelligence (AI) capabilities for Android smartphones at the Mobile World Congress (MWC) 2024 event. These features run entirely on-device, powered by Snapdragon and Qualcomm platforms. Alongside previews of specialized large language models (LLMs) and image-generation tools for multimodal responses, the company has released more than 75 AI models that developers can use to build specific applications.

In a post following the event, Qualcomm detailed the AI features it revealed at MWC. One highlight is that, unlike most modern AI models such as ChatGPT, Gemini and Copilot, which process information on a server, Qualcomm's AI models run entirely on the device. Besides minimizing privacy and reliability issues, on-device features and applications built with these models can be personalized for the user. To this end, the chipmaker has made more than 75 open-source AI models available to developers through Qualcomm AI Hub, GitHub and Hugging Face, including Whisper, ControlNet, Stable Diffusion and Baichuan 7B.

The company says these AI models will require less computing power and make applications cheaper to build because they are optimized for its platform. The models' small size and task-specific design also contribute to these savings. So while users won't get a one-stop chatbot, the models cover ample use cases for niche tasks like image editing or transcription.

To speed up the process of developing applications with these models, Qualcomm has added several automated processes to its AI library. "The AI Model Library automatically handles model translation from source frameworks to popular runtimes and works directly with the Qualcomm AI Engine Direct SDK, then applies hardware-aware optimizations," it said.

In addition to the small AI models, the US semiconductor company also previewed larger tools. These are currently in the research phase and were only demonstrated at the MWC event. The first is the Large Language and Visual Assistant (LLaVA), a multimodal LLM with more than 7 billion parameters. Qualcomm says it can accept multiple types of data input, including text and images, and hold multi-turn conversations about an image with an AI assistant.

Another tool demonstrated is called low-rank adaptation (LoRA). Shown running on an Android smartphone, it can generate AI-driven images using Stable Diffusion. It is not an LLM per se; rather, it reduces the number of trainable parameters of an AI model, making fine-tuning more efficient and scalable. Beyond image generation, Qualcomm claims it can also be used to tailor AI models for customized personal assistants, improved language translation, and more.
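To see why LoRA cuts the trainable-parameter count so sharply, consider the underlying idea: instead of updating a full weight matrix W of shape d_out × d_in during fine-tuning, LoRA trains two small matrices A (r × d_in) and B (d_out × r) and uses W + BA at inference. The sketch below illustrates the arithmetic with illustrative layer sizes (the 4096 dimensions and rank 8 are assumptions for the example, not figures from Qualcomm's demo):

```python
# LoRA parameter-count sketch: fine-tuning a d_out x d_in weight matrix
# directly trains every entry; LoRA trains only the low-rank factors
# A (r x d_in) and B (d_out x r), where r << min(d_in, d_out).

d_in, d_out, rank = 4096, 4096, 8  # illustrative layer sizes

full_finetune_params = d_in * d_out           # all of W is trainable
lora_params = rank * (d_in + d_out)           # only A and B are trainable

print(f"full fine-tune : {full_finetune_params:,} trainable parameters")
print(f"LoRA (rank {rank})  : {lora_params:,} trainable parameters")
print(f"reduction      : {full_finetune_params // lora_params}x fewer")
```

With these sizes the full matrix has about 16.8 million trainable entries, while the rank-8 factors have only 65,536, a 256x reduction for this single layer; the same ratio is what lets a LoRA adapter customize a large model on a phone-class device.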


Affiliate links may be automatically generated – see our Ethics Statement for details.

