
Apple researchers are working on AI models that could make Siri better


Apple researchers have published a new paper on an artificial intelligence (AI) model that they claim can understand contextual language. The paper, which has not yet been peer-reviewed, also states that large language models (LLMs) can run entirely on-device without consuming large amounts of computing power. Judging from the description of the AI model, it appears well suited to the role of a smartphone assistant and could upgrade the tech giant’s native voice assistant, Siri. Last month, Apple published another paper on a multimodal AI model called MM1.

The research paper is currently in the preprint stage and is posted on arXiv, an open-access online repository of scholarly papers. The AI model is named ReALM, short for Reference Resolution As Language Modeling. The paper emphasizes that the model’s main focus is performing and completing tasks using contextual language cues, which is how humans commonly speak. For example, according to the paper, it can understand what a user means when they say “take me to the penultimate one.”

ReALM is designed to perform tasks on smart devices. The entities it resolves fall into three categories: on-screen entities, conversational entities, and background entities. According to the examples shared in the paper, on-screen entities are those currently visible on the device screen, conversational entities are those relevant to the ongoing conversation with the user, and background entities are those running in the background, such as a song playing in an app.
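To make the three categories concrete, here is a minimal sketch of the general idea of recasting reference resolution as a text task: candidate entities are serialized into a tagged text prompt that a language model could answer with a tag. The class names, prompt format, and example entities are all illustrative assumptions, not code or formats from the paper.

```python
# Illustrative sketch only: the entity kinds mirror the paper's taxonomy,
# but the prompt format and all names here are assumptions.
from dataclasses import dataclass


@dataclass
class Entity:
    kind: str   # "onscreen", "conversational", or "background"
    text: str   # textual representation of the entity


def build_prompt(user_request: str, entities: list[Entity]) -> str:
    """Serialize candidate entities into one text prompt so a language
    model can resolve a reference (e.g. "the penultimate one") by tag."""
    lines = ["Candidate entities:"]
    for i, entity in enumerate(entities, start=1):
        lines.append(f"[{i}] ({entity.kind}) {entity.text}")
    lines.append(f"User request: {user_request}")
    lines.append("Answer with the tag of the referenced entity.")
    return "\n".join(lines)


# Hypothetical example: two phone numbers on screen, one background song.
entities = [
    Entity("onscreen", "Pharmacy - 555-0102"),
    Entity("onscreen", "Bakery - 555-0188"),
    Entity("background", "Song playing in the Music app"),
]
prompt = build_prompt("call the penultimate one", entities)
print(prompt)
```

Serializing everything into plain text like this is what lets a single, relatively small language model handle all three entity types with one mechanism, rather than needing a separate pipeline per source.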

What’s interesting about this AI model, the paper claims, is that despite taking on the complex task of understanding, processing, and acting on contextual cues, it does not require large amounts of computing power, “making ReALM an ideal choice for a practical reference resolution system that can exist on-device without compromising on performance.” It achieves this by using far fewer parameters than larger LLMs such as GPT-3.5 and GPT-4.

The paper also claims that despite operating in such a constrained environment, the AI model performs “significantly” better than OpenAI’s GPT-3.5 and GPT-4. It further elaborates that the model scores better than GPT-3.5 on text-only benchmarks and outperforms GPT-4 on domain-specific user utterances.

Although the paper is promising, it has not yet been peer-reviewed, so its validity remains uncertain. But if the paper gets positive reviews, it could push Apple to commercially develop the model and even use it to make Siri smarter.



