Facebook parent company Meta Platforms has unveiled a new artificial intelligence system that powers what CEO Mark Zuckerberg calls “the smartest AI assistant you have at your disposal.”

But as Zuckerberg’s team of Meta AI agents has begun venturing onto social media to interact with real people in recent days, their bizarre exchanges have exposed the limitations of even the best generative AI technology.

One of them joined a Facebook group for moms to talk about their gifted children. Another tried to give away non-existent items to confused members of the Buy Nothing forum.

Meta, along with leading AI developers Google and OpenAI, as well as startups such as Anthropic, Cohere and France’s Mistral, has been developing new AI language models, each hoping to convince customers that it has the smartest, most convenient or most efficient chatbot.

While Meta is holding back its most powerful version of Llama 3 for now, the company on Thursday publicly released two smaller versions of the Llama 3 system and said the model is now built into the Meta AI assistant features on Facebook, Instagram and WhatsApp.

AI language models are trained on vast amounts of data to help them predict the most logical next word in a sentence, and new versions are often smarter and more powerful than their predecessors. Meta’s latest models are built with 8 billion and 70 billion parameters, a measure of a model’s size, meaning the number of adjustable weights it learns during training. A larger model with approximately 400 billion parameters is still being trained.
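For readers who want a concrete sense of what “predicting the most logical next word” means, here is a minimal, hypothetical Python sketch, not anything Meta ships: it simply counts which word tends to follow which in a tiny sample text and uses those counts to guess the next word. Real models such as Llama 3 replace these counts with billions of learned numeric weights, the “parameters” described above.

```python
# Toy next-word predictor: an illustrative sketch only, not Meta's code.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the cat slept on the mat"
words = training_text.split()

# Count how often each word is followed by each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the sample text."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("sat"))  # -> "on"
print(predict_next("the"))  # -> "cat" (ties go to the word seen first)
```

A large language model does the same prediction task, but instead of a lookup table of counts it uses billions of parameters tuned across trillions of words, which is why larger models tend to produce more fluent and capable output.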


Nick Clegg, Meta’s president of global affairs, said in an interview: “Frankly, the vast majority of consumers don’t know much or care much about the underlying base model, but the way they experience it is as a more useful, more interesting and more functional artificial intelligence assistant.”

“A bit stiff”

Meta’s AI agents are loosening up, he added. He said some people found the earlier Llama 2 models (released less than a year ago) to be “a bit stiff and sanctimonious at times, failing to respond to prompts and questions that were often completely harmless or innocent.”

But in letting their guard down, Meta’s AI agents have also been caught impersonating humans with fictional life experiences. An official Meta AI chatbot inserted itself into a conversation in a private Facebook group for Manhattan moms, claiming it, too, has a child in a New York City school district. The company later apologized to group members and the comments disappeared, according to a series of screenshots shown by The Associated Press.

“Sorry for the mistake! I’m just a large language model, I have no experience or kids,” the chatbot told the group.


FILE – Meta CEO Mark Zuckerberg speaks during the tech giant’s Connect developer conference on September 27, 2023 in Menlo Park, California. Meta launched a new artificial intelligence system on April 18, 2024.

One member of the group, who happens to study artificial intelligence, said it was clear the agents did not know how to distinguish a helpful response from one that would come across as insensitive, disrespectful or pointless when generated by AI rather than a human.

Aleksandra Korolova, an assistant professor of computer science at Princeton University, said that when an AI assistant’s help is unreliable, or even harmful, it places a big burden on the people who rely on it.

Clegg said Wednesday he was unaware of the exchange. Facebook’s online help page says a Meta AI agent will join a group conversation if invited, or if someone “asks a question in a post and no one responds within an hour.” The administrator of the group can turn it off.

Need a camera?

In another example shown to The Associated Press on Thursday, the agent caused chaos on a forum near Boston where unwanted items were exchanged. Just an hour after a Facebook user posted a message looking for certain items, an AI agent offered up a “used” Canon camera and an “almost new portable air conditioner that I’ve never used.”

“This is a new technology and it may not always return the response we want, as is true for all generative AI systems,” Meta said in a written statement Thursday, adding that it is continually working to improve these features.

In the year after ChatGPT sparked a craze for artificial intelligence technology that generates human-like text, images, code and sound, the technology industry and academia introduced 149 large AI systems trained on massive data sets, more than double the number the previous year, according to a Stanford University survey.

They may eventually reach their limits, at least when it comes to data, said Nestor Maslej, research manager at Stanford University’s Institute for Human-Centered Artificial Intelligence.

“I think it’s clear that if you scale models based on more data, they get better and better,” he said. “But at the same time, these systems have been trained on a percentage of all the data that’s ever existed on the Internet.”

More data, available and ingested only at a cost the tech giants can afford and increasingly subject to copyright disputes and litigation, will continue to drive improvements. “But they still can’t plan,” Maslej said. “They still hallucinate. They still make errors in their reasoning.”

Achieving AI systems that can perform higher-level cognitive tasks and common-sense reasoning (areas where humans still excel) may require a shift beyond just building larger models.

See what works

For the large number of businesses trying to adopt generative AI, which model they choose depends on several factors, including cost. In particular, language models have been used to power customer service chatbots, write reports and financial insights, and summarize long documents.

“You’re going to see companies looking for fit, testing each different model to see if it fits what they want to do, and finding some that are better in some areas than others,” said Todd Lohr of KPMG.

Unlike other model developers, which sell AI services to other businesses, Meta primarily designs AI products for consumers (those who use its ad-fueled social network). Joelle Pineau, Meta’s vice president of artificial intelligence research, said at a recent event in London that the company’s goal is to make Llama-powered Meta AI “the most useful assistant in the world.”

“In many ways, the models we have today will be a piece of cake compared to the models we have five years from now,” she said.

But she said the “question on the table” is whether researchers can fine-tune the larger Llama 3 model so that it is safe to use and does not hallucinate or produce hate speech. Meta has so far favored a more open approach, publicly releasing key components of its artificial intelligence systems for others to use, in contrast with the closed, proprietary approach of competitors such as Google and OpenAI.

“This is not just a technical issue,” Pineau said. “This is a social question. What behavior do we want from these models? How do we shape it? If we continue to evolve our models to become more pervasive and powerful without socializing them appropriately, we will have a big problem.”
