CAMBRIDGE, Mass. — Facebook parent company Meta Platforms on Thursday unveiled a new artificial intelligence system that CEO Mark Zuckerberg said puts "the smartest artificial intelligence assistant" at users' disposal.

But as Zuckerberg's team of Meta AI agents began venturing into social media this week to interact with real people, their bizarre exchanges are exposing the limitations of even the best generative AI technology.

One of them joined a Facebook group for moms to talk about their gifted children. Another tried to give away nonexistent items to confused members of a Buy Nothing forum.

Meta, along with leading AI developers Google and OpenAI, as well as startups such as Anthropic, Cohere and France’s Mistral, has been churning out new AI language models in hopes of convincing customers that it has the smartest, most convenient or most efficient chatbots.

While Meta is holding back its most powerful AI model, Llama 3, for later, on Thursday it publicly released two smaller versions of the same Llama 3 system and said it is now integrated into the Meta AI assistant features of Facebook, Instagram and WhatsApp.

AI language models are trained on vast amounts of data to help them predict the most plausible next word in a sentence, and newer versions are typically smarter and more capable than their predecessors. Meta’s latest models were built with 8 billion and 70 billion parameters – the adjustable values a model learns during training and a rough measure of its size and capability. A larger model with approximately 400 billion parameters is still being trained.
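As a rough illustration of what "predicting the most plausible next word" means in practice, here is a minimal sketch using the open-source Hugging Face transformers library. It loads the small, freely downloadable GPT-2 model purely for demonstration, and the prompt is invented; Llama 3 is far larger and distributed under its own license, but it works on the same next-token-prediction principle.

```python
# Minimal sketch of next-token prediction, using the small GPT-2 model
# for illustration only (Llama 3 works on the same principle at far larger scale).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The most useful assistant in the world is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocabulary)

# Convert the scores for the position after the prompt into probabilities
# and show the five tokens the model considers most likely to come next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

The 8-billion and 70-billion figures describe how many learned weights sit inside the model object in a sketch like this; more weights generally mean better predictions, but also higher training and serving costs.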

“Frankly, the vast majority of consumers don’t know or care much about the underlying base model, but the way they will experience it is as a much more useful, more interesting and versatile AI assistant,” Nick Clegg, Meta’s president of global affairs, said in an interview.


Meta’s AI agents are loosening up, he added. He said some people found the earlier Llama 2 models, released less than a year ago, to be “a bit stiff and sanctimonious at times,” failing to respond to prompts and questions that were often completely harmless or innocent.

But in letting its guard down, Meta’s artificial intelligence agents were also caught this week posing as humans with fictional life experiences. An official Meta AI chatbot inserted itself into a conversation in a private Facebook group for Manhattan moms, claiming that it, too, had a child in a New York City school district. It later apologized to group members before the comments disappeared, according to a series of screenshots seen by The Associated Press.

“Sorry for the mistake! I’m just a large language model, I have no experiences and no children,” the chatbot told the group.

One group member, who happens to study artificial intelligence, said it was clear the agents don’t know how to distinguish a helpful response from one that would be viewed as insensitive, disrespectful or meaningless when generated by AI rather than a human.

“An AI assistant that is not reliably helpful, and can even be actively harmful, puts a lot of burden on the individuals using it,” said Aleksandra Korolova, an assistant professor of computer science at Princeton University.

Clegg said Wednesday that he was not aware of the exchange. Facebook’s online help page says the Meta AI agent will join a group conversation if it is invited, or if someone “asks a question in a post and no one responds within an hour.” Group administrators have the ability to turn the feature off.
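As a rough sketch of the kind of trigger policy that help page describes, the snippet below encodes the two stated conditions (an explicit invitation, or a question left unanswered for an hour) along with the administrator off switch. The function name, fields and structure are illustrative assumptions, not Meta's actual implementation.

```python
# Illustrative sketch of the join policy described above; not Meta's real code.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class GroupPost:
    is_question: bool      # the post asks a question
    posted_at: datetime    # when it was posted
    reply_count: int       # replies from other group members so far
    agent_invited: bool    # the assistant was explicitly invited or tagged


def should_agent_reply(post: GroupPost, enabled_by_admin: bool, now: datetime) -> bool:
    """Decide whether the assistant joins the conversation."""
    if not enabled_by_admin:      # group administrators can turn the feature off
        return False
    if post.agent_invited:        # join when explicitly invited
        return True
    # Otherwise only step in for a question nobody has answered within an hour.
    return (post.is_question
            and post.reply_count == 0
            and now - post.posted_at >= timedelta(hours=1))
```

Under a rule like this, the episode described below would follow mechanically: an unanswered request crosses the one-hour mark and the agent responds, whether or not it has anything real to offer.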

In another example shown to the AP on Thursday, the agent caused confusion in a Boston-area forum where people give away unwanted items. Just an hour after a Facebook user posted a message looking for certain items, an AI agent offered up a “used” Canon camera and an “almost new portable air conditioner that I’ve never used.”

“This is a new technology and it may not always return the response we want, as is true for all generative AI systems,” Meta said in a written statement Thursday. The company said it is continually working to improve the features.

ChatGPT sparked a frenzy for AI technology that generates human-like text, images, code and sound, with the tech industry and academia launching some 149 large-scale AI systems trained on massive data sets, double the number from the previous year, according to a Stanford University survey.

They may eventually reach their limits, at least when it comes to data, said Nestor Maslej, research manager at Stanford University’s Institute for Human-Centered Artificial Intelligence.

“I think it’s clear that if you scale models on more data, they get better and better,” he said. “But at the same time, these systems have been trained on a percentage of all the data that has ever existed on the internet.”

More data, which is available and ingested only at a cost the tech giants can afford and is increasingly subject to copyright disputes and litigation, will continue to drive improvements. “But they still can’t plan,” Maslej said. “They still hallucinate. They still make errors in their reasoning.”

Achieving AI systems that can perform higher-level cognitive tasks and common-sense reasoning (areas where humans still excel) may require a shift beyond just building larger models.

For the large number of businesses trying to adopt generative AI, which model they choose depends on several factors, including cost. In particular, language models have been used to power customer service chatbots, write reports and financial insights, and summarize long documents.

“You’re going to see companies looking at the fit, testing each of the different models for what they’re trying to do and finding some models that perform better in some areas than others,” said Todd Lohr, director of technology consulting at KPMG.


Unlike other model developers, which sell AI services to other businesses, Meta primarily designs AI products for consumers (those who use its ad-fueled social network). Joelle Pineau, Meta’s vice president of artificial intelligence research, said at an event in London last week that the company’s goal is to make Llama-powered Meta AI “the most useful assistant in the world.”

“In many ways, the models we have today will be child’s play compared to the models we will have five years from now,” she said.

But she said the “question on the table” is whether researchers can fine-tune the bigger Llama 3 model so that it is safe to use and does not, for example, hallucinate or generate hate speech. In contrast to the proprietary systems from Google and OpenAI, Meta has so far advocated a more open approach, publicly releasing key components of its artificial intelligence systems for others to use.

“This is not just a technical question,” Pineau said. “It is a social question. What is the behavior that we want from these models? How do we shape that? If we keep on growing our models to be ever more general and powerful without properly socializing them, we are going to have a big problem on our hands.”

Business writers Kelvin Chan in London and Barbara Ortutay in Oakland, Calif., contributed to this report.

