Clippy, the animated paper clip that bothered Microsoft Office users nearly three decades ago, was probably ahead of its time.
Microsoft on Thursday introduced a new artificial intelligence character named Mico (pronounced MEE-koh), a blob-shaped cartoon face that will symbolize the software giant’s Copilot virtual assistant, marking the latest effort by tech companies to imbue their AI chatbots with more personality.
Copilot’s cute new emoji-like exterior comes as AI developers face a crossroads over how to present their increasingly capable chatbots to consumers without causing harm or backlash. Some have opted for faceless icons, others are selling flirtatious, human-like avatars, and Microsoft is looking for a middle ground that’s friendly without being obtrusive.
“When you talk about something sad, you can see Mico’s face change. You can see it dancing and moving around as it cheers with you,” said Jacob Andreu, corporate vice president of product and development for Microsoft AI, in an interview with The Associated Press. “It really takes an effort to land this AI companion that you can actually feel.”
So far only in the US, Copilot users on laptop and phone apps can talk to Mico, which changes colors and wears glasses when in “study” mode. It’s also easy to turn off, a big difference from Microsoft’s Clippit, better known as Clippy, which was notorious for its persistent advice in the word processing tool when it first appeared on desktop screens in 1997.
“It just didn’t fit user needs at the time,” said Brian Reimer, a research scientist at the Massachusetts Institute of Technology. “Microsoft pushed it, we protested it and they got rid of it. I think today we’re much more prepared for this kind of thing.”
Reimer, co-author of a new book called “How to Make AI Useful,” said AI developers are balancing how much personality to give AI assistants based on their expected users.
Tech-savvy adopters of an advanced AI coding tool probably want it to “work more like a machine because in the end they know it is a machine,” Reimer said. “But individuals who are not as trusting of a machine are going to be best supported – not replaced – by technology that feels like a human being.”
Microsoft, a provider of work productivity tools, is much less dependent on digital advertising revenue than its big tech competitors, giving it less incentive to make its AI companions over-engaging in a way that has been linked to social isolation, harmful misinformation and, in some cases, suicides.
Andreu said Microsoft has noticed that some AI developers have moved away from “giving AI any kind of embodiment,” while others are moving in the opposite direction by enabling AI girlfriends.
“Those two paths don’t really mesh with us that much,” he said.
Andreu said the companion is designed to be “really useful” and not so validating that it “tells us exactly what we want to hear, confirms biases we already have, or even sucks you in from a time-spent perspective and tries to monopolize and deepen the session and increase the time you’re spending with these systems.”
“Being flattered — short-term, maybe — makes the user respond more favorably,” Andreu said. “But in the long term, it’s not really going to move that person closer to their goals.”
Part of Microsoft’s announcements on Thursday included the ability to invite Copilot to group chats, an idea similar to how AI has been integrated into social media platforms like Snapchat, where Andreu worked, or Meta’s WhatsApp and Instagram. But Andreu said those interactions often involve bringing up AI as a joke to “troll your friends,” which is different from the “highly collaborative” AI-assisted workplace that Microsoft has in mind.
As part of its long-standing competition with Google and other tech companies to get its technology into classrooms, Microsoft’s audience also includes children. The company also said Thursday that it has added a feature to turn Copilot into a “voice-enabled, Socratic tutor” that guides students through concepts they’re studying in school.
A growing number of kids use AI chatbots for everything from homework help to personal advice, emotional support and everyday decision-making.
The Federal Trade Commission last month launched an investigation into several social media and AI companies – Microsoft was not one of them – regarding potential harm to children and teens who use their AI chatbots as companions.
It comes after some chatbots were shown giving dangerous advice to children on topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who killed himself after developing an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of a 16-year-old boy sued OpenAI and its CEO Sam Altman in August, alleging that ChatGPT coached the California boy in planning and taking his own life.
While promising “a new version of ChatGPT” coming this fall that restores some of the personality of previous versions, Altman said the company had temporarily pulled back because “we were being careful about mental health issues” that he suggested have now been addressed.
“If you want your ChatGPT to respond in a very human way, or use lots of emojis, or act like a friend, ChatGPT should do it,” Altman said on X.