Judges around the world are dealing with a growing problem: legal briefs that were prepared with the help of artificial intelligence and presented with errors such as citations to cases that do not exist, according to lawyers and court documents.
This trend serves as a cautionary tale for those who are learning to use AI tools at work. Many employers want to hire employees who can use technology to assist with tasks such as conducting research and preparing reports. As teachers, accountants and marketing professionals begin to engage with AI chatbots and assistants to generate ideas and improve productivity, they are also finding that the programs can make mistakes.
Damien Charlotin, a French data scientist and lawyer, has catalogued at least 490 court filings over the past six months that involve "hallucinations," AI responses that contain false or misleading information. As more people use AI, he said, the pace is accelerating.
“Even a more sophisticated player can have problems with this,” Charlotin said. “AI can be a boon. It’s wonderful, but it also has some disadvantages.”
Charlotin, a senior research fellow at HEC Paris, a business school based near the French capital, created a database to track cases in which a judge ruled that generative AI produced hallucinated material such as fabricated case law and false citations. Most of the rulings, he said, come from US cases in which plaintiffs represented themselves without a lawyer. While most judges issued warnings about the errors, some imposed fines.
But high-profile companies have also submitted problematic legal documents. A federal judge in Colorado fined attorneys for MyPillow Inc. after they filed a brief containing nearly 30 defective citations as part of a defamation case against the company and founder Michael Lindell.
The legal profession is not alone in struggling with AI's vulnerabilities. AI-generated overviews that appear at the top of web search results pages often contain errors.
And AI tools also raise privacy concerns. Workers in all industries need to be vigilant about the information they upload or enter into prompts to ensure they are protecting the confidential information of employers and customers.
Legal and workplace experts share their experiences with AI mistakes and describe the dangers to avoid.
Think of AI as an assistant
Don’t rely on AI to make big decisions for you. Some AI users liken the tool to an apprentice: you assign it tasks, and you check its completed work.
“Think about AI as enhancing your workflow,” said Maria Flynn, CEO of Jobs for the Future, a nonprofit focused on workforce development. It can serve as an assistant for tasks like drafting emails or researching a travel itinerary, but don’t treat it as a substitute that can do all the work.
While preparing for a meeting, Flynn experimented with an in-house AI tool that suggested discussion questions based on an article she shared with the team.
“Some of the questions it proposed weren’t really the right context for our organization, so I was able to give it some feedback… and it came back with five very thoughtful questions,” she said.
Check accuracy
Flynn also found problems with the output of the AI tool, which is still in a pilot phase. She once asked it to compile information about her organization’s work in different states, but the tool treated completed projects and funding proposals as the same thing.
“In that case, our AI tool was not able to identify the difference between something that was proposed and something that was completed,” Flynn said.
Fortunately, she had the institutional knowledge to catch the errors. “If you’re new to an organization, ask coworkers whether the results look accurate to them,” Flynn suggested.
While AI can help with brainstorming, relying on it to provide factual information is risky. Take the time to check the accuracy of what the AI produces, even if it’s tempting to skip that step.
“Having to go back and check all the citations, or when I look at a contract the AI has summarized, having to go back and read what the contract actually says, is a little inconvenient and time-consuming, but that’s what you have to do,” one legal expert said. “As much as you think AI can replace that work, it can’t.”
Beware of note takers
It may be tempting to use AI to record and take notes during meetings. Some tools produce useful summaries and outline action steps based on what was said.
But many jurisdictions require participants’ consent before recording a conversation. Before using AI to take notes, stop and consider whether the conversation should be kept privileged and confidential, said Daniel Keyes, a Chicago-based partner at the law firm Fisher Phillips.
He suggested consulting colleagues in the legal or human resources department before deploying a note-taking tool in high-risk situations such as investigations, performance reviews or legal strategy discussions.
“People are claiming that there should be different levels of consent with the use of AI, and this is something that is working its way through the courts,” Keyes said. “This is an issue companies should keep an eye on because it is in litigation.”
Protect confidential information
If you’re using a free AI tool to draft a memo or marketing campaign, don’t reveal identifying information or corporate secrets to it. Once you upload that information, it’s possible that other people using the same tool can find it.
That’s because when other people ask questions of the AI tool, it may draw on the information available to it, including the details you provided, as it creates its answer, Flynn said. “It doesn’t matter whether something is public or private,” she said.
Seek training
If your employer doesn’t provide AI training, try experimenting with free tools like ChatGPT or Microsoft Copilot. Some universities and tech companies offer classes that can help you understand how AI works and how it can be useful.
A course that teaches people how to create the best AI prompts or practical courses that provide opportunities to practice are valuable, Flynn said.
Despite AI tools’ potential pitfalls, learning how they work can be beneficial at a time when they are ubiquitous.
“The biggest potential pitfall in learning to use AI is not learning how to use it at all,” Flynn said. “We all need to become proficient in AI, and taking early steps to build your familiarity, your literacy, your comfort with the tools will be extremely important.”
Share your stories and questions about workplace wellness at cbussewitz@ap.org. Follow AP’s Be Well coverage, focusing on wellness, fitness, diet and mental health at https://apnews.com/hub/be-well.