One of the country's most senior judges has revealed that artificial intelligence (AI) is being used in the legal field for "every purpose under the sun", as he assessed the technology's power to influence decision-making in court.
Master of the Rolls Sir Geoffrey Vos, the second most senior judge in Britain, said in his address at the Legal Geek conference that AI is like a "chainsaw": useful in the right hands, but "extremely dangerous" in the wrong ones.
Sir Geoffrey said the technology could and should be used to draft contracts and research legal questions, particularly noting how the summarization capabilities of large language models (LLMs) can “save time and drudgery” when used carefully.
Turning to the possibility of AI delivering judicial decisions, the senior judge acknowledged that the technology in its present form could resolve in just two minutes a case that would take two years of human effort. However, he said the legal sector should be "alarmed" if this happened in practice.

This is because a judge's "decision is a last resort and is usually irreversible", he explained, and AI technology can never imitate human "emotion, uniqueness, empathy and insight".
Machine learning also "arises from the state of intelligence at a given time", said Sir Geoffrey, meaning its outputs may become outdated in a way that continually evolving human thought does not.
The growing use of AI in the legal sector came under scrutiny earlier this year when the High Court told senior lawyers to take immediate action to ensure the technology is not misused.
The intervention came in June after it emerged that two legal cases had been compromised by the misuse of AI, which allowed dozens of fake case-law citations to be put before the courts.
In a £89 million damages case against Qatar National Bank, the claimants had put forward 45 case-law citations, 18 of which were fictitious, with many others containing bogus quotations. The claimant admitted that publicly available AI tools had been used, while his lawyer admitted that he had not checked his client's research.
In another case involving a regulatory decision, a lawyer at Haringey Law Centre cited non-existent case law five times. The pupil barrister denied using AI deliberately, but said she may have done so unknowingly by relying on AI summaries offered by Google or Safari.
In her ruling, Dame Victoria Sharp, President of the King’s Bench Division, said, “If artificial intelligence is misused it will have a serious impact on the administration of justice and public confidence in the justice system” and that lawyers caught doing so could face public warnings or even contempt of court proceedings and referral to the police.
She wrote: "Such tools may produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may be completely wrong."
Barrister Tahir Khan, who specializes in civil litigation, is an expert on the use of AI in the legal sector. He says most such mistakes arise when lawyers use publicly available AI tools like ChatGPT rather than tools specialized for the legal sector, such as those offered by legal intelligence company LexisNexis.
But "if you're using a tool that's primarily AI, you still have to check", he says. "You can't absolve yourself by saying 'it's the tool's fault'… the responsibility is on you."