Artificial intelligence starts creating fake legal cases and enters real courts


We’ve seen deepfake, explicit images of celebrities created by artificial intelligence (AI). AI has also played a role in creating music, driving driverless racing cars and spreading misinformation, among other things.

It is therefore not surprising that AI is also having a significant impact on our legal system.

Courts must decide disputes based on the law, which is presented to the court by lawyers as part of a client’s case. It is therefore deeply concerning that fake laws, invented by AI, are being used in legal disputes.

This not only raises issues of legality and ethics, but may also undermine faith and trust in legal systems worldwide.

How do fake laws come about?

There is no doubt that generative AI is a powerful tool with the potential to transform society, including many aspects of the legal system. But its use comes with responsibilities and risks.

Lawyers are trained to apply their professional knowledge and experience carefully, and are generally not big risk-takers. However, some careless lawyers (and self-represented litigants) have been caught out by artificial intelligence.

AI models are trained on massive data sets. In response to user prompts, they can create new content (both text and audiovisual).

While content generated this way can look convincing, it can also be inaccurate. This is the result of the AI model attempting to “fill in the gaps” when its training data is inadequate or flawed, and is commonly referred to as “hallucination”.

In some contexts, generative AI hallucination is not a problem. Indeed, it can be seen as an example of creativity.

But if AI hallucinates or creates inaccurate content that is then used in legal processes, that is a problem. It is particularly so when combined with time pressures on lawyers and the fact that many people lack access to legal services.

This powerful combination can lead to carelessness and shortcuts in legal research and document preparation, which can create reputational problems for the legal profession and lead to a lack of public trust in the administration of justice.

This has already happened

The best-known generative AI “fake case” is the 2023 United States case Mata v Avianca, in which lawyers submitted a brief containing fake extracts and case citations to a New York court. The brief was researched using ChatGPT.

The lawyers, unaware that ChatGPT can hallucinate, failed to check whether the cases actually existed. The consequences were disastrous. Once the errors were discovered, the court dismissed their client’s case, sanctioned the lawyers for acting in bad faith, fined them and their firm, and exposed their actions to public scrutiny.

Despite the negative publicity, other fake cases continue to surface. Michael Cohen, Donald Trump’s former lawyer, gave his own lawyer cases generated by Google Bard, another generative AI chatbot. He believed they were real (they were not) and that his lawyer would verify them (he did not). His lawyer included the cases in a brief filed in US federal court.

Fake cases have also surfaced in recent matters in Canada and the United Kingdom.

If this trend goes unchecked, how can we ensure the careless use of generative AI does not undermine public trust in the legal system? Lawyers who consistently fail to exercise due care when using these tools risk misleading and confusing the courts, harming their clients’ interests and, more generally, undermining the rule of law.

What measures are being taken?

Around the world, legal regulators and courts have responded in various ways.

Bars and courts in several U.S. states have issued guidance, opinions, or orders regarding the use of generative AI, ranging from responsible adoption to outright bans.

Law societies in England and British Columbia, and the courts of New Zealand, have also developed guidelines.

In Australia, the New South Wales Bar Association has published a guide to generative AI for barristers. The Law Society of New South Wales and the Law Institute of Victoria have released articles on responsible use in line with solicitors’ conduct rules.

Many lawyers and judges, like the broader public, have some familiarity with generative AI and can recognise both its limits and its benefits. But there are others who may not be as aware, and guidance undoubtedly helps.

But a mandatory approach is also needed. Lawyers who use generative AI tools cannot treat them as a substitute for exercising their own judgment and diligence, and must verify the accuracy and reliability of the information they receive.

In Australia, courts should adopt practice notes or rules setting out expectations for the use of generative AI in litigation. Court rules can also guide self-represented litigants, and would communicate to the public that our courts are aware of the problem and are addressing it.

The legal profession could also adopt formal guidance to promote the responsible use of AI by lawyers. At a minimum, technology competence should be a requirement of continuing legal education for Australian lawyers.

Setting clear requirements for the responsible and ethical use of generative AI by Australian lawyers will encourage appropriate adoption and enhance public confidence in our lawyers, our courts and the administration of justice across the country.

(Authors: Michael Legg, Professor of Law, UNSW Sydney, and Vicki McNamara, Senior Research Fellow, Center for the Future of the Legal Profession, UNSW Sydney)

(Disclosure statement: Vicki McNamara is affiliated with the Law Society of New South Wales (as a member). Michael Legg does not work for, consult, own shares in, or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant relationships beyond his academic appointment.)

This article is republished from The Conversation under a Creative Commons license. Read the source article.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)
