A federal judge recently called out immigration agents' use of artificial intelligence to write use-of-force reports, in a two-sentence footnote to a sweeping court opinion, raising concerns that the practice could lead to inaccuracies and further erode public trust in how police have handled immigration actions and protests in the Chicago area.
U.S. District Judge Sarah Ellis wrote in a footnote to a 223-page opinion issued last week that the practice of using ChatGPT to write use-of-force reports undermines agents' credibility and "may make the inaccuracy of these reports apparent." She described what she saw in at least one body camera video: an agent asking ChatGPT to compile a narrative for a report after giving the program a brief descriptive sentence and several images.
The judge noted factual discrepancies between the official accounts of those law enforcement responses and what the body camera footage showed. Experts say that using AI to draft a report meant to capture an officer's specific perspective, without drawing on his or her actual experience, is among the worst possible uses of the technology and raises serious concerns about accuracy and privacy.
An officer's perspective is required
Law enforcement agencies across the country are grappling with how to create guardrails that allow officers to use increasingly available AI technology while maintaining accuracy, privacy, and professionalism. Experts said the example cited in the opinion does not meet that standard.
“What this guy did is the worst of all worlds. To give it a sentence and a few photos — if it’s true, if this is what happened here — it goes against all the advice we’ve given. It’s a nightmare scenario,” said Ian Adams, an assistant criminology professor at the University of South Carolina who serves on a task force on artificial intelligence through the Council on Criminal Justice, a nonpartisan think tank.
The Department of Homeland Security did not respond to requests for comment, and it was not clear whether the agency had guidelines or policies on the use of AI by agents. The body camera footage cited in the order has not yet been released.
Adams said some departments have created policies, but they often prohibit the use of predictive AI when writing reports justifying law enforcement decisions, particularly the use of force. Courts considering whether a use of force was justified have established a standard known as objective reasonableness, which relies heavily on the perspective of the specific officer in that specific scenario.
“We need the specific, clear events of that incident and the specific thoughts of that specific officer to tell us whether it was an appropriate use of force,” Adams said. “This is obviously the worst-case scenario, other than being asked to make up facts, because you are begging it to make up facts in this high-risk situation.”
Personal information and evidence
In addition to raising concerns about AI-generated reports misrepresenting what happened, the use of AI also raises potential privacy concerns.
Katie Kinsey, chief of staff and technical policy advisor at the Policing Project at NYU School of Law, said that if the agent in question was using the public version of ChatGPT, he probably did not realize that he lost control of the images as soon as he uploaded them, making them effectively part of the public domain and potentially available to bad actors.
Kinsey said that, from a technology standpoint, most departments are building the plane while flying it when it comes to AI. She said it is often a pattern in law enforcement to wait until new technologies are already in use, and in some cases until mistakes have been made, before looking to implement guidelines or policies.
“You would prefer to do things another way, where you understand the risks and develop guardrails around the risks,” Kinsey said. “Even if they’re not studying best practices, there are still some bottom-up results that can help. We can start with transparency.”
Kinsey said that while federal law enforcement considers how the technology should or should not be used, it could adopt policies recently implemented in Utah and California, where police reports or communications written using AI must be labeled as such.
Careful use of new tools
The photographs the officer used to generate the narrative also raised accuracy concerns for some experts.
Well-known tech companies like Axon have started offering AI components with their body cameras to help write incident reports. The AI programs marketed to police operate on closed systems and largely limit themselves to using audio from body cameras to generate narratives because, the companies have said, programs that attempt to interpret visuals are not yet reliable enough to use.
Andrew Guthrie Ferguson, a law professor at George Washington University Law School, said, “There are many different ways to describe color, facial expression or any visual component. You can ask any AI expert and they will tell you that prompts produce very different results across different AI applications, and it gets more complicated with a visual component.”
“There’s also the question of professionalism. Do we agree with police officers using predictive analytics?” he added. “It’s about what the model thinks should have happened, but maybe that isn’t what actually happened. You don’t want that to be what has to go to court to justify an officer’s actions.”