DENVER — Artificial intelligence is helping decide which Americans get job interviews, apartments and even health care, but the first major proposal to rein in bias in AI decision-making is facing resistance from all sides.

Lawmakers in states including Colorado, Connecticut and Texas gathered Thursday to discuss their proposals, as civil rights groups and industry engaged in a tug-of-war over the centerpiece of the legislation.

Organizations such as unions and consumer rights groups are demanding more transparency from companies and more legal recourse for citizens to sue over AI discrimination. The industry has offered tentative support but is digging in against those accountability measures.

Caught in the middle, lawmakers from both parties — including those from Alaska, Georgia and Virginia — have been working together on AI legislation amid inaction from the federal government. The purpose of the press conference was to highlight the work between states and stakeholders, emphasizing the importance of collaboration and compromise in the first steps of regulation.

The lawmakers include Sen. James Maroney, D-Conn., Senate Majority Leader Robert Rodriguez, D-Colo., and Sen. Shelley Hughes, R-Alaska.

“Right now, we don’t have confidence that the federal government will pass anything quickly. We do see the need for regulation,” Maroney said. “It is important that industry advocates, government and academia work together to develop the best possible regulation and legislation.”

Lawmakers see these bills as a possible first step that can be built upon in future sessions.

Although more than 400 AI-related bills are being debated in statehouses across the country this year, most target an industry or just a technology, such as deepfakes used in elections or the production of pornographic images.


The largest bill introduced by this group of lawmakers offers a broad oversight framework, aimed specifically at one of the technology's most perverse dilemmas: AI discrimination. Examples include an AI system that failed to accurately assess Black medical patients and one that downgraded women's résumés when filtering job applications.

Still, the Equal Employment Opportunity Commission estimates that up to 83% of employers use algorithms to help with hiring.

If nothing is done, these AI systems will almost always be biased, explains Suresh Venkatasubramanian, a professor of computer and data science at Brown University who is teaching a course on mitigating bias in the design of these algorithms.

“You have to do something explicit to not be biased from the outset,” he said.

The proposals, mostly in Colorado and Connecticut, are complex, but at their core they require companies to conduct “impact assessments” of certain artificial intelligence systems. The reports would include descriptions of how AI figures into decision-making, the data collected and an analysis of discrimination risks, along with an explanation of the company's safeguards.

The dispute centers on who can see these reports. Greater access to information about AI systems, such as impact assessments, means greater public accountability and safety. But companies worry it also raises the risk of lawsuits and the exposure of trade secrets.

Under bills in Colorado, Connecticut and California, companies would not have to routinely submit impact assessments to the government. Instead, they would have a duty to disclose to the attorney general if discrimination is found; neither the government nor independent organizations would test these AI systems for bias.

Unions and academics worry that overreliance on companies’ self-reporting could jeopardize the public or government’s ability to detect AI discrimination before it causes harm.

“It’s already difficult when you have these big companies with billions of dollars,” said Kjersten Forseth, who represents the Colorado AFL-CIO, a federation of labor unions that opposed the Colorado bill. “Essentially, you’re giving them an extra tool to push down on workers or consumers.”

Tech companies say greater transparency will expose trade secrets in what is now an ultra-competitive market. David Edmonson of TechNet, a bipartisan network of technology CEOs and senior executives that lobbies on AI bills, said in a statement that the group worked with lawmakers to “ensure any legislation addresses the risks of AI while allowing innovation to flourish.”

The California Chamber of Commerce opposed the state’s bill, fearing the impact assessment could become public in a lawsuit.

Another controversial part of the bills is who can bring lawsuits under the legislation, which is generally limited to state attorneys general and other public attorneys, not citizens.

Workday, a financial and human resources software company, backed the proposal after a provision in the California bill that would have allowed citizens to sue was eliminated. Workday argues that civil lawsuits from citizens would be decided by judges, many of whom are not technical experts, and could lead to an inconsistent approach to regulation.

“We can’t stop artificial intelligence from being integrated into our daily lives, so obviously the government has to step in at some point, but it also makes sense that the industry itself wants a good environment for development,” said Chandler Morse, vice president of public policy and corporate affairs at Workday.


Sorelle Friedler, a professor at Haverford College who studies bias in artificial intelligence, pushed back.

“American society often defends our rights through lawsuits,” Friedler said.

Connecticut Sen. Maroney said some articles have claimed that he and Rep. Giovanni Capriglione, R-Texas, were pushing industry-crafted bills, despite all the money the industry has spent lobbying against the legislation.

Maroney noted that an industry group, the Consumer Technology Association, has taken out ads and set up a website urging lawmakers to defeat the legislation.

“I believe we are on the right path. We are working with people from industry, academia and civil society,” he said.

“Everyone wants to feel safe and we are developing regulations to enable safe and trustworthy AI,” he added.

_____

Associated Press writers Trân Nguyễn in Sacramento, California, Becky Bohrer in Anchorage, Alaska, and Susan Haigh in Connecticut contributed to this report.

Bedayn is a corps member for the Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues.