Canadian researchers build tool to remove anti-deepfake watermarks from AI content

OTTAWA – Researchers at the University of Waterloo have built a tool that can quickly remove the watermarks identifying artificially generated content – and they say it proves that the global effort to combat deepfakes is most likely on the wrong track.

Andre Kassis, a PhD candidate in computer science, said academia and industry have focused on watermarking as the best way to fight deepfakes and have essentially abandoned all other approaches.

In commitments made at the White House in 2023, prominent AI companies including OpenAI, Meta, Google and Amazon promised to apply mechanisms such as watermarking to clearly identify AI-generated content.

AI companies' systems embed a watermark – a hidden signature or pattern that is invisible to a person but can be identified by another system, Kassis explained.
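To make the idea concrete, here is a deliberately simplified sketch of an invisible watermark. Production AI watermarks are far more sophisticated and robust than this, and the function names and signature pattern below are hypothetical; this least-significant-bit (LSB) scheme only illustrates the core concept of a signature a person cannot see but a detector can find.

```python
# Illustrative sketch only: a toy LSB watermark, NOT any company's actual scheme.

def embed_watermark(pixels, signature):
    """Hide a repeating bit signature in the least significant bit of each pixel."""
    return [(p & ~1) | signature[i % len(signature)] for i, p in enumerate(pixels)]

def detect_watermark(pixels, signature, threshold=0.95):
    """Report whether the pixels' LSBs match the expected signature pattern."""
    matches = sum(
        1 for i, p in enumerate(pixels) if (p & 1) == signature[i % len(signature)]
    )
    return matches / len(pixels) >= threshold

signature = [1, 0, 1, 1, 0, 0, 1, 0]            # the hidden pattern
image = [137, 200, 94, 61, 255, 0, 18, 77] * 4  # stand-in for pixel data

marked = embed_watermark(image, signature)
print(detect_watermark(marked, signature))  # True: signature found
print(detect_watermark(image, signature))   # False: unmarked image fails the check
```

Because embedding changes each pixel value by at most 1, the watermarked image looks identical to the eye, yet the detector recovers the signature reliably – which is also why an attack that subtly perturbs those hidden patterns can strip the mark without visibly altering the image.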

He said the research suggests watermarking is most likely not a viable shield against the dangers posed by AI-generated content.

"It tells us that the risk of deepfakes is something that we don't even have the means to deal with at this point," he said.

The tool developed at the University of Waterloo, called UnMarker, follows other academic research on removing watermarks. That includes work at the University of Maryland, a collaboration between the University of California and Carnegie Mellon, and work at ETH Zurich.

Kassis said his research goes further than those earlier efforts and is "the first to expose a systemic vulnerability that undermines the very foundation of watermarking as a defence against deepfakes."

In a follow-up email statement, he said that "what sets UnMarker apart is that it requires no knowledge of the watermarking algorithm, no access to internal parameters, and no interaction with the detector."


When tested, the tool worked more than 50 per cent of the time on different AI models, a university press release said.

Kassis said AI systems can be misused to create deepfakes, spread misinformation and commit fraud. But there is no reliable way to identify AI-generated content as AI-generated, he said.

As AI tools became too advanced for AI detectors to work well, attention turned to watermarking.

The idea is that if we "cannot retroactively detect or figure out what's real and what's not," it may be possible to "inject some kind of hidden signature or some kind of hidden pattern" when the content is created, Kassis said.

The European Union's AI Act requires providers of systems that put out large quantities of synthetic content to implement techniques and methods, such as watermarks, to make AI-generated or manipulated content identifiable.

In Canada, a voluntary code of conduct launched by the federal government in 2023 requires those behind AI systems to develop and implement "a reliable and freely available method to detect content generated by the system, with a near-term focus on audio-visual content (e.g., watermarking)."

Kassis said UnMarker can remove a watermark without knowing anything about the system that produced it, or anything about the watermark itself.

"We can simply apply this tool and within two minutes max, it will output an image that is visually identical to the watermarked image" but free of the watermark, which can then be distributed, he said.

"The irony is that billions are being poured into this technology and then, just with two buttons that you press, you can just get an image that is watermark-free."


Kassis said that while the leading AI players are racing to deploy watermarking technology, more effort should go into finding alternative solutions.

Watermarking has been "declared as the de facto standard" for protecting against these systems going forward, he said.

"I think it's a call for everyone to take a step back and then try to think about this problem again."

This report by The Canadian Press was first published July 23, 2025.

Anja Karadeglija, The Canadian Press
