An artificial intelligence chatbot can help polish your presentation moments before an important board meeting. But those quick AI fixes can become a liability for the higher-ups you're trying to impress.
More employees are using AI tools to complete tasks and boost their productivity, but much of the time, those tools haven't been approved by their companies. The use of unauthorized AI platforms and tools is known as shadow AI, and it creates the risk that workers accidentally disclose sensitive internal data on these platforms, leaving the company vulnerable to cyberattacks or intellectual property theft.
Companies are often slow to adopt the latest technology, which can push employees to look for third-party solutions such as AI assistants, said Karim Sadek, a partner in the consulting practice at KPMG in Canada.
This so-called shadow AI often creeps in when users are looking for convenience, speed and familiarity, Sadek said.
But these unauthorized tools are becoming a headache for Canadian businesses, large and small.
"Companies are struggling to ensure that their intellectual property is maintained and that they're not giving out sensitive information about their business practices, about their customers, about their user bases," said Robert Falzon, head of engineering at cybersecurity firm Check Point Software Technologies Ltd.
Many AI users don't realize that whenever they interact with a chatbot, their conversations and data are stored and used to train those tools, Falzon said.
For example, an employee might share confidential financial statements or proprietary research with an unvetted chatbot to generate an infographic, and those sales figures are now available to people outside the company. An outsider researching the same subject on the chatbot could later stumble on that data, unaware it was never meant to be publicly accessible.
"There is a chance that the AI can dig back into its resources and training and find that piece of information about your company that talks about the results ... and simply provide it to that person," Falzon said.
And hackers are using the same tools as everyone else, Falzon warned.
A July report by IBM and U.S.-based cybersecurity research organization Ponemon Institute found that 20 per cent of the companies surveyed said they had suffered a data breach due to security incidents involving shadow AI. That's seven percentage points higher than those that experienced security incidents involving sanctioned AI tools.
The report said the average cost of a Canadian data breach rose 10.4 per cent to $6.32 million between March 2024 and February 2025.
KPMG's Sadek said companies need to establish governance around the use of AI at work.
"It's not necessarily the technology that fails you; it's the lack of governance," he said.
That could mean establishing an AI committee with people from departments such as legal and marketing to vet tools and encourage adoption with the right guardrails in place, Sadek said.
He said those guardrails should be grounded in an AI framework that aligns with the company's ethics and helps answer difficult questions about security, data integrity and bias.
One example is adopting a zero-trust mindset, Falzon said. That means not trusting any tool or app that isn't explicitly allowed by the company.
The zero-trust approach reduces risk by limiting what a tool will or won't allow an employee to submit to a chatbot, he explained. For example, Falzon said employees at Check Point aren't allowed to input research and development data, and if they try, the system blocks the submission and informs the user of the risk.
"It's going to help ensure that customers are educated and understand what risks they're taking, but at the end of it, make sure those risks are mitigated by security technology," Falzon said.
Raising awareness and fostering conversations between employers and workers about AI tools is important, experts say.
Sadek suggested holding hands-on training sessions and educating employees about the risks of using unsanctioned AI tools.
That kind of education reduces misuse by users and employees, he said. "They feel accountable, especially if they're educated and there are awareness sessions about the risks."
To keep data contained within internal systems, some companies have started deploying their own chatbots.
Sadek said that's a smart way to curb the use of unauthorized AI tools.
"This will help (companies) ensure more security and privacy of their company's data, and will ensure that they're built within the guardrails that already exist within their organization," he said.
Still, internal tools can't completely eliminate cybersecurity risks.
Researcher Ali Dehghantanha said it took him just 47 minutes to break into a Fortune 500 company's internal chatbot and access sensitive client information during a cybersecurity audit. The company had hired him to evaluate its internal chatbot's security and test whether the system could be manipulated into disclosing sensitive data.
"Due to its nature, it had a large amount of access to the company's internal documents, as well as access to the communications that various partners were having," said Dehghantanha, a professor and Canada Research Chair in cybersecurity and threat intelligence at the University of Guelph.
He said large banks, law firms and supply chain companies are increasingly relying on internal chatbots for advice, email responses and internal communication, but many lack proper security and testing.
He said companies need to set aside a budget for security when adopting AI technology or deploying their own internal tools.
"For AI, like for any technology, always consider the total cost of ownership," Dehghantanha said. "A part of that cost of ownership is how to secure and maintain it.
"For AI at this time, that cost is significant," he said.
Companies can't stop employees from using AI, Falzon said, so employers need to provide the tools their workers need.
At the same time, he said, "they want to ensure that things like data leakage aren't occurring and that they're not creating more risk than the benefit they provide."