Your employees are using AI, whether you like it or not - but are they using AI securely?

AI has become inescapable. What was once a niche research field has now become fully integrated into the personal and professional lives of the everyday person. For business leaders, the question therefore evolves from ‘are my employees using AI?’ (they are, whether you like it or not), to ‘are my employees using AI securely?’
Lead Cybersecurity Researcher at CultureAI.
No one can deny the impact that AI has had on the workplace in a relatively short amount of time. In fact, recent research revealed that 83% of UK employees are now regularly using GenAI at work to carry out basic and process-driven tasks like search and summarization.
Evidently, employees, and in turn employers, are seeing the potential of AI for productivity gains: an appealing benefit for time- and resource-strapped teams.
Much of this change is being driven from the bottom up. Research suggests that 78% of AI users are already bringing their own AI tools to work. However, these tools are often used within the workplace without company knowledge or oversight.
These undocumented AI tools - or shadow AI - operating on company networks or using company data can pose significant security risks.
What’s clear is that employees will continue to use AI, without waiting for their employers to keep up with permissions, clear guidelines and security measures. Fortunately, there are ways that employers can quickly implement effective AI governance and usage controls without stifling innovation.
Your employees are likely to be using AI in one way or another. The risk is not knowing where or how and with what tools. Research suggests that nearly half (47%) of GenAI users are still accessing tools via personal, unmanaged accounts, either exclusively or alongside company-approved tools.
Unlike traditional software, generative AI relies on data input. With this comes the risk of inputted prompts including confidential information, personal data, IP or even source code. Without visibility, employers face a growing blind spot.
With the adoption of any new technology comes risk. With workplace AI adoption, these include:
Employees are increasingly experimenting with new AI tools at work, often because they are free, faster or more convenient than approved alternatives. While this can improve efficiency, the use of unauthorized AI apps (shadow AI) significantly expands the attack surface and leaves security teams without sufficient visibility or oversight.
What’s concerning is the data being entered into these tools. Recent research suggests that 93% of employees are putting company data into unauthorized AI tools, with nearly a third of those admitting to sharing confidential client information.
This means that intellectual property, regulated information and personal data are potentially being processed by unknown third parties. What those parties do with the information remains unknown.
Worryingly, many traditional monitoring tools struggle to detect prompt submissions containing sensitive data, particularly when AI tools are accessed via unmanaged accounts or personal devices.
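Basic prompt screening is more tractable than it may sound. A minimal sketch of the idea, in Python, matches outbound prompt text against patterns for common sensitive-data types before it leaves the organization. The pattern names and regexes here are illustrative assumptions only; a real data loss prevention policy would be far broader and tuned to the organization's own data.

```python
import re

# Illustrative patterns only -- a real DLP policy would cover many more
# data types and be tuned to the organization (names are hypothetical).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

# Example: a prompt that leaks an email address and a credential.
hits = scan_prompt(
    "Summarise this: contact jo@corp.example, key sk-abcdef1234567890XYZ"
)
# hits -> ["email", "api_key"]
```

Regex matching alone produces false negatives (and positives), which is precisely why unmanaged accounts and personal devices, where no such check can run, are the blind spot the article describes.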
Employees routinely paste sensitive information into AI tools, often without fully understanding the risks. As early as 2023, Samsung engineers accidentally exposed proprietary code and confidential meeting notes by submitting them to ChatGPT, placing the data beyond the company’s control.
Since then, similar incidents have surfaced across industries, revealing how much sensitive information is quietly flowing into third-party AI systems. Once submitted, organizations often have limited ability to fully remove or control how that data is retained or used.
Accounts and prompt breaches
Leaked AI chats or compromised prompts can provide threat actors with access to a wide range of sensitive information and, in some cases, enable further account compromise if broader cybersecurity controls are weak.
Given the volume and sensitivity of data often entered into AI tools, a single compromised AI account can lead to immediate exposure of private company information, including credentials, intellectual property and internal systems.
Compliance and governance gaps
As AI adoption accelerates, regulators are increasingly scrutinizing how organizations use AI, particularly where it intersects with data protection. Submitting personally identifiable information (PII) to uncontrolled or external AI services can breach regulations such as GDPR, HIPAA and sector-specific privacy requirements.
In heavily regulated industries like finance, defense and healthcare, even a single unsanctioned use of an external AI tool can create significant legal and compliance exposure.
Whilst the explosion of AI in the workplace may appear rapid, it mirrors the same ‘bottom up’ adoption of technologies that came before it.
The difference between AI and the uptake of other ‘new’ technologies is that AI consumes corporate data in every interaction, on a massive scale, which magnifies the risk of leaks, breaches and compliance issues. The solution isn’t blocking AI altogether.
Employees will find workarounds, like using their personal phones to input company data, especially if they’ve already found these tools valuable for efficiency. Security teams need to build security and IT strategies around AI usage controls, shadow AI discovery and comprehensive usage analytics.
These pillars are essential enablers of responsible innovation.
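Shadow AI discovery, the second of those pillars, can start with something as simple as matching egress logs against a list of known GenAI domains. The sketch below assumes a simplified `"user domain"` proxy-log format and a hypothetical domain list; real discovery would draw on DNS, proxy and SaaS telemetry.

```python
# Hypothetical starter list -- a real inventory would track hundreds of
# GenAI services and be updated continuously.
KNOWN_GENAI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def find_shadow_ai(log_lines):
    """Yield (user, domain) pairs where a known GenAI domain was visited.

    Assumes simplified 'user domain' proxy-log lines for illustration.
    """
    for line in log_lines:
        user, _, domain = line.strip().partition(" ")
        if domain in KNOWN_GENAI_DOMAINS:
            yield user, domain

logs = [
    "alice chat.openai.com",
    "bob intranet.corp.example",
    "carol claude.ai",
]
shadow_users = sorted(find_shadow_ai(logs))
# shadow_users -> [("alice", "chat.openai.com"), ("carol", "claude.ai")]
```

Surfacing who is using which tool is the prerequisite for the usage controls and analytics described above: you cannot govern traffic you cannot see.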
It may take a while for regulations to catch up, but it’s essential that organizations don’t wait. Ultimately, your employees are using AI, whether you like it or not. Acting now can help you understand and control the ‘how’ and ‘where’, making AI usage more secure without stifling innovation.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro