- GenAI SaaS usage tripled and prompt volume grew sixfold year over year
- Nearly half of users rely on unauthorized “shadow AI,” creating serious visibility gaps.
- Incidents of sensitive data being sent to AI apps have doubled, and most insider threats involve personal cloud applications.
Generative artificial intelligence (GenAI) can be useful for improving productivity, but it comes with some serious security and compliance challenges. That is the finding of a new report from Netskope, which states that as the use of GenAI in offices increases, so does the number of policy violations.
In its Cloud and Threat Report: 2026, released earlier this week, Netskope said GenAI Software-as-a-Service (SaaS) adoption among enterprises is “growing rapidly,” with the number of people using tools such as ChatGPT and Gemini tripling over the course of the year.
Users are also spending significantly more time with these tools: the number of queries people send to GenAI apps has increased sixfold in the last 12 months, from roughly 3,000 per month a year ago to more than 18,000 per month today.
Shadow AI
Moreover, the top 25% of organizations send more than 70,000 prompts per month, and the top 1% send more than 1.4 million.
However, many of these tools and use cases have never been approved by the relevant departments or managers. Nearly half (47%) of GenAI users rely on personal AI applications, so-called “shadow AI,” leaving the organization unable to see what data is being shared or which tools are processing it.
As a result, the number of incidents where users send sensitive data to AI applications has doubled over the past year.
The average organization now experiences a staggering 223 incidents per month. Netskope also said that personal applications pose a “significant insider threat risk” as 60% of insider threat incidents involve personal cloud application instances.
Regulated data, intellectual property, source code, and credentials are often sent to personal application instances in violation of organizational policies.
“Organizations will struggle to maintain data governance as sensitive information flows freely into unapproved AI ecosystems, leading to increased accidental data exposure and compliance risks,” the report concludes.
“Adversaries, on the other hand, will exploit this fragmented environment by using AI to conduct hyper-efficient reconnaissance and develop highly specialized attacks targeting proprietary models and training data.”
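The report doesn't prescribe specific controls, but one way to picture the missing layer of data governance is a basic check on outbound prompts. The following Python sketch is purely illustrative: the patterns and the `check_prompt` helper are hypothetical assumptions, not Netskope tooling or a recommendation from the report. It shows how a simple DLP-style filter might flag credentials, source code, or regulated identifiers before a prompt reaches an unapproved AI app.

```python
import re

# Illustrative sketch only: hypothetical patterns for the kinds of content
# the report says leak into personal AI apps (credentials, source code,
# regulated data). Real DLP products use far more sophisticated detection.
PATTERNS = {
    "credential": re.compile(r"(?i)\b(api[_-]?key|secret|password)\b\s*[:=]\s*\S+"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "source_code": re.compile(r"\b(def |class |#include |import )"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # example of regulated data
}

def check_prompt(text: str) -> list[str]:
    """Return the categories of sensitive content found in a prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Debug this: import os; PASSWORD = 'hunter2'"
    hits = check_prompt(prompt)
    if hits:
        print(f"Blocked: prompt matches sensitive categories {hits}")
```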