As we approach the end of 2025, there are two inconvenient truths about artificial intelligence that every CISO must take to heart.
Truth #1: Every employee who can is using generative AI tools for their work. Even if your company doesn't provide them with an account, even if your policy prohibits it, even if the employee has to pay out of pocket.
Truth #2: Every employee using generative AI will provide (or likely already has provided) internal and confidential company information to the AI.
While you may object to my use of the word “every,” the consensus data is moving quickly in that direction. According to Microsoft, by 2024 three-quarters of the world's knowledge workers were already using generative AI at work, and 78% of them were bringing their own AI tools to do so.
Meanwhile, almost a third of all AI users admit to sharing confidential material with public chatbots; among them, 14% admit they have voluntarily disclosed company trade secrets. The biggest threat from AI comes from the general widening of the “access and trust gap.”
In the case of AI, this gap is the difference between approved business applications that are trusted to access company data, and a growing number of untrusted, unmanaged applications that access that data without the knowledge of IT or security teams.
Employees as unmanaged devices
Essentially, employees are using unmonitored devices that could be running any number of unknown AI applications, and each of these applications could pose a significant risk to sensitive corporate data.
Given these facts, let's look at two fictional companies and their use of AI: we'll call them Company A and Company B.
In both Company A and Company B, business development representatives take screenshots of Salesforce and feed them to AI to craft the perfect outbound email for their next prospect.
Executives use it to expedite due diligence on acquisition targets currently under negotiation. Sales reps stream audio and video from sales calls into AI applications to receive personalized coaching and objection handling. Product managers upload Excel sheets of recent product usage data in hopes of finding key insights that everyone else has missed.
For Company A, the above scenario is a glowing report to the board of directors on how the company's internal AI initiatives are progressing. For Company B, it is a shocking list of policy violations, some with serious privacy and legal implications.
The difference? Company A has already developed and implemented its AI adoption plan and governance model, while Company B is still debating what it should do about AI.
AI governance: from “should” to “how” in six questions
Simply put, organizations cannot afford to wait any longer to get a handle on AI governance. IBM's 2025 Cost of a Data Breach Report highlights the cost of failing to properly manage and secure AI: 97% of organizations affected by an AI-related breach did not have AI access controls in place.
So the challenge now is to develop a roadmap for AI that promotes productive use and limits reckless behavior. To get an idea of what secure adoption might look like in practice, I start each workshop with six questions:
1. What business use cases deserve powerful AI? Think about specific AI use cases such as “compile a bulletin on zero-day vulnerabilities” or “recap an earnings call.” Focus on results, not just using AI for its own sake.
2. What vetted tools will we provide? Look for established AI tools with baseline security controls, such as enterprise tiers that don't use company data to train their models.
3. Where do we stand on personal AI accounts? Formalize the rules for using personal AI on business laptops, personal devices, and contractor devices.
4. How do we protect customer data and comply with all contractual provisions while maintaining the benefits of AI? Compare model inputs with privacy obligations and regional regulations.
5. How will we detect rogue AI web apps, native apps, and browser plugins? Hunt for shadow AI using endpoint security agents, CASB logs, and tools that provide a detailed inventory of extensions and plugins in browsers and code editors (a rough sketch of what such an inventory could look like follows this list).
6. How will we teach the policy before mistakes happen? Once you have a policy, actively train employees on it; guardrails are meaningless if no one sees them until the exit interview.
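To make question 5 concrete, here is a minimal sketch of what a browser-extension inventory could look like, assuming you can already run scripts on managed endpoints through your existing fleet tooling. The Chrome profile paths and the keyword list are illustrative assumptions, not a description of any vendor's product.

```python
# A minimal sketch, not a production tool: walk local Chrome profiles,
# read each extension's manifest, and flag names that look AI-related.
# The profile paths and AI_KEYWORDS list are illustrative assumptions.
import json
import pathlib
import platform

AI_KEYWORDS = {"gpt", "copilot", "claude", "gemini", "chatbot", "ai assistant"}


def chrome_extension_dirs() -> list[pathlib.Path]:
    """Return every <profile>/Extensions/<id>/<version> directory we can find."""
    home = pathlib.Path.home()
    roots = {
        "Darwin": home / "Library/Application Support/Google/Chrome",
        "Windows": home / "AppData/Local/Google/Chrome/User Data",
        "Linux": home / ".config/google-chrome",
    }
    base = roots.get(platform.system())
    if base is None or not base.exists():
        return []
    return list(base.glob("*/Extensions/*/*"))


def flag_ai_extensions() -> list[str]:
    """List installed extensions whose display name matches an AI keyword."""
    flagged = []
    for ext_dir in chrome_extension_dirs():
        manifest = ext_dir / "manifest.json"
        if not manifest.is_file():
            continue
        try:
            name = json.loads(manifest.read_text(encoding="utf-8")).get("name", "")
        except (OSError, json.JSONDecodeError):
            continue
        if any(keyword in name.lower() for keyword in AI_KEYWORDS):
            # ext_dir.parent.name is the Chrome extension ID.
            flagged.append(f"{name} ({ext_dir.parent.name})")
    return flagged


if __name__ == "__main__":
    for hit in flag_ai_extensions():
        print("possible AI extension:", hit)
```

Results like this would normally feed into whatever endpoint or MDM inventory you already run; the point is simply that question 5 is answerable with data you can already collect.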
Your answers to each question will vary depending on your risk appetite, but consistency across legal, product, HR, and security departments is non-negotiable.
Essentially, closing the gap between access and trust requires teams to understand and enable the use of trusted AI applications within their company, so that employees do not gravitate towards untrusted and uncontrolled application use.
Governance that learns on the job
Once you've launched your policy, treat it like any other part of your governance stack: measure, communicate, refine. Part of the rollout plan is celebrating wins and making sure they get visibility.
As your understanding of the use of AI within your organization grows, you should expect to revisit this plan and continually refine it with the same stakeholders.
Final Thought for the Boardroom
Think back to the mid-2000s, when SaaS made its way into enterprises through expense reporting and project tracking tools. IT departments tried to blacklist unverified domains, finance fought back against credit card proliferation, and lawyers questioned whether customer data belonged on “someone else’s computer.” Eventually, we accepted that the workplace had changed and SaaS had become an integral part of modern business.
Generative AI is following the same trajectory, but five times faster. Leaders who remember the SaaS learning curve will recognize the pattern: govern proactively, measure continuously, and turn yesterday's gray-market experiments into tomorrow's competitive advantage.