Over the past couple of years, many organizations have found comfort in a single slide or paragraph that reads something like this: “We use artificial intelligence (AI) responsibly.” That line might have been enough to pass informal supplier due diligence in 2023, but it won't survive the next major round of tenders.
Enterprise buyers, especially in government, defense and critical national infrastructure (CNI), are now active users of AI themselves. They understand the language of risk. They make the connection between artificial intelligence, data security, operational resilience and supply chain vulnerability. Their procurement departments will no longer ask if you use AI. They will ask how you manage it.
The AI question is changing
In practical terms, the questions in requests for proposals (RFPs) and invitations to tender (ITTs) are already changing.
Instead of a bland “Do you use AI in your services?” you can expect wording like:
“Please describe your controls over generative artificial intelligence, including data sovereignty, human oversight, model accountability, and compliance with relevant data protection, security, and intellectual property obligations.”
Behind that wording lie a number of very specific questions:
Where does customer or citizen data go when you use tools like ChatGPT, Claude, or other hosted models?
In which jurisdictions is that data stored and processed?
How are AI outputs reviewed by humans before they influence a critical decision, piece of advice, or safety-related activity?
Who owns and can reuse prompts and outputs, and how is confidential or sensitive material protected along the way?
The generic boilerplate answers none of these points. If anything, it advertises the absence of any structured governance.
The uncomfortable reality for many service providers is that, marketing language aside, most professional services organizations use AI in a very familiar pattern.
Individual employees have adopted tools to speed up drafting, analysis, or coding. Teams share prompts and tips informally. Some groups have written local guidelines about what is acceptable. Some policies have been updated to mention AI.
What's often missing is evidence
Very few organizations can say with certainty which customer engagements used AI assistance, what categories of data went into the prompts, which models or vendors were involved, where those vendors processed and stored the information, or how review and approval of AI outputs were recorded.
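To make that concrete, here is a deliberately minimal sketch of the kind of per-engagement record that would answer those questions. The field names and values are hypothetical illustrations, not a prescribed schema or tool.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIUsageRecord:
    """Hypothetical evidence record for one piece of AI-assisted client work."""
    engagement_id: str           # which customer engagement the AI assistance supported
    data_categories: list[str]   # categories of data that went into the prompts
    model_vendor: str            # which model or platform was involved
    processing_location: str     # where that vendor processed and stored the information
    reviewed_by: str             # who reviewed and approved the AI output
    review_date: date            # when that review was recorded

example = AIUsageRecord(
    engagement_id="ENG-0042",
    data_categories=["client-confidential", "no personal data"],
    model_vendor="hosted LLM (as listed in the supplier register)",
    processing_location="UK/EU region",
    reviewed_by="engagement lead",
    review_date=date(2025, 6, 3),
)
```

Whether records like this live in a spreadsheet, a ticketing system or a GRC platform matters far less than the fact that they exist and can be produced on request.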
From a governance, risk and compliance (GRC) perspective, that evidence gap cuts across data protection, information security, records management, professional liability and, in some sectors, safety and mission assurance. It will also follow you into every future tender, as buyers increasingly ask about past AI-related incidents, near misses and lessons learned.
Why this matters so much for government, defense and CNI
In central and local government, policing and justice, AI increasingly influences decisions that directly affect citizens, whether that is triaging cases, prioritizing audits, supporting investigations, or generating policy analysis.
When AI is involved in these processes, government agencies must be able to demonstrate legitimacy, transparency, fairness and accountability. That means understanding where AI is used, how it is controlled, and how its outputs can be challenged or overridden. Suppliers in this space are expected to demonstrate the same discipline.
In the defense and national security supply chain, the stakes are even higher. AI is already being used for logistics optimization, predictive maintenance, intelligence fusion, training environments and decision support. The concerns here go beyond privacy or intellectual property. They are about reliability under stress, resistance to manipulation, and ensuring that sensitive operational data doesn't leak into systems outside sovereign or approved control.
CNI operators face a similar challenge. Many are exploring AI to detect anomalies in operational technology (OT) environments, forecast demand, and automate response. A failure or misfire here can quickly become a service outage, a safety issue or an environmental incident. Regulators will expect operators and their suppliers to treat AI as an operational risk, not just another new tool.
Across all of these sectors, organizations that cannot explain their AI governance will quietly slide down the evaluation matrix.
Turning AI governance into a business advantage
The good news is that this picture can be changed. Governance of AI, if done well, does not mean slowing down or inhibiting innovation. It's about creating enough structure around the use of AI so that you can explain it, defend it, and scale it.
A practical starting point is an AI procurement readiness assessment. At Advent IM, we put it very simply: can you answer the questions your next big client is going to ask?
This includes mapping where AI is used across your services, identifying which workflows touch customer or citizen data, understanding which third-party models or platforms are involved, and documenting how people review, approve, or override AI outputs. It also means looking at how AI fits into your existing arrangements for incident response, data breach handling and risk registers.
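That mapping does not need to start as anything sophisticated. As a purely illustrative sketch, in which the workflow names, fields and check are hypothetical rather than a recommended format, even a simple register gives you something to assess:

```python
# Hypothetical AI use register: one entry per workflow in which AI plays a part.
ai_use_register = {
    "bid drafting support": {
        "touches_customer_data": True,
        "third_party_platform": "hosted LLM, vendor A",
        "human_oversight": "bid manager reviews and edits all generated text before submission",
    },
    "internal code assistance": {
        "touches_customer_data": False,
        "third_party_platform": "IDE coding assistant, vendor B",
        "human_oversight": "peer review of all committed code",
    },
}

# Simple readiness check: any workflow that touches customer or citizen data
# must document who can approve or override the AI output.
for workflow, entry in ai_use_register.items():
    if entry["touches_customer_data"] and not entry["human_oversight"]:
        print(f"Gap: '{workflow}' has no documented human oversight")
```

The value is not in the code; it is that every row can be evidenced when a buyer asks about it.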
From that mapping, you can develop a concise, fact-based narrative that slots neatly into RFP and ITT responses, supported by policies, process descriptions, and sample logs. Instead of hand-waving about responsible AI, you can present a clear account of how AI is managed within your broader security and GRC framework.
ISO 42001 as a framework for AI governance
ISO/IEC 42001, the new standard for AI management systems, provides exactly that kind of structure. It sets out a framework for managing AI across its lifecycle, from design and acquisition through operation and monitoring to decommissioning.
For organizations that already run an information security management system (ISMS), a quality management system or a privacy information management system, 42001 should not feel foreign. It can be integrated with existing ISO/IEC 27001, ISO 9001 and ISO/IEC 27701 management systems. Roles such as senior information risk owner (SIRO), information asset owner (IAO), data protection officer, service managers and system owners simply take on clearer responsibilities for AI-related activities.
Alignment with 42001 also signals to customers, regulators and insurers that AI is not being handled informally. It shows that there are defined roles, documented processes, risk assessments, monitoring and continuous improvement around AI. Over time, that alignment can be extended to formal certification for organizations where it makes commercial sense.
Bringing people, processes and assurance together
Policies and frameworks are only part of the picture. The real test is whether people in the organization understand what is allowed, what is not allowed, and when they need to ask for help.
That is why training in AI security and governance is critical. Employees must understand how to handle prompts containing personal or sensitive data, how to recognize when AI outputs may be biased or incomplete, and how to catch and correct their own missteps. Managers need to know how to approve use cases, sign off on risk assessments, and respond to AI-related incidents.
Put all this together and you get something very simple but very powerful. When your next RFP or ITT includes a page of AI questions, you won't have to scramble for one-off answers. You will be able to describe an AI governance system that is aligned with recognized standards, integrated with existing security and GRC practices, and supported by training and evidence.
In a crowded services market, this can be the difference between being seen as an interesting supplier and being trusted with valuable and sensitive work.






