Companies are embracing AI faster than any other emerging technology. Many started with experimentation but quickly realized that AI can transform how work gets done, especially in cybersecurity.
I’ve witnessed this shift unfold rapidly, and I now see CISOs and security leaders struggling with a critical communication challenge: briefing their boards clearly and confidently on defensive AI use cases and the risks, regulations, and governance challenges that come with them.
Luckily, CISOs and security leaders have no shortage of resources at their disposal. For example, the National Association of Corporate Directors (NACD) and the Internet Security Alliance (ISA) recently published a Director’s Handbook on AI in Cybersecurity. We read through it and outlined the report’s key points so you can prepare for board conversations and drive AI oversight. Keep reading for more insights. 👇
📥 Download our CISO-to-Board briefing deck: A presentation tool based on NACD-ISA guidance to help security leaders educate and align with their boards on AI.
Not All AI Is the Same
Boards are used to navigating complex risk landscapes, whether legal, financial, or reputational, but AI is uncharted territory for everyone. I’m not stating anything new when I say that AI is simultaneously a productivity accelerator, a new attack surface, and a tool that adversaries can leverage. The NACD-ISA report reiterates that AI is both friend and foe. Crucially, CISOs must help the board understand that there are different types of AI, each with its own level of risk.
- Traditional rule-based AI is human-programmed and typically deployed in narrow use cases such as spam filters or access controls. These systems are relatively easy to audit, low risk, and more predictable, but their capabilities are significantly limited.
- Machine learning models are trained on large datasets and are often used to identify behavioral patterns and anomalies, assign threat scores, and more. While more sophisticated than traditional rule-based AI, they are also more susceptible to bias and heavily dependent on the quality of their training data (the sketch after this list contrasts these first two categories).
- Generative AI and large language models (LLMs) introduce the most disruptive shift. These systems use massive datasets and self-supervised training to create new code, text, synthetic data, and more. While powerful, LLMs can hallucinate and produce harmful outputs, and attackers can use them to generate phishing content or malware more easily. They are also often opaque, which makes their decisions harder to audit.
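To make the auditability contrast concrete, here’s a minimal sketch comparing a rule-based check with an ML anomaly detector. The keyword list, the login features, and the IsolationForest model are our illustrative assumptions, not examples from the NACD-ISA report.

```python
# Minimal sketch contrasting a rule-based check with an ML anomaly detector.
# Keywords, features, and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Rule-based AI: human-written logic, narrow scope, trivially auditable.
BLOCKED_KEYWORDS = {"wire transfer", "urgent invoice", "verify your password"}

def rule_based_spam_check(subject: str) -> bool:
    """Flag a message whose subject contains any blocked keyword."""
    return any(keyword in subject.lower() for keyword in BLOCKED_KEYWORDS)

# Machine learning: learns "normal" behavior from data, so its verdicts are
# more capable but depend on the training data and are harder to explain.
rng = np.random.default_rng(0)
# Hypothetical login telemetry: [hour of login, MB downloaded per session]
normal_logins = rng.normal(loc=[10.0, 2.0], scale=[2.0, 1.0], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_logins)

print(rule_based_spam_check("URGENT INVOICE attached"))  # True
print(detector.predict([[3.0, 45.0]]))                   # [-1] = anomalous
```

The rule-based check is trivially explainable; the ML model’s verdict depends entirely on what “normal” looked like in its training data. That is exactly the auditability trade-off boards should understand.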
CISOs need to ensure boards understand that AI investments must be paired with strategic implementation plans and cross-functional leadership coordination to deploy, monitor, and govern these solutions effectively.
➡️ Have you invested in AI for defensive purposes yet? Check out this blog post covering the top questions CISOs should ask to assess their SOC maturity and AI readiness.
AI Regulations Are Here
If the cyber risks aren’t enough to grab the board’s attention, the regulatory risks should be. Frameworks like the EU AI Act and the NIST AI Risk Management Framework have started to set expectations for transparency, risk-tiering, and accountability, and industry-specific rules are following close behind. “Wait and see” is no longer an option for boards; AI adoption and advancement are moving too fast.
Boards will be expected to demonstrate that they have assessed AI risks, implemented appropriate testing, and established processes to disclose failures, vulnerabilities, or breaches. To prepare for these AI disclosure mandates, boards and CISOs should collaborate to produce detailed reports on AI use, document controls, and outline how leadership will notify regulators and stakeholders of issues. A sketch of what such an AI-use register might capture follows below.
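Here is a minimal sketch of the fields an AI-use register might track to support disclosure readiness. The schema and field names are our illustration, not a format prescribed by the NACD-ISA handbook.

```python
# A minimal sketch of an AI-use register entry for disclosure readiness.
# The schema and field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AIUseRecord:
    system_name: str              # e.g., "phishing-triage-assistant"
    ai_category: str              # "rule-based" | "machine learning" | "generative AI"
    business_owner: str           # accountable executive or committee
    vendor: Optional[str] = None  # third-party provider, if any
    data_sources: list[str] = field(default_factory=list)
    controls: list[str] = field(default_factory=list)           # testing, monitoring, access
    incident_contacts: list[str] = field(default_factory=list)  # who notifies regulators

# Hypothetical entry a CISO might bring to the audit or risk committee.
record = AIUseRecord(
    system_name="phishing-triage-assistant",
    ai_category="generative AI",
    business_owner="Risk Committee",
    vendor="ExampleAI Inc.",  # hypothetical vendor
    data_sources=["email metadata"],
    controls=["quarterly red-team testing", "output logging"],
    incident_contacts=["CISO", "General Counsel"],
)
print(record)
```

Even a register this simple answers the questions regulators will ask first: what AI is in use, who owns it, how it is controlled, and who makes the call when something goes wrong.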
⭐ The NACD-ISA report recommends a seven-step AI governance program, which we cover more in-depth in our CISO-to-Board briefing deck.
What CISOs and Boards Should Do Now
Now is the time for CISOs to spur board action. Here are immediate initiatives the NACD and ISA recommend to drive AI programs:
- Assign Ownership of AI Risk and Compliance: Companies should assign responsibility for AI risk through an audit committee, risk committee, or a newly established technology and innovation committee. This group should stay current on AI developments, oversee implementation risks, and integrate AI governance into enterprise-wide strategies.
- Integrate AI Into Enterprise Risk Management and Apply Risk Quantification: AI-related risks are just as serious as, if not more serious than, other cybersecurity, compliance, and operational risks. That means accounting for AI risks in existing enterprise risk management (ERM) frameworks, such as COSO ERM, is critical. Coupling ERM frameworks with a risk quantification methodology such as FAIR, which uses a probabilistic approach to analyze the impact and likelihood of potential threats, can provide a straightforward way to assess cyber risk (see the sketch after this list).
- Ensure Vendor Due Diligence: A lot of AI exposure will come from third-party platforms, tools, or vendors. CISOs must partner with their boards and legal teams to implement rigorous due diligence processes for these external technologies to understand which LLMs or generative AI models are in use and their potential security implications.
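To make FAIR-style quantification concrete, here’s a minimal Monte Carlo sketch. The event-frequency and loss-magnitude parameters are invented for illustration; a real FAIR analysis would calibrate them against the organization’s own data.

```python
# Minimal sketch of FAIR-style risk quantification via Monte Carlo simulation.
# Frequency and loss-magnitude parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
N_YEARS = 10_000  # simulated years

# Loss event frequency: expect ~0.5 qualifying AI-related incidents per year.
events_per_year = rng.poisson(lam=0.5, size=N_YEARS)

# Loss magnitude per event: lognormal, i.e., many modest losses and rare
# large ones (median ~$160K per event under these invented parameters).
annual_loss = np.array([
    rng.lognormal(mean=12.0, sigma=1.0, size=n).sum() for n in events_per_year
])

print(f"Mean annual loss:     ${annual_loss.mean():,.0f}")
print(f"95th-percentile loss: ${np.percentile(annual_loss, 95):,.0f}")
```

Outputs like a 95th-percentile annual loss give boards a dollar-denominated view of AI risk, which lands far better in the boardroom than a raw vulnerability count.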
There are probably few people more excited about AI’s promise than I am. But the organizations that effectively measure and govern AI will get the most out of it, and CISOs who proactively engage their boards in AI oversight will help shape their organization’s long-term success.
📥 Download our board briefing slide deck: A resource you can use to bring NACD-ISA-aligned AI governance guidance to your next board meeting.
