Fredrik Lindstrom

Board Governance

What Every Board Needs to Know About AI

A practical guide for directors who want to lead, not just comply.



Most boards are having the wrong conversation about artificial intelligence.

They are asking whether the company should use AI. That question was settled two years ago. AI is already inside your organization. It is filtering job applicants. It is scoring credit risk. It is drafting marketing copy, summarizing legal documents, and flagging anomalies in your financial data. The question is no longer whether AI is being used. The question is whether anyone is governing how it is being used.

I have spent over twenty-five years in cybersecurity and enterprise technology, most recently leading global services sales at a major security company. I have seen every major technology wave arrive in the boardroom: cloud, mobile, IoT, zero trust. AI is different. Not because the technology is more complex — though it is — but because the gap between what boards think AI does and what AI actually does is wider than anything I have seen in my career.

This guide covers the six AI risks that matter most to boards right now, and the practical steps directors can take to govern them effectively.


Risk 1: Reputational Risk — The Board’s Top Concern


Reputational risk has emerged as the single most frequently cited AI concern among major corporations. Thirty-eight percent of S&P 500 companies now disclose AI-related reputational risk in their regulatory filings. That number was negligible three years ago.

The reason is straightforward: AI failures are public, immediate, and viral. When a customer service chatbot gives harmful advice, when an AI hiring tool is shown to discriminate, or when AI-generated content contains fabricated information attributed to your brand, the reputational damage moves at the speed of social media. Traditional crisis response playbooks were not designed for this velocity.

For boards, the governance question is clear: does your organization have protocols for AI incidents that account for the speed and visibility of AI failures? Is there a rapid response process specifically for AI-related reputational events? And critically, are you testing your AI deployments for the kinds of failures that generate headlines before they happen?

AI failures are public, immediate, and viral. Traditional crisis response playbooks were not designed for this velocity.


Risk 2: Agentic AI — When AI Acts on Your Behalf


The next wave of AI is not chatbots. It is agents — AI systems that take actions autonomously on behalf of your organization. They book meetings, process invoices, respond to customer inquiries, execute trades, and manage workflows without waiting for human approval at every step.

This is where the governance challenge escalates dramatically. A chatbot that gives a wrong answer is embarrassing. An AI agent that takes a wrong action — approving a fraudulent transaction, sending a confidential document to the wrong recipient, or making a procurement decision based on hallucinated data — creates immediate financial and legal exposure.

The fundamental board question with agentic AI is accountability. When an AI agent makes a decision that damages the company, who is responsible? The vendor who built the agent? The team that deployed it? The executive who approved the deployment? Current corporate governance frameworks were not designed for autonomous digital actors, and boards need to establish clear accountability structures before agentic AI scales across the organization.

Directors should be asking: what AI agents are currently deployed or planned? What authority do they have to take actions? What are the escalation thresholds that require human approval? And what is the audit trail when something goes wrong?
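
To make "escalation thresholds" and "audit trail" concrete, here is a minimal sketch in Python of the kind of guardrail management should be able to describe. Every name, field, and limit here is a hypothetical illustration, not a reference to any specific product or framework.

```python
# Minimal sketch of an escalation-threshold guard for an AI agent.
# All names, fields, and limits are hypothetical illustrations.
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class AgentAction:
    agent_id: str               # which agent proposed the action
    action_type: str            # e.g. "approve_invoice", "send_document"
    amount_usd: float           # financial exposure; 0 if none
    touches_confidential: bool  # does the action handle sensitive data?


# Hypothetical policy: above this limit, a human must approve.
MAX_AUTONOMOUS_USD = 10_000.00


def requires_human_approval(action: AgentAction) -> bool:
    """Escalation threshold: autonomy ends where exposure begins."""
    return action.amount_usd > MAX_AUTONOMOUS_USD or action.touches_confidential


def record_audit_entry(action: AgentAction, decided_by: str) -> None:
    """Append-only audit trail: who or what approved which action, and when."""
    entry = {"timestamp": time.time(), "decided_by": decided_by, **asdict(action)}
    with open("agent_audit.log", "a") as log:
        log.write(json.dumps(entry) + "\n")


def execute(action: AgentAction) -> None:
    if requires_human_approval(action):
        record_audit_entry(action, decided_by="PENDING_HUMAN_REVIEW")
        # Route to a human approval queue instead of executing.
    else:
        record_audit_entry(action, decided_by="autonomous_policy_v1")
        # Proceed with the action.
```

The details will differ in every organization. The board-level point is that "what authority does the agent have" should resolve to explicit, auditable rules like these, not to an undocumented default.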


Risk 3: Shadow AI — The Governance Gap You Cannot See


Shadow AI is the natural successor to shadow IT, and it is spreading faster than its predecessor ever did. When employees sign up for a free ChatGPT account and paste confidential customer data into a prompt, that is shadow AI. When a marketing team uses an AI image generator without checking the licensing terms, that is shadow AI. When a developer integrates an AI coding assistant into the pipeline without security review, that is shadow AI.

The difference between shadow IT and shadow AI is the nature of the risk. Shadow IT created data security concerns. Shadow AI creates data security concerns plus intellectual property exposure, bias liability, regulatory compliance gaps, and potential confidentiality breaches — all in a single prompt.

Most companies cannot produce a comprehensive inventory of their AI deployments. The board should expect management to provide a regularly updated AI inventory that includes every AI system in use, what decisions it influences, what data it accesses, and who is accountable for its output. You cannot govern what you cannot see.

Shadow IT created data security concerns. Shadow AI creates all of those plus intellectual property exposure, bias liability, and regulatory compliance gaps — in a single prompt.
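
What might one entry in that inventory look like? Here is a minimal sketch in Python; the field names are hypothetical, but they map directly to the four questions above: what system, what decisions, what data, and who is accountable.

```python
# Minimal sketch of a single AI inventory entry. Field names are
# hypothetical; the substance is the four questions from the text.
from dataclasses import dataclass


@dataclass
class AIInventoryEntry:
    system_name: str                 # the AI system in use
    vendor_or_internal: str          # third-party product or built in-house
    decisions_influenced: list[str]  # what decisions it shapes
    data_accessed: list[str]         # what data it can read
    accountable_owner: str           # a named person, not a team alias
    last_reviewed: str               # date of last governance review


# Example entry (all values illustrative):
entry = AIInventoryEntry(
    system_name="resume-screening-assistant",
    vendor_or_internal="third-party",
    decisions_influenced=["candidate shortlisting"],
    data_accessed=["applicant PII", "HR records"],
    accountable_owner="VP, Talent Acquisition",
    last_reviewed="2025-01-15",
)
```

Whether this lives in a spreadsheet or a governance platform matters less than that every field has an answer for every system.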


Risk 4: Fiduciary Liability — The Personal Stakes for Directors


AI governance is no longer just good practice. Legal experts increasingly characterize it as a fiduciary obligation. Under Delaware’s Caremark oversight standards — and Delaware is the state of incorporation for most major US corporations — directors have a duty to ensure the company has adequate information systems and controls in place to identify and address material risks. AI has become a material risk.

What this means in practice: if your company deploys AI in ways that create significant harm — discriminatory hiring decisions, biased credit scoring, privacy violations, or financial losses from hallucinated AI outputs — and the board had no oversight structure in place, individual directors face personal liability exposure.

This is not hypothetical. The EqualAI AI Governance Playbook for Boards, authored by WilmerHale partners, explicitly states that boards that prioritize governance structures and AI literacy will be better positioned to meet their oversight obligations. The inverse is also true: boards that ignore AI governance are creating a demonstrable gap in their fiduciary duties.

The actionable step is straightforward: ensure AI governance has a formal home at the board level. Whether it sits with the audit committee, the risk committee, or a dedicated technology committee matters less than the fact that someone is explicitly responsible for asking the hard questions consistently.


Risk 5: Regulatory Fragmentation — A Moving Target Across Jurisdictions


The regulatory landscape for AI is fragmenting rapidly, and for companies that operate across borders, this is a governance headache that will only intensify.

The European Union’s AI Act is the most comprehensive AI regulation globally, classifying AI systems into risk tiers with escalating compliance requirements. The United States has taken a more sector-specific approach, with executive orders, NIST frameworks, and state-level legislation creating a patchwork of obligations. China has enacted its own AI regulations focused on algorithmic transparency and content generation. And dozens of other jurisdictions are developing their own frameworks.

For boards, the challenge is that compliance in one jurisdiction does not guarantee compliance in another. An AI system that passes muster under US guidelines may violate EU requirements around explainability or data processing. A hiring algorithm that meets one state’s standards may fail another’s.

Directors should ensure management has a clear view of the regulatory landscape in every jurisdiction where the company operates, a compliance roadmap that anticipates upcoming regulations rather than reacting to them, and legal counsel with specific AI regulatory expertise. The companies that build compliance infrastructure now will move faster when enforcement accelerates — and enforcement is accelerating.


Risk 6: Board-Level AI Literacy — The Foundation for Everything Else


Governing every risk in this guide — reputational, agentic, shadow AI, fiduciary, regulatory — requires one thing above all: comprehension. And the uncomfortable reality is that most boards lack sufficient AI literacy to ask informed questions.

Directors do not need to become data scientists. But they need a working understanding of how large language models generate text (prediction, not retrieval), how AI systems can produce confident but fabricated outputs (hallucinations), how training data biases become output biases, and the difference between AI tools that advise and AI agents that act.
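
The "prediction, not retrieval" point is worth making concrete, because it explains hallucinations. The toy Python sketch below invents a tiny next-token distribution purely for illustration; real models work over vastly larger vocabularies, but the mechanism is the same: sample what is probable, with no built-in step that checks what is true.

```python
# Toy illustration of "prediction, not retrieval". A language model
# samples the next token from a learned probability distribution;
# nothing in this loop consults a source of truth. The distribution
# below is invented purely for illustration.
import random

# Hypothetical next-token probabilities after the prompt
# "The company's 2023 revenue was":
next_token_probs = {
    "$4.2 billion": 0.35,   # plausible-sounding, possibly fabricated
    "$3.8 billion": 0.30,   # equally plausible, equally unverified
    "not disclosed": 0.20,
    "up sharply": 0.15,
}


def sample_next_token(probs: dict[str, float]) -> str:
    """Weighted random choice: probable, not necessarily true."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]


print(sample_next_token(next_token_probs))
# Whatever comes out was chosen because it was likely, not because it
# was verified. Confident fabrication is a structural property of the
# mechanism, not a rare bug.
```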

Without this baseline, board oversight becomes performative. Directors nod along during management briefings, approve AI budgets without understanding what they are funding, and miss the warning signs of deployments that are creating risk.

AI literacy at the board level is not a one-time briefing. The technology is evolving quarterly. Models that could barely summarize text two years ago are now writing code, reasoning through multi-step problems, and coordinating with other AI systems autonomously. A board education program that was current in 2024 is already outdated. Ongoing education is the only path to meaningful oversight.

Without AI literacy at the board level, oversight becomes performative. Directors nod along during management briefings and miss the warning signs.


The Promise and the Risk

The Promise

AI, governed well, is the most significant productivity multiplier most organizations have ever had access to. It can compress weeks of analysis into hours. It can give a three-person team the output capacity of thirty. It can surface patterns in data that no human analyst would find. Boards that understand AI and build governance structures that enable responsible deployment will position their companies to move faster, serve customers better, and attract the talent that wants to work with cutting-edge tools.

The Risk

AI, governed poorly or not at all, is a liability accelerator. Biased AI decisions create legal exposure. Hallucinated outputs create reputational risk. Unsecured deployments create cybersecurity vulnerabilities. Autonomous agents create accountability gaps. And regulatory penalties are increasing as governments worldwide pass AI-specific legislation.

The risk is not that AI will fail. The risk is that AI will work exactly as designed — and nobody on the board understood the design well enough to anticipate the consequences.


Where to Start

If your board has not yet established formal AI oversight, here is a practical starting point.

  1. Conduct an AI inventory. Ask management to map every AI system in use across the organization, including third-party tools and shadow AI. This is the foundation everything else builds on.

  2. Assign oversight responsibility. Decide which board committee owns AI governance and ensure the committee has the expertise or advisory support to be effective.

  3. Invest in AI literacy. Schedule ongoing AI education for the full board. Not vendor pitches. Real education about how the technology works, where it fails, and what responsible deployment looks like.

  4. Establish an AI risk framework. Align with an established framework like NIST and ensure management reports on AI-specific risks with the same rigor applied to financial and operational risk.

  5. Map your regulatory exposure. Identify every jurisdiction where AI regulations apply to your operations and build a compliance roadmap that anticipates upcoming requirements.

  6. Set the tone from the top. Make clear that AI governance is not about slowing down. It is about moving fast with confidence. The board’s expectation should be responsible innovation, not cautious avoidance.


Final Thought

Every major technology shift has caught boards underprepared. Cloud computing. Mobile. Social media. In each case, the boards that adapted earliest governed most effectively and created the most value.

AI is following the same pattern, but at a faster pace and with higher stakes. The boards that invest in understanding AI today — not just as a technology, but as a governance responsibility — will be the ones that lead their organizations through this transition rather than reacting to it.

Every promise. Every risk. The truth.