Most writing on artificial intelligence falls into one of two camps.
The first sells the future. Productivity gains, scientific breakthroughs, the world remade by intelligence on tap. It is mostly correct in its enthusiasm and mostly wrong in its omissions.
The second sells the alarm. Job displacement, autonomous weapons, mass disinformation, a thousand variations on a theme. It is mostly correct in its concerns and mostly wrong in its proportions.
Neither, on its own, is useful to a board director who has to vote on an AI strategy next quarter. Or to an executive whose chief information security officer just walked into the room with a list of unsanctioned AI tools the company’s employees are already using. Or to anyone, really, who needs to make a decision rather than a prediction.
The framework
Every piece I write here applies the same discipline: every promise and every risk gets named in the same piece of writing. Not as a rhetorical flourish — “of course, there are risks too” — but as a structural commitment. If I cannot describe the genuine benefit of a capability and the genuine risk it creates, I do not yet understand it well enough to write about it.
This is not balance for its own sake. The technology is genuinely double-edged. A model that can summarise a thousand-page contract in seconds is the same model that can hallucinate a citation that sounds authoritative and is entirely fabricated. A recommendation engine that surfaces useful content is the same engine that, optimised purely for engagement, can radicalise a teenager in months. The promise and the risk are not separate stories. They are the same story, told from two angles.
The audience
I write for the people who have to decide. Board directors who are now expected, often without preparation, to oversee AI risk. Senior executives whose enterprises are deploying AI faster than their governance can absorb. The technology leaders, lawyers, and risk officers who advise them.
The pieces here will be longer than a LinkedIn post and shorter than a consulting deliverable — built to give a busy reader the framing they need before their next governance review. Some will be timed to a regulatory deadline. Some will be standalone analyses. A few will be slower-burning essays on what the AI era means for the discipline of risk itself.
What is coming
The first proper essay arrives in mid-May, ahead of the EU AI Act becoming fully operational. It is a piece on the integration paradox — the gap between what artificial intelligence can do in a vendor demonstration and what it actually does once installed inside a real enterprise, alongside real legacy systems, real people, and real workflows. Two of the major consultancies have arrived at the same diagnosis from different starting points, and the implications for board oversight are larger than most boards have yet appreciated.
After that, a sequence on AI literacy in the boardroom — what directors actually need to know, the frameworks worth their attention, and the questions they should be asking. The EU AI Act’s literacy provisions do not name boards explicitly, but no regulator with eyes on a director’s signature on an AI strategy is going to overlook that gap.
If any of this is useful to you or your board, I would like to hear about it. The details are on the contact page. I read every message.
— F.L.