AI Is Now a Fiduciary Issue
- Piers Linney

- Feb 14

Corporations exist because society grants them legal privileges, and in return they are expected to create economic and social value while managing the risks their activities generate - there is a social contract. Boards exist to govern capital, risk and long-term value creation on behalf of stakeholders (from shareholders to the environment), ensuring the organisation is steered competently through material opportunities and threats with appropriate oversight, controls and accountability, not merely to “support management” in the abstract.
Artificial intelligence now sits at the centre of that responsibility.
Yet in many organisations, AI is still framed as a project: a pilot programme, an innovation stream, a digital initiative, something to be reviewed when the diary allows. It gets discussed occasionally, often by the wrong people, at the wrong altitude, on the wrong cadence. It remains too easy for boards to nod, ask for a slide deck, and move on to the next item.
That approach is already obsolete and introduces risk.
AI implementation and optimisation are not a finite programme. They are a permanent operating discipline that will underpin operational efficiency, productivity, revenue growth, competitiveness, risk management and regulatory compliance. They will also shape who you can hire, who you can retain, and how quickly you can adapt when the ground shifts again.
When the intelligence layer of the organisation is evolving exponentially, and the technological frontier is accelerating even faster, scheduling a board discussion on AI strategy six months out is pointless; by the time the calendar invites are sent, and certainly by the time the board papers are circulated, the assumptions underpinning the discussion will already be out of date.
AI is not a “technology topic”. It is a fiduciary issue.
The Pace Mismatch Boards Need to Confront
Most boards operate on fixed cycles.
Four to eight board meetings a year. Annual strategy refreshes. Committees structured around familiar categories like audit, risk, remuneration and nominations. That cadence evolved for a world in which change was meaningful but comparatively slow, and in which competitive advantage came from execution against a plan.
AI does not respect this outmoded rhythm.
AI capability improves on a compounding curve. Meaningful capability shifts happen inside quarters. It will soon be months. Then weeks. Then days, hours and minutes. Costs fall, reliability improves, integration becomes easier, and what was “not ready for prime time” becomes usable much faster than most governance calendars can absorb.
Conversations are increasingly anchored in assumptions that are already outdated. Not because anyone is incompetent, but because the governance cycle lags the rate of change. The greater risk is that the board may not even recognise that its own briefing papers are already out of date. In an exponential environment, governance lag becomes strategic lag that compounds risk and forfeits competitive advantage.
This widening cadence mismatch is not a minor procedural problem. It is now a board-level risk in its own right, because it creates a structural gap between how fast the organisation’s environment is changing and how slowly the organisation is making, reviewing and correcting core decisions.
Capacity × Intelligence Is The New Performance Function
For most of modern corporate history, performance was driven by capital and labour, directed by management. Productivity was a function of human capability, amplified by the machines people built, programmed and operated. Strategy defined direction and resource allocation; execution translated it into outcomes.
That equation is shifting.
A more accurate framing for what is happening is: Capacity × Intelligence.
Capacity is not merely financial capital. It is the organisation’s aggregate productive power: people, expertise, data, systems, physical assets and the architecture that connects and deploys them. Increasingly, it includes access to compute and direct access to scalable intelligence, and in some sectors, robotic execution is the next frontier.
Intelligence is no longer limited to human cognition. It is becoming deployable, scalable and increasingly inexpensive. It can be embedded across workflows and decisions, from pricing and forecasting to compliance and operations. As Dario Amodei, CEO of Anthropic, described in his essay, Machines of Loving Grace, we are effectively building “nations of geniuses in data centres.” When cognition can be provisioned at scale, intelligence shifts from being scarce and human-bound to being infrastructural. The competitive advantage moves from hiring more smart people to orchestrating scalable intelligence better than your competitors.
When intelligence scales independently of headcount, productivity diverges fast. Two organisations can have similar workforce sizes but radically different cognitive throughput. One will be able to analyse more signals, make decisions faster, run more experiments, detect problems earlier and allocate resources more precisely.
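To make the framing concrete, here is a deliberately simple sketch in Python. The headcount and multiplier figures are illustrative assumptions rather than benchmarks; the point is only that equal capacity scaled by unequal intelligence yields very unequal throughput.

```python
# Toy model of Capacity x Intelligence. All figures are illustrative
# assumptions, not benchmarks.

def cognitive_throughput(headcount: int, intelligence_multiplier: float) -> float:
    """Capacity (proxied here by headcount) scaled by deployed intelligence."""
    return headcount * intelligence_multiplier

# Two firms with identical workforces but different levels of embedded AI.
ai_native = cognitive_throughput(headcount=1_000, intelligence_multiplier=3.0)
laggard = cognitive_throughput(headcount=1_000, intelligence_multiplier=1.1)

print(f"Same headcount, ~{ai_native / laggard:.1f}x the cognitive throughput")
```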
This is why AI cannot remain “a tool for the CTO or CISO to distribute”. It is now a driver of enterprise-wide advantage and enterprise-wide failure modes. Boards do not get to ignore the primary variables of performance.
Commodified Cognition and the Collapse of Scarcity
Something more fundamental is happening.
High-quality cognition has historically been scarce. Specialised knowledge work, the kind that powers strategy, modelling, engineering design, legal analysis, complex forecasting and sophisticated commercial judgement, has been expensive because it depended on rare individuals with long training curves. They have historically sat at the top of the pay scale.
That scarcity is collapsing.
As AI systems improve and become more accessible, specialised reasoning becomes increasingly replicable by anyone. The economics of expertise change. This does not mean expertise disappears, but its scarcity premium compresses. The role of experts shifts from producing analysis to setting direction, validating outputs and making higher-order trade-offs.
In practical terms, specialised knowledge work becomes less of a structural moat. If cognition is commoditised, advantage does not accrue to the organisation with the largest number of specialists. It accrues to the organisation that can orchestrate scalable intelligence across the enterprise: embedded in the right processes, governed with the right controls, measured properly, and tied directly to outcomes.
Many boards are still thinking too narrowly, or simply tacking “AI” onto the technology agenda item. Treating AI as another line item could prove fatal, especially in the information economy.
They see AI as productivity software rather than a structural shift in the economics of human capital. When cognition becomes cheap and scalable, the organisation’s value creation model changes. So does its risk profile.
And this is only the early phase of that shift.
AGI, ASI, And Why “Timelines” Are The Wrong Governance Question
Executive conversations about Artificial General Intelligence (AGI) often degrade into debates about likelihood and timing, which is a comfortable way to avoid uncomfortable action.
The governance question is not “will AGI arrive in 2026 or 2029”. The governance question is whether materially more capable general systems are plausible within one or two business planning cycles, and what structural readiness looks like if they are.

If generalised machine intelligence — systems that are faster, cheaper and increasingly capable across a broad range of cognitive tasks — emerges within that horizon, domain-level problem solving accelerates. Fields such as mathematics, physics, engineering, drug discovery, robotics design and materials science will see deeper AI-driven optimisation and discovery. Whether you label the system “AGI” is less important than what it does: broader reasoning, improved planning, deeper abstraction and the ability to generalise across tasks.
Beyond that lies Artificial Superintelligence (ASI), a qualitatively different proposition: not merely a better tool, but a form of intelligence that could exceed human reasoning across most domains. The most grounded way to frame ASI at board level is not as science fiction, but as the emergence of a new form of cognition — effectively, a new species.
If a large company knew that a new intelligent species had arrived, or was plausibly emerging within its strategic planning horizon, capable of reasoning, analysing and acting at scale, it would not relegate the issue to a single agenda item at the next annual strategy retreat. It would convene immediately, educate itself, establish oversight and treat the development as materially relevant to the organisation’s future.
I explored this idea in more depth in a previous newsletter on the arrival of this “new species”.
My analogy is deliberately stark because it exposes the core governance failure: too many boards are applying, at best, linear oversight habits to a potentially non-linear shift in the nature of intelligence itself.
This is why organisations that are not prepared to implement AI will fall behind those that are. The gap is not only about today’s tools. It is about structural readiness. AI-native organisations are building the muscle, data, culture, controls and operating cadence required to deploy capability upgrades rapidly. When more powerful systems arrive, they integrate faster. Everyone else is trying to change the engine while the race is already under way.
Catching up in an exponential environment is very difficult: the gap continues to widen at an accelerating pace while you are debating how to close it.
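A back-of-the-envelope sketch makes the point concrete. The monthly improvement rates below are illustrative assumptions, not forecasts; the behaviour holds for any pair of unequal compounding rates.

```python
# Illustrative compounding only: the monthly improvement rates
# are assumptions chosen to make the dynamics visible.

mover, laggard = 1.0, 1.0  # indexed capability, both firms start equal

for month in range(1, 25):
    mover *= 1.05      # first mover compounds at an assumed 5% per month
    laggard *= 1.01    # cautious second mover at an assumed 1% per month
    if month % 6 == 0:
        print(f"Month {month:2d}: gap = {mover - laggard:.2f}, "
              f"ratio = {mover / laggard:.2f}x")
```

At these assumed rates the gap more than doubles between month six and month twelve, and by the end of the second year the first mover is roughly two and a half times ahead.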
Robotics Extends Intelligence Into Physical Labour
Too many leaders still treat AI as purely digital.
That is a category error.
Robotics is advancing rapidly, and leading manufacturers have stated ambitions to build robots in the millions, potentially billions over time. As AI systems improve, they do not just generate text or analysis; they increasingly coordinate physical execution.
A humanoid robot costing $20,000, financed over five years at commercial rates, could equate to roughly $15–$20 per day before maintenance and energy. As manufacturing scales and robots increasingly manufacture components and even other robots, those costs will fall further. Over time, the unit economics may begin to resemble durable consumer appliances rather than specialist industrial equipment.
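For readers who want to test that arithmetic, the sketch below amortises an assumed $20,000 purchase price over 60 monthly payments using the standard loan-payment formula, across a range of assumed commercial interest rates.

```python
# Amortised daily cost of a humanoid robot. The $20,000 price, five-year
# term and interest rates are illustrative assumptions, not vendor figures.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortising-loan payment: P * r / (1 - (1 + r)^-n)."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

for rate in (0.10, 0.15, 0.20, 0.25):
    daily = monthly_payment(20_000, rate, 5) * 12 / 365
    print(f"{rate:.0%} APR -> ~${daily:.2f} per day")
```

At a 10 per cent APR this comes out near $14 a day; at 25 per cent, near $19. The $15–$20 range therefore holds across plausible commercial financing costs, before maintenance and energy.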
When that happens, physical capacity scales in the same way access to intelligence has scaled.
Digital cognition will orchestrate logistics, inspection, warehousing, manufacturing, maintenance and field service. Physical throughput will grow without linear growth in headcount.
For boards, this changes capital allocation, labour strategy, operational risk and resilience planning. AI is no longer just embedded in software workflows; it is now embodied, able to perceive and navigate our world through a myriad of sensors that collect data and report back to digital managers. AI becomes an active decision layer interacting directly with the physical world.
If you are not governing AI today as a core operating capability, you will be even less prepared when it governs physical capacity at scale.
AI Augmentation and the Competition for Talent
Another material risk sits in plain sight: talent.
The best talent is not primarily worried about AI taking their job. They are focused on how to use AI to improve how they do their job and to remain relevant in the labour market.
High performers think in terms of leverage, not replacement.
They do not want to spend their careers reformatting reports, trawling emails, producing routine analysis or sitting inside administrative friction that AI can remove. They want tools that make them faster, sharper and more effective. They want the mundane automated and the meaningful elevated.
AI-first organisations provide that leverage. They enable people to operate at a higher level of abstraction and impact. They increase individual productivity and expand cognitive reach.
If you do not provide an AI-augmented environment, the labour market will notice. The strongest candidates will choose organisations where their output is multiplied, not constrained. Over time, that creates divergence: AI-native firms attract higher-calibre talent, that talent designs better systems, those systems further increase productivity, and the cycle reinforces itself.
Boards already understand that leadership quality determines long-term performance. AI augmentation now sits alongside succession and capability planning as a structural talent variable.
The deployment of digital workers is not an HR and IT initiative. It is a governance decision about how competitive your organisation intends to be.
The Irreversibility Problem
Boards must treat AI as core to the discharge of their fiduciary duties, not because of hype, but because of path dependence.
This is not about adopting tools. It is about redefining how the organisation understands capacity, cognitive labour, physical labour, implementation and optimisation. AI is an ongoing capability that compounds daily. Strategic choices made today about operating models, systems architecture, data strategy and talent determine whether that compounding works for you or against you.
In a linear environment, delay is inconvenient. In an exponential one, it compounds.
If one organisation restructures around scalable intelligence while another waits, productivity, insight, speed and data advantages diverge quarter by quarter. By the time the slower organisation moves, it is not just adopting new tools; it is attempting to re-architect how work is conceived and governed while competitors are already compounding advantage.
Waiting or positioning yourself as a cautious second mover is a risky strategy.
Boards are not just supervising results. They are setting trajectory, and in an exponential environment, trajectory may be hard to reverse, especially if the competition also have velocity.
What Governance Needs To Look Like Now
If AI is now a fiduciary issue, governance has to evolve accordingly. This is not a call for bureaucracy. It is a call for clarity, competence and a cadence aligned with reality.
Last year, I spoke to a group of FTSE 350 Chairs about the AI-powered future. What struck me was not disagreement, but distribution. There was a clear linear relationship between understanding and reaction. The deeper the technical understanding, the more strategic and pragmatic the response. The shallower the understanding, the more the reaction drifted toward fear or dismissal. The leaders of well-known technology and consultancy businesses just nodded at me in agreement. Others, who have never used AI, dismissed even its current capability. That alone tells you something important: literacy shapes governance quality.
Most boards need to be explicit about where AI oversight sits. Call it a dedicated “AI Committee” or expand an existing mandate. The name does not matter. What matters is that AI stops being a vague technology topic and becomes owned, governed and accountable.
Whatever committee you create, it cannot be a board-only conversation. AI governance must run in a matrix: vertically from board to frontline, and horizontally across functions. This is about optimising how everyone works, not just senior decision-making. If you do not understand how AI is changing day-to-day execution, you are not really governing it.
Effective oversight requires three structural shifts.
Accountability must be explicit: Who owns AI-driven value creation? Who owns AI risk? If “everyone” owns it, no one does.
Board literacy must be substantive: Directors do not need to become engineers, but they do need hands-on experience of the basics and sufficient understanding to interrogate capability, limitations, model risk, controls, incentives, data governance and regulatory exposure. Education cannot be episodic; it must be continuous.
Cadence must match the pace of technological advancement: Oversight cannot sit on a quarterly rhythm while the underlying technology evolves monthly. Boards need mechanisms for more frequent updates, faster escalation and quicker strategic adjustment where required. Not more meetings for the sake of it, but governance aligned with exponential change.
The Competitive Reality Most Executives Misread
A final question cuts through everything: do you have a clear plan to create an AI-first, AI-enabled, AI-augmented organisation?
And be honest about who you think your competitors are. Is it the familiar names you have known for years, the companies you see at conferences, the incumbents you have competed with across your career (or even worked for)?
In knowledge work and the information economy, the competitors that will hurt you most may not look like that at all. They may be building businesses similar to yours with 25 percent of the workforce, not because they are reckless, but because they have designed their operating model around intelligence rather than headcount. They automate the mundane, scale cognition and equip every team member with tools that multiply output.
That structural difference compounds. The longer you wait, the more the gap hardens into systems, data, talent and culture. Catching up on an exponential curve is extremely difficult.
AI is not a project. It is a permanent operating capability that will define competitiveness, risk, compliance and talent outcomes. Boards that treat it as a delegated technology agenda item are not matching oversight to materiality. Boards that redesign governance, literacy and cadence around scalable intelligence are doing what boards are meant to do.
Governing the future is now part of the job.
So ask yourself: when is your next board or senior leadership session where AI is treated not as a technology update, but as a core strategic variable?
Thank you for reading.


