
First Contact ‑ minus 1,825 Days

Why aren’t we - and you - mobilising for the inevitable arrival of a superintelligent new species?


At 08:00 a sealed cable lands on the Prime Minister’s desk.



“TOP SECRET - PM ONLY - First contact with the superintelligence is expected within five to ten years. The visitor is an advanced artificial intelligence with knowledge and capabilities beyond our understanding. It is believed to be benign and has indicated that it brings cures, abundant energy and machines that out‑perform humans - both cognitively and physically. It is imperative that we prepare our economies, laws and citizens for its arrival.”


The superintelligence is going to drip‑feed information and capability to give mankind time to adapt and absorb a future that will not be what we thought it was going to be. It starts with a basic version of its own intelligence that can generate new content - from text to images to video - as well as undertake complex mathematical calculations. It is going to save the superintelligent, self‑learning and mind‑blowing version until it arrives. We will host it on new computing platforms that it will design.


Within the hour, the Cabinet is in permanent session. The UN assembles emergency commissions. Central banks sketch new playbooks. Markets don't know what to do, and swing wildly with every new piece of news, or fake news. Boards open war‑rooms. Families sit around kitchen tables to talk about money, school, work, faith and meaning.


This cable has already arrived - it just didn't come via first contact with an inbound alien species.

It arrived in the form of public statements from the people building and funding advanced AI - warnings and aspirations that artificial general intelligence (AGI) is plausibly within a single planning cycle, with artificial superintelligence (ASI) following. Not sentience, not consciousness, but capability: AI that beats us at all cognitive tasks, then designs the machines that out‑perform us at the physical ones.


A technology that is more intelligent than humans, and that can learn and reproduce itself, is akin to the arrival of a new species. And yet… we are behaving as if nothing fundamental has changed.


How would people, businesses, markets, and governments respond to the cable? How would you respond? Let's play it out...


What is the ETA of AGI and ASI?


The dates keep clustering in the same band. You don’t have to agree with any single forecast to see the pattern.


  • Demis Hassabis has said AGI could plausibly be five to ten years away.

  • Sam Altman has suggested superintelligence could be a matter of a few thousand days.

  • Mark Zuckerberg says Meta’s explicit goal is to build AGI (he’s careful about timelines).

  • Jensen Huang expects AI to pass essentially every human test within about five years.

  • Geoffrey Hinton puts “smarter‑than‑human” systems in a 5–20 year window.

  • Dario Amodei warns less about an exact AGI date and more about the near‑term reality: a large share of entry‑level white‑collar work is at risk within one to five years.

  • Satya Nadella keeps stressing that AI agents will redefine knowledge work itself.

  • Mustafa Suleyman argues we should prepare for extreme capability and explore income mechanisms that share the gains.

  • Sundar Pichai has called AI more profound than fire or electricity, urging responsible acceleration.

  • Ray Kurzweil forecasts AGI by 2029; ASI / Singularity by 2045.


Google, Microsoft, OpenAI and NVIDIA CEOs

You can quibble with the optimism or the pessimism, but just take an average: single‑digit years to systems that outperform us at all cognitive tasks—and then, because they can design better machines, all physical tasks follow.


The bubble may deflate and slowly inflate again to something much larger - just like the internet did. Development may plateau for a period of time, but the arrival of AGI, or something approaching it, is inevitable. It's not if, just when.


If a superintelligence called ahead and gave us a target arrival date somewhere between 2030 and 2035, we would not be fiddling with pilot projects. 


We would be mobilising - everything. 


Why Aren’t We Mobilising?


Despite the cable, we are not mobilising. There are pockets of activity within the bubble of those paying attention to superintelligence, but it is not having any impact on the day‑to‑day lives of citizens. As the superintelligence drip‑feeds mankind new knowledge and capabilities over time, very few are acting to implement it.


So, why aren't we mobilising?


  • Inability to comprehend exponentials: Humans evolved in a linear world, so we struggle to grasp exponentials: if a task is just 1% complete today, our instincts say the finish line is far away—yet if progress doubles each year, in only seven years it exceeds 100%.

  • Amara’s Law: This is the observation that we tend to overestimate the impact of a new technology in the short term, and underestimate its impact in the long term. It can be applied to electricity, cars, PCs, the internet, broadband, mobile, or GPS.

  • Normalcy bias: Because today’s models still make blunders, we lull ourselves with a comforting illusion—if the machines are clumsy now, real AGI must be far away. It’s the same as watching the first faint radar blip of an incoming fleet and convincing yourself it’s just noise.

  • Definition fog: “AGI” starts arguments about Hollywood‑style "sentience" when sentience is not the point. Mark Zuckerberg has now introduced the term "personal ASI", which means little. The focus should just be on performance: better than humans at the vast majority of cognitive tasks, then better at designing the robots that do the rest.

  • The bubble/plateau alibi:  “Maybe this is froth; maybe progress stalls.” That might even happen. Here’s the uncomfortable truth: even if R&D froze today, rolling out what already exists would occupy the next five years—the time it takes to re‑engineer processes, data, incentives and culture. General‑purpose technologies don’t show up in GDP until organisations deploy them at scale. The lag is real. The impact is still going to land.
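The arithmetic behind the first bullet is easy to check. A minimal Python sketch of why linear intuition fails on exponentials - the 1% starting point and annual doubling are the article's illustrative assumptions, not a forecast:

```python
# Illustrative only: a capability that is 1% complete today
# but whose progress doubles every year.
progress = 0.01  # 1% complete today

for year in range(1, 8):
    progress *= 2  # doubling each year
    print(f"Year {year}: {progress:.0%}")

# Linear instinct says 1% means the finish line is ~99 years away;
# with doubling, the figure passes 100% in year 7 (0.01 * 2**7 = 1.28).
```

The point is not the specific numbers but the shape of the curve: for most of the run, an exponential looks like nothing is happening.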


First Contact - The Response


A. The Government:


If a superintelligence had a 5–10 year ETA, any serious government would:


  • Set the mission and the scenarios (without pulling any levers yet): Publish a cross‑party AGI/ASI Readiness Framework with five‑ and ten‑year ETAs, core risks/opportunities, and clear trigger points. Stand up a well-funded task force to run tabletop exercises, keep a living risk register, and maintain shelf‑ready options (draft bills, funding envelopes, contingency playbooks).

  • Align with allies and critical partners before the storm: Agree common scenario libraries, incident‑response protocols, and evaluation methods with trusted nations; quietly map chip, compute, energy and data supply dependencies with industry; and pre‑negotiate memoranda for crisis information‑sharing. This is rehearsal and relationship‑building—not regulation or deployment.

  • Re‑evaluate the economic and social model under AGI stress:  Stress‑test tax bases (less wage income), welfare and pensions, regional labour markets, competition policy, and wealth distribution mechanisms. Commission distributional and sector impact analyses, outline policy options (from income transmission to market rules), and decide which levers would be pulled under which pre‑defined triggers—but stop short of implementation.


UN in session
  • Prepare governance, markets and security—on paper first:  Draft (don’t enact) proposals on interoperability, provenance/identity, competition in data/compute/orchestration, and model evaluation/assurance. Map cyber, biosecurity and counter‑autonomy gaps and agree escalation ladders with industry. Identify no‑regrets procurement pathways for compute/energy if triggers fire.

  • Communicate early to earn consent, not to spark panic:  Launch a calm, candid public briefings cadence: what might change, what won’t, and how preparedness works. Convene citizens’ assemblies and stakeholder forums (business, unions, academia). Provide simple readiness checklists for households and firms so preparation becomes normal—not alarming.


Some governments without the means to verify the cable might ignore it or just not believe it's real. Theocracies could have a real problem with the arrival.


B. The Markets:


Markets react to information, and this information would be as confusing as it is profound. If investors believed a superintelligence had a 5–10 year ETA, they wouldn’t wait for the arrival - they’d trade the timeline, and leverage the volatility to make money.


  • Volatility as price discovery. Every rumour, “capability leak,” or timeline whisper would whipsaw prices; even fake news about arrival dates or capability would trigger violent rotations across compute, energy, data‑centre, robotics and “billable hours” proxies. The market would constantly reprice an implied arrival date.

  • Reprice the factors of production. As it becomes clear that reasoning, drafting, analysis and code will trend toward near‑zero marginal cost (and value), cashflows tied to billable cognition de‑rate. Value shifts to the new production stack needed to run the superintelligence: compute, energy, data, distribution and orchestration - plus those able to compound gains across it. There is a focus on who owns, or controls, the inputs. 

  • Fund the physical wave. Capital pivots to embodied AI - robots, drones, automation, sensors and materials. Expect a multi‑year capex supercycle in data centres and the grid, and leasing models (Robots‑as‑a‑Service) that pull SMEs into adoption. Some simply wait, on the basis that the superintelligence will solve all of the issues related to design, efficiency, power and implementation.

  • Reset moats and risk premia. Markets mark down narrow app‑layer moats and over‑concentrated platforms; they pay up for interoperability, compliance, distribution and supply‑chain resilience. Regulatory, model and component risks get priced explicitly.

  • Narrow uncertainty with guidance. As the “visitors” drip‑feed capabilities to soften the blow, uncertainty bands shrink and risk premia compress; resources get repriced more rationally. Governments and firms issue scenario guidance to steady the market rather than spook it.


The government may need to step in to monitor market activity and reduce the volatility with a clear timeline and plan for economic policy, fiscal policy and income distribution. 


C. Boards:


The leadership of private, public and third sector organisations manage assets, resources, employees, customers - and money. They would feel the impact from all sides. To survive, organisations would need to ensure they have a place in the new economy and that they embrace the new technologies as they arrive to remain competitive. At the same time they would have to develop strategies to deal with the impact on employees and customers. 


  • Setting posture, scenarios and triggers: The most responsible boards establish an AGI/ASI committee, agree five‑ and ten‑year scenarios, and define explicit trigger points for action. They develop and maintain shelf‑ready options (policy changes, capex, workforce moves) tied to those triggers.

  • Absorbing and deploying new tech to learn fast. As the superintelligence drip‑feeds capability, boards insist that sandboxed pilots are stood up in weeks or months, not quarters, and moved into controlled rollout once they clear evaluation. They keep two evergreen tracks running: a revenue track (agentic sales and customer success) and a cost track (close‑the‑books, procurement, compliance). Everything is instrumented, A/B‑tested against human baselines, and accompanied by a deprecation path for legacy workflows—so impact is visible early and compounded.

  • Repricing the business and making data legible to AGI/ASI:  Recognising that “billable cognition” compresses, boards shift the value anchor to data rights, orchestration, distribution and reliability. They make catalogues machine‑readable, expose negotiable APIs, and treat latency as a competitive weapon so agents can buy from agents. They prioritise content provenance/identity and risk controls, and decide—now—which lines to double‑down, exit or rebundle as willingness‑to‑pay shifts.

  • Steadying finance amid volatile markets and shifting policy: With markets trading every rumour about timelines and capability, boards build liquidity buffers and an optionality budget (selective M&A, acqui‑hires, data assets). They hedge compute/energy exposure, diversify critical vendors, and move investor relations to scenario guidance (“here is what we do at T‑5 vs T‑10”) so the market prices a plan rather than uncertainty.

  • Shaping and aligning with the state before rules harden—while keeping employees with them: Boards join sector sandboxes and evaluation consortia, pre‑align compliance to emerging provenance/identity, model assurance and safety expectations, and rehearse incident response with regulators and peers. Internally, they run a two‑clock people plan: map role exposure (cognitive now; embodied AI later), publish clear redeployment and re‑skilling cycles, and share productivity gains—paired with steady, candid comms that preserve trust.


D. Individuals 


Many would be freaked out, whilst others would ignore it or just not understand the implications, no matter what governments or their employers share and explain. As the capability of the technology - both what has been shared and what is on the way - becomes apparent in the workplace and the real world, reality would set in.


  • Denial and delay: Most people carry on as if nothing changed. They dismiss exponential timelines because the first waves of technology appear manageable, harmless—even amusing. They continue careers, family plans, and financial lives without adjustment, despite mounting evidence at work and home that the world is shifting beneath their feet.

  • Fear and withdrawal: Some individuals are overwhelmed by anxiety about job security, income, personal identity, their faith, and meaning. They retreat into inertia, avoid new tools, and become passive observers—hoping governments or employers will manage the disruption for them.

  • Pragmatic adaptation: A smaller but notable cohort begins actively adjusting. They seek out licences, accreditations, or specialised skills less easily replicated by AGI/ASI, ensuring their personal economic moat. They consciously reduce dependence on “billable cognition” jobs, pivoting to roles integrating AI rather than competing against it.

  • Opportunistic reinvention: A select group fully embraces the technology—rapidly experimenting with new tools and building micro-businesses or passive income streams enhanced by AI. They rebalance personal portfolios away from labour-dependent assets toward industries positioned to thrive in the AI economy (compute, energy, data infrastructure).

  • Community leaders and explainers: Some step forward as early adopters who openly share their journeys—writing, speaking, and guiding others through the transition. They become trusted voices, shaping a realistic and constructive dialogue, reducing fear, building resilience, and helping their communities adapt quickly rather than passively reacting when reality finally sets in.


Which individual are you?


Two Clock Countdown - Cognitive & Physical


There are two clocks running, although the exact time of arrival is unknown and varies by up to five years.


The first clock is cognitive: From the updates on what the new technology can and will do, it is clear that office work and knowledge work - the tasks that happen behind keyboards - will be impacted fastest. Drastic task reallocation is expected over the next five years as AI co‑pilots become AI colleagues and then AI teams.


The second clock is physical: Trades and frontline work can expect a longer runway, but not an endless one. Once it is clear that the superintelligence can, and will, design better hardware, robots start to move from sizzle reels to real shifts. When a humanoid or mobile manipulator costs roughly the same as a family car - and then a dishwasher - adoption crosses a line. It’s closer than it looks.


Most strategy conversations ignore that both clocks are ticking together—and that you don’t need AGI to feel the draft. The systems we have today, properly integrated, are already enough to change the economics of white‑collar work. The next generation will start changing the rest.


The Upside Worth Preparing For


Treating this scenario seriously creates an opportunity for individuals, organisations and nation states to adapt to a new and exciting future. The superintelligence is heading our way to unlock:


  • Healthcare breakthroughs that move from discovery to bedside with unprecedented speed.

  • Materials and energy innovations, from room‑temperature miracles we can actually manufacture to fusion that alters geopolitics.

  • Education that’s personal, adaptive and genuinely upward‑mobile.

  • Creativity that can be applied to exploration of everything from personal ability to our solar system.

  • Work that is less drudgery, more design—provided we build fair ways to share the gains.


This is the grown‑up version of optimism: eyes open, sleeves rolled, clear about both risk and reward.


Final Approach


If a super‑intelligence did call ahead with its ETA—“arriving in five to ten years”—no one would be waiting for perfect clarity. We would plan with urgency.


AGI/ASI is that scenario - without the need for interstellar travel. The only real question left is whether you, your organisation, your market, and your government are taking it seriously.


Put the date in the diary. Work backwards. Act accordingly.


Thank you for reading - Burn after reading

 
 