Nextcontinent: Four scenarios for the economic and political landscape shaped by AI
Nextcontinent, a global network of consulting firms, has released a comprehensive new report exploring the future of artificial intelligence (AI), outlining four distinct scenarios for the economic and political landscape shaped by AI.
Recently, Nextcontinent has offered overviews of seven megatrends relating to the global shift to renewables; six areas that offer growth opportunities for consulting firms; and four potential geopolitical futures businesses face amid an era of uncertainty. The latest instalment of Nextcontinent's thought-leadership applies this approach to the rise of AI – highlighting four scenarios businesses may need to prepare for in a world "designed and defined by technology".
With artificial intelligence said to be improving rapidly and continuously – though claims about the cost of inference, and the results it delivers, vary – the researchers imagined four alternative scenarios, hinging on the outcome of two key debates. The first concerns the level of autonomy the technology can be said to display: whether it will consist of algorithms under human control, corrected and guided by people, or be computationally autonomous, capable of self-governance and self-improvement.
The second concerns how concentrated computational power becomes: whether AI models are controlled by a single centre or a handful of dominant actors, or whether a multitude of actors each own their own models, data, and learning capacities.
Centralised humanist global AI
The first combination of these factors leads Nextcontinent to speculate about a future in which strong human supervision is paired with highly centralised AI. The researchers imagine a unified organisation offering global control, transparency, and worldwide governance of models. This would mean creating an immensely powerful computational commons, framed by a strong and dynamic "digital social contract", with the ability to tackle single-handedly whatever challenges humanity points it at (environmental issues, social justice, or peace, for instance).
Considering whose hands this supreme, wide-reaching power would most likely be placed in, there might be cause for concern were this scenario to manifest. Indeed, the researchers admit "there are obviously some risks in concentrating such power" – in this case referencing "universal ethical biases (for instance, a Western-centric vision of humanity)". However, the systemic shifts AI presents (including mass unemployment and social upheaval) might not be solvable without a coherent, centralised response (such as the implementation of a universal basic income). And proponents of this scenario (who are among those in line to hold that centralised governance) conveniently assert that AI will develop in such a way that it can even "compensate for human biases and make better decisions, in a very global and unified way".
Technocratic AI empire (AI as global brain)
The second scenario takes this a step further, positing a global, self-evolving AI. Here, AI is conceived as a self-improving system in which human supervision becomes symbolic or merely emergent.
Admittedly, Nextcontinent says, this "might seem a bit far-fetched as a scenario" – as it presents a picture almost totally absent of human input. But apparently "a few influential thinkers have been developing this perspective for decades". To that end, the paper cites computer scientist Jürgen Schmidhuber – who has envisioned "self-replicating robot factories on the asteroid belt" (though physics and space travel are not his actual realms of expertise).
Again, this is a scenario where there are "obvious risks", Nextcontinent notes. In particular, there is the potential for a "black box" effect – something current users of AI will already be acquainted with – combined with "a poorly designed initial directive". But there are also views that such a system, if properly designed, could eliminate the inefficiencies and partialities of human rule. The current output of the technology suggests this hypothetical is essentially science fiction for now – but should a number of improbable "what ifs" align, that might not be the case for much longer.
Cultural AIs federation (ecosystem of sovereign AIs)
The core idea behind this scenario is that each nation retains human control over its models (distributed power), with strong human supervision. This vision is closely aligned with the "sovereign AI" perspective, defended by Jensen Huang (Nvidia) and others. With Nvidia at the heart of the AI gold-rush, it might seem to have a vested interest in sparking national spending sprees on 'sovereign AI' – where every nation builds its own AI infrastructure from scratch – in a way reminiscent of the arms manufacturers who stocked the armouries of various global powers ahead of the First World War. But according to the report, in this scenario AI autonomy becomes a vital matter of sovereignty in a digital age.
In this scenario, AI autonomy is understood as a set of ethical, linguistic, and cultural choices encoded and overseen by human institutions, along with local infrastructures supervised by governments or public agencies. This would allow models to be manually "fine-tuned" – with variants having a greater or lesser capacity to form an integrated network in which different AIs can communicate with one another – but the general idea remains that each cultural and political space must retain autonomy over its meaning-making and decision-making systems. This autonomy is "presented as a key element of genuine sovereignty, and its absence as a clear sign of alienation", the researchers note.
Autonomous decentralised network (open-source, federated networks, Web3)
Finally, there is the possibility of an ecosystem of ecosystems, where there is no human in the loop at the global level, but each AI remains under some ad hoc human supervision decided locally. While the sovereign-AI scenario sees each nation-state (or political bloc) develop and govern its own AI infrastructure, here AI systems are distributed, open, and collectively governed by communities, organisations, or networks rather than by states or corporations. No single entity owns or controls the intelligence; instead, many ecosystems collaborate and learn together via shared protocols (such as federated learning or blockchain-based governance).
This scenario depends on a hybrid of local human supervision and computational autonomy at the global level. Each country, community, or company retains local human control – but, given the apparent potential for AI neutrality and heightened intelligence, the models would need to collaborate in a self-organised manner on a global scale.
According to Nextcontinent, blockchain could provide a good analogy and early prototype outside the AI domain of what that could entail: autonomous initiatives led by humans that interconnect within a decentralised and algorithmic infrastructure, where supervision is primarily computational and grounded in the founding principles embedded in the system.
Ultimately, however, none of this might come to pass at all. The consultants concluded, “If yesterday’s invisible hand of the market is replaced by algorithms as organisational space and logics for the economic system, competition will happen in this environment. Therefore, the power and relevance of companies will depend less on what they sell, and more on how the dominant AI architectures perceive, trust, and integrate them. Each scenario will display distinct characteristics in relation to these aspects. As a disclaimer, it is obvious that the specific form each scenario might take depends on human choices and systemic shifts – and partly escapes our ability to model the future with certainty.”