what if there was an ASI ban?

January 14, 2026

wrote this in ~an hour in early 2025 in response to: what do you imagine the world would look like with an indefinite ASI ban, imposed now? there are some interesting themes but it definitely needs reworking

We’re operationalizing an ASI ban as a ban on non-biological AI systems that can serve as a drop-in replacement for >99% of human workers. This broadly lets the compute & AI advances of the next 1-2 years occur, and is mostly unclear after that. I suspect this roughly corresponds to a compute cap of 1e36 FLOP: at ~1e15 FLOP/s per H100 and a ~1e7-second training run, that's about 10^14 H100s (the biggest datacenter today is ~10^6 H100s). Factoring in algorithmic advances, the equivalent datacenter in 2032 might be 10^10 H100-equivalents, i.e. 10^4 of today's largest (~$4bn) datacenters, which would cost 10^4 * $4bn = $40trn. So let’s also say that there’s a ban on >$1-10trn training runs, depending on the year, with this threshold going down over time.

Global GDP is around $100trn today, so the maximum-size training run that would be allowed in 2025 is around ~1% of global GDP, or around 4% of American GDP. This is 2x what is currently being proposed with the Stargate Project, so if Stargate goes well such a training run would not be entirely out of the question (of course, Stargate's budget is allocated across multiple domains, not just training, but I suspect most of it will go to clusters).
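As a sanity check, here's the back-of-envelope math from the two paragraphs above in one place; every input (H100 throughput, run length, efficiency gains, datacenter cost, GDP figures) is an assumption carried over from the text rather than a measured fact:

```python
# Back-of-envelope check of the numbers above. All inputs are this
# post's assumptions, not measured facts.

h100_flop_per_s = 1e15   # ~dense BF16 throughput of one H100, order of magnitude
training_run_s = 1e7     # ~4-month training run
flop_cap = 1e36          # proposed compute cap, total FLOP

h100s_needed = flop_cap / (h100_flop_per_s * training_run_s)
print(f"H100s for a 1e36-FLOP run: {h100s_needed:.0e}")  # ~1e14

# With ~4 OOMs of algorithmic + hardware efficiency gains by 2032
# (assumed), the same run takes ~1e10 H100-equivalents, i.e. 1e4 of
# today's largest (~1e6-GPU, ~$4bn) datacenters.
cost_2032 = 1e4 * 4e9
print(f"Implied 2032 cluster cost: ${cost_2032:.0e}")    # $4e13 = $40trn

# GDP fractions for the ~$1trn near-term cap:
print(f"Share of global GDP: {1e12 / 100e12:.0%}")       # ~1%
print(f"Share of US GDP:     {1e12 / 27e12:.0%}")        # ~4%
```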

I’d first like to forecast & research the effects on specific narrow domains: energy, software infrastructure, and biology & human augmentation. I’d then like to look at which large macro-trends, sans AI, would affect the long-term course of society (climate change and fertility decline come to mind, as does cultural drift), and how AI development up to the permitted level would affect them. I would then forecast geopolitical developments over the next 5 years, see how those affect long-term AI development, and start forming probability distributions over the long-term effects on human society. Ideally the output of this research proposal would be hundreds of pages of well-researched forecasting, but here’s a first attempt at figuring out what that might look like.

In a world without ASI, solar adoption will likely increase across the world, such that the cost of energy starts decreasing dramatically, making technologies like solar desalination more feasible. The energy gap between the developed and developing world is such that 1 kWh costs ~1,000-10,000x more in Uganda than in Iceland, and if the cost of solar continues to decrease this gap will likely close (the solar panel infrastructure pipeline is relatively robust). People in the fusion industry expect fully net-positive fusion by 2035, and commercial reactors by 2045-2050. I would want to research exactly how advanced AI of the sort expected 2-3 years from now could accelerate energy production, but this trend seems likely to continue barring a global war. So we can expect the cost of energy to decrease substantially within our lifetimes.
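To make the solar claim concrete, here's a minimal sketch using Wright's law (cost falls a roughly constant fraction per doubling of cumulative capacity); the starting price, learning rate, and growth rate are illustrative assumptions, not data from the post:

```python
import math

# Illustrative Wright's-law projection of solar module cost.
# All three inputs are assumptions for the sketch.
cost_per_watt = 0.30   # assumed 2025 module price, $/W
learning_rate = 0.20   # ~20% cost decline per doubling of cumulative capacity
annual_growth = 0.20   # assumed growth rate of cumulative installed capacity

step_years = 5
doublings_per_step = step_years * math.log2(1 + annual_growth)

for year in range(2025, 2046, step_years):
    print(f"{year}: ${cost_per_watt:.2f}/W")
    cost_per_watt *= (1 - learning_rate) ** doublings_per_step
```

Even under these fairly conservative inputs the projected module cost falls roughly 3x over two decades, which is the mechanism behind the energy-gap claim above.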

Software infrastructure will become extremely fragile and brittle in the face of both technical and non-technical attack vectors. LLMs will likely develop superpersuasive capabilities within the next few years without requiring superintelligence, and generating training data for finding and exploiting code vulnerabilities is generally quite easy. I would look into what the offense-defense balance looks like for software used by institutions worldwide by 2027-2028, and whether formal specification / formal verification provides a good answer to, e.g., the Linux kernel being broken by a non-state or even poorly-resourced actor. This likely does not cripple the world’s ability to communicate, because it is possible to build robust protocols where the stack is verified, but it does make critical software much more vulnerable.

Biological foundation models are well within the realm of the possible, and don’t require absurdly large training runs. The Arc Institute is training them at the moment, and they will likely get 10-100x larger even without national investment. Bio foundation models are incredibly dual-use: it would be very easy to finetune a model that is superhuman at producing toxins, and perhaps similarly for pathogen generation. It is worth investigating whether there are substantial barriers to creating a bio foundation model that is superhuman at pathogen generation given current technologies, and whether it’s possible to build base bio foundation models that lack these negative capabilities (also developing scaling laws and assessing the likelihood of distributed training setups & synthetic data generation). It is also worth developing policies that restrict access to DNA/RNA synthesis (current restrictions are very poor, per studies by Esvelt et al.).
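As a sketch of what "developing scaling laws" might look like mechanically, the usual procedure is to fit a power law to small-run results and extrapolate; the compute/loss pairs below are synthetic, purely for illustration:

```python
import numpy as np

# Fit loss(C) = a * C^(-b) to (training compute, eval loss) pairs from
# small runs, then extrapolate to larger budgets. Data is made up.
compute = np.array([1e19, 1e20, 1e21, 1e22])   # training FLOP
loss    = np.array([2.9, 2.5, 2.2, 1.95])      # synthetic eval losses

# Linear fit in log-log space: log(loss) = log(a) - b * log(C)
slope, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(log_a), -slope

for c in [1e24, 1e26]:
    print(f"predicted loss at {c:.0e} FLOP: {a * c**(-b):.2f}")
```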

Human augmentation seems to be proceeding well. Neuralink is seeing success in its patients, and we will likely see silent-speech interfaces as well as noninvasive BCI enabling much more immersive AR/XR. Cosmetic invasive BCIs (invasive BCIs not required for medical use) might become technically feasible within the next decade, and it is worthwhile doing a general bandwidth study to see whether computer-human bandwidth follows scaling laws. Much invasive BCI usage is gated behind “brain foundation models”, i.e. compute that allows the BCI to interpret various neuronal signals and brain waves. Good generative models can likely patch much of this, but it’s unclear whether these capabilities are locked behind the 1e36 barrier. More research should be done here, specifically on how AI can accelerate different approaches to BCI.
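For the bandwidth study, a toy comparison like the following could be the starting point. The per-channel information rate is a loose assumption (real decoders extract far less than raw channel counts suggest); the ~39 bits/s speech figure is the estimate from Coupé et al. (2019):

```python
# Very rough bandwidth comparison for the "bandwidth study" idea above.
channels = 1024            # electrode count of a Neuralink N1-class implant
bits_per_channel_s = 1.0   # assumed usable information per channel (bits/s)

bci_bits_s = channels * bits_per_channel_s
speech_bits_s = 39         # ~information rate of natural speech (Coupé et al. 2019)
typing_bits_s = 10         # rough order of magnitude for typing

print(f"BCI (assumed): {bci_bits_s:.0f} bits/s")
print(f"speech:        {speech_bits_s} bits/s")
print(f"typing:        {typing_bits_s} bits/s")
print(f"BCI / speech:  {bci_bits_s / speech_bits_s:.0f}x")
```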

Nanomanufacturing might become much more feasible, conditional on a foundation model trained on different kinds of chemosynthesis and micro-scale manufacturing. Space travel is likely bottlenecked by real-world rocket design, procurement & manufacturing, but it’s plausible that accelerated biological research will lead to breakthroughs in astronaut containment etc. It is generally unclear how much manufacturing will be accelerated by tool AI, but this is worth looking into.

RAND recently put out a report arguing that the U.S.-China relationship will take on many of the contours of the medieval era, placing us in a “neo-medieval age.” This is mostly due to their predictions that the international order will disintegrate and that technological development will become increasingly stratified: less aid from the developed world to the developing world, less trade, and less international communication. I find these predictions largely plausible.

Forecasting chip manufacturing is also useful: will the US really be able to onshore chip manufacturing at the level of TSMC’s fabs within the next 2 years? It is attempting to do so, and given that ASML is in the Netherlands (within the Western sphere of influence), reaching this level is likely. Analyzing Chinese supply chains is also a worthy piece of work, and I place substantial likelihood on China reaching chip technological parity as well as tool-AI capability parity, placing us in a proper great-power conflict.

With a ban on superintelligence, the risk that a single country develops a singleton superintelligence and uses it to dominate the rest of the world (or lightcone) is low. Military conflict between the US and China might destroy chip supply chains in both countries, leaving a generally leaderless world, but hot proxy conflicts are more likely than hot great-power conflicts. This does mean, however, that energy costs for the developing world will likely remain high unless, e.g., solar panel supply chains become more distributed & robust globally (even if the US becomes much more isolationist, China may still consider it worthwhile to expand its global influence, though this is unclear).

Culturally, there has been quite a large right-wing trend in the Western world. Nativism has increased (see AfD in Germany, Trump in the USA), and culturally right-wing policies are being implemented alongside it. The world likely takes the fertility crisis seriously by 2035-2040, but the reaction could either be to expand reproductive freedoms (via research into and access to embryogenesis and artificial wombs) or to push for greater fertility via social norms while staying quite restrictive on biological innovation. Human augmentation will likely be pushed to the fringes of the world, underground, or into developing nations during this time; as more bio-infrastructure accumulates at the fringes, the risk of e.g. engineered pandemics increases, and during a great-power conflict this might lead to more global instability generally. Worth analyzing the likelihood of this.

Other topics worth in-depth analysis:

- nanomanufacturing;
- how infrastructure will develop over time;
- the likelihood of robotic automation of a large variety of tasks;
- whether UBI will be implemented as developed nations asymptote towards 99% automation;
- whether the social safety nets necessary for labor automation will still be in place during a great-power conflict;
- whether gradual disempowerment is still a failure mode to worry about.

My guess on the last is not really: access to superpersuasive AI agents will let humans warp the culture much more without the AIs really taking over. It’s also unlikely that AIs will e.g. “run the corporations”, though they might substitute for large portions of middle management; and really extensive automation is predicated on good societal infrastructure for the AIs to work within, given that the AIs are not yet smart enough to create their own infrastructure and manufacturing supply chains and are not (yet) coordinating global politics.