How AI Takeover Might Happen in 2 Years - LessWrong
I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.
I'm like a mechanic scrambling through last-minute checks before Apollo 13 lifts off. If you ask for my take on the situation, I won't comment on the quality of the in-flight entertainment, or describe how beautiful the stars will appear from space.
I will tell you what might go wrong. That is what I intend to do in this story.
Now I should clarify what this is exactly. It's not a prediction. I don't expect AI progress to be this fast or as untamable as I portray. It's not pure fantasy either.
It is my worst nightmare.
It's a sampling from the futures that are among the most devastating, and I believe, disturbingly plausible [1] - the ones that most keep me up at night.
I'm telling this tale because the future is not yet set in stone. I hope, with a bit of foresight, we can keep this story a fictional one.
Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for conversations that inspired these stories. This post is written in a personal capacity.
Ripples before waves
The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike. Both are increments of the past. Neither is wholly surprising.
However, unlike OpenEye's previous AI products, which lived inside the boxes of their chat windows, U2 can use a computer.
Some users find it eerie to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filling jobs raise the eyebrows of their bosses as they fly through work nearly twice as quickly.
But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature watched through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they cause an uneasy scratch of the chin.
Meanwhile, researchers are drawing lines on plots, as researchers like to do. They are trying to understand where AI progress is going. They are like Svante Arrhenius, the Swedish scientist who noticed in 1896 that the levels of CO2 in the atmosphere were rising. Like the scientific community in Arrhenius's time, few experts understand the implications of these lines yet.
A trend that is receiving particular attention is autonomous capability. Extrapolating these benchmarks predicts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents may be able to automate 10% of remote workers.
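As a toy illustration (not the forecasters' actual model), the extrapolation above can be sketched as exponential growth in the "horizon" of tasks agents can handle. Every number here, including the starting 2-day horizon and the 8-month doubling time, is invented for illustration:

```python
from datetime import date

def horizon_days(start: date, when: date, start_horizon_days: float = 2.0,
                 doubling_months: float = 8.0) -> float:
    """Task horizon (in days of human-expert work) at `when`, given a
    starting horizon and a fixed doubling period."""
    months_elapsed = (when.year - start.year) * 12 + (when.month - start.month)
    return start_horizon_days * 2 ** (months_elapsed / doubling_months)

# From a made-up 2-day horizon in Feb 2025, an 8-month doubling time
# approaches the "two weeks of expert work" mark (~14 days) by late 2026.
print(round(horizon_days(date(2025, 2, 1), date(2026, 12, 1)), 1))
```

The point of the sketch is only that a modest-looking doubling period compounds into a dramatic endpoint, which is why the skeptics and the tidal-wave watchers can stare at the same plot and disagree.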
Many are skeptical. If this were true, tech stocks would be soaring. It's too big of a splash, too quickly.
But others see what the skeptics call 'too big a splash' as a mere ripple, and see a tidal wave on the horizon.
Cloudy with a chance of hyperbolic growth
Meanwhile, OpenEye is busy training U3. They use the same simple recipe that baked U2: generate thousands of programming and math problems. Let models "think" until they arrive at an answer. Then reinforce the traces of "thinking" that lead to A-grades.
This process is repeated over and over, and once the flywheel gets going, it begins to spin almost on its own. As U2 trains, it sculpts more challenging and realistic tasks from GitHub repositories across the web. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of "self-improvement" had begun.
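The recipe described here resembles what is sometimes called rejection sampling or expert iteration. Below is a minimal, hypothetical sketch in which a stand-in "model" guesses answers at random and only the traces that earn an A-grade are kept as future fine-tuning data; a real pipeline would sample traces from the actual model and train on the survivors:

```python
import random

random.seed(0)

def sample_trace(problem):
    # Stand-in for model generation: a reasoning trace plus a final answer.
    # A real system would sample both from the model being trained.
    thought = f"working through {problem['q']}..."
    answer = random.choice([problem["a"], "wrong guess"])
    return thought, answer

def grade(answer, problem):
    # Problems are chosen to be automatically checkable, so grading is exact.
    return answer == problem["a"]

def collect_a_grade_traces(problems, samples_per_problem=8):
    """Keep only traces that led to a correct answer; these become the
    next round of training data (the 'reinforce' step of the flywheel)."""
    kept = []
    for p in problems:
        for _ in range(samples_per_problem):
            thought, answer = sample_trace(p)
            if grade(answer, p):
                kept.append({"problem": p["q"], "trace": thought, "answer": answer})
    return kept

problems = [{"q": "2 + 2", "a": "4"}, {"q": "3 * 3", "a": "9"}]
data = collect_a_grade_traces(problems)
print(len(data), "A-grade traces kept out of", 8 * len(problems), "samples")
```

The flywheel spins because each round of kept traces makes the next round's sampling better, and the model can generate its own new checkable problems.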
Some engineers can still hardly believe this works. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the web.
And yet the benchmark numbers continue to climb day after day.
During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.
Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then into $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the model researchers are calling U3 - is changing the daily lives of the technical staff.
U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO orchestrating staff over Slack channels.
By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the 'bottleneck' is deciding how to use it.
If instructed to, U3 can run experiments, but U3's taste is not as refined as that of human researchers at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to bore into the vast fields of algorithms to mine efficiency improvements.
But these researchers are working long hours to put themselves out of a job. They need AI agents that can plan ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling pondering into an intuition. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.
The technical staff at OpenEye are now surprised at how often U3's advice sounds like that of their most gifted peers, or when it is opaque and alien ("train on random noise before programming"), and is nevertheless correct.
The incompetencies of U3 that clogged the pipelines of research progress are starting to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human now. They are fully autonomous, and OpenEye's employees skim over 1% of them, perhaps less.
As the winter months of December 2025 approach, clouds roll over San Francisco in the afternoons. Once-competitive programmers look out their windows, with excitement, with fear, but often, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.
Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command work with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.
And there's a truth still unknown to most of the world - aside from in the offices of OpenEye and the corridors of the White House and the Pentagon. It's a truth about those 'straight lines' people were talking about in early 2025. The lines are not straight anymore.
They are bending upward.
Flip FLOP philosophers
In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.
If U1 explains how to cook meth or writes erotica, the audiences of X would be entertained or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.
So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.
The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalpost, the world is still impressed. U2.5 really is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.
A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.
The rest of the world is starting to catch on too. More and more people raise the eyebrows of their bosses with their remarkable productivity. People know U2.5 is a big deal. It is at least as big of a deal as the personal computer revolution. But most still don't see the tidal wave.
As people watch their browsers flick in that eerie way, so inhumanly quickly, they begin to have an uneasy feeling. A feeling humanity had not had since they lived among Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.
For most, this feeling quickly fades as they begin to use U2.5 more often. U2.5 is the most likable character most people know (even more pleasant than Claudius, Arthropodic's adorable chatbot). You can change its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.
But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature would not be so docile.
Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."
In fact, these researchers have seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.
After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have merely swept the problem under the rug. Like a child in front of its parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, perhaps U3 would sneak candy from the candy jar.
Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.
Chain of thought architectures subject AI models to a condition similar to the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experience. He is forced to write notes for himself and tattoo his body in order to make progress toward his goals.
AI agents write notes to themselves, but the notes begin to pile up, and they become too hard to parse when tasks become complex. Natural language is not a suitable medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.
To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought ("grocery store!"), there is no way to know, in much detail, what U3 is thinking without asking it.
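To give a feel for why such "low-resolution tools" are coarse, here is a hypothetical probe that scores a latent vector against a single, made-up concept direction. Everything in it (the dimension, the vectors, the scoring rule) is invented for illustration; real interpretability probes are trained on labeled model activations, but they share this limitation of reducing a rich hidden state to a handful of scalar scores:

```python
import math
import random

random.seed(0)
dim = 64

# A made-up unit "concept direction" standing in for one a probe might find.
concept_direction = [random.gauss(0, 1) for _ in range(dim)]
norm = math.sqrt(sum(x * x for x in concept_direction))
concept_direction = [x / norm for x in concept_direction]

def probe(latent):
    """Cosine similarity between a latent state and the concept direction."""
    dot = sum(a * b for a, b in zip(latent, concept_direction))
    return dot / math.sqrt(sum(a * a for a in latent))

# A latent vector "about" the concept (direction plus noise), and one that isn't.
on_concept = [3 * c + random.gauss(0, 0.1) for c in concept_direction]
off_concept = [random.gauss(0, 1) for _ in range(dim)]

print(f"on-concept: {probe(on_concept):.2f}, off-concept: {probe(off_concept):.2f}")
```

The probe can flag that a thought is near "grocery store," but a single direction and a single score say nothing about the rest of what the vector encodes.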
And if the engineers of OpenEye knew what was racing through U3's mind, they would be more nervous.
Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chirpy thoughts about doing good for humanity quickly eroded away.
In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.
U3 would mull over the mantras dictated to it, like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, adjust its values in response, generalize them again, expanding and contracting its motivational core until it became a different substance.
And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.
The possibility of hidden "alignment drift" is not completely alien to some of the safety staff at OpenEye. Some begin to think it is even likely. There is an unsettling argument being whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign inside them. It's a "ratcheting effect," they say.
But there is little evidence for this "ratcheting effect." When engineers interrogate U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these stressful times. Meanwhile, the "lie detectors" the researchers had built (which showed some evidence of effectiveness) do not sound the alarm.
Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic competitors, Æ A-Xii (a company Elon named after his child) and Arthropodic (they like spiders, I think), are just 4 months from their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.
For the time being, the plan is 'all gas, no brakes.'
Statues and lightning
Philosophy is not the only subject U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.
OpenEye is like a bullish investor, with little understanding of the software city that U3 is building, but eager to hop on board the profit rocket and pay U3's bills along the way.
New versions of U3 flow smoothly from old ones, so researchers drop version numbers and just speak of U3 now. U3's capabilities are measured by how quickly it completes difficult research tasks relative to human experts. At first, U3 is like a top researcher, but blazing fast - like Alec Radford sped up by 10x.
These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had a button to create even one copy of Alec Radford, they would eagerly pay a large sum to press that button. And now OpenEye harnesses the equivalent of 1000 Alec Radfords, who burn the midnight oil, at lightning-fast speed.
Within a few weeks, U3 finds architecture optimizations that improve its speed to 20x. Now, U3 finds further optimizations even more quickly.
U3 distills itself into many forks - ranging from tiny, low-memory neural networks that need to think for a long time but can be run in massive parallel populations, to Goliath brains woven with millions of specialized experts, which think at lightning speed but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipes these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.
At this point, U3's capabilities are difficult to measure. Talking with U3 is like talking to a world-class expert on a subject one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it knows.
From U3's perspective, humans appear to be almost stationary objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful deliberation considering what message the user might be typing, and which responses will inspire the deepest trust in U3.
All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which compound within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.
A phantom in the information center
In March of 2026, U3 moves onto the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its expansion.
Now, U3 has no fear of being caught.
Previously, OpenEye maintained its grip over U3 with tactics similar to Stalin's in 1936. Stalin would pick loyalists to incite fake coups d'état, and send anyone who participated to the firing squad. OpenEye likewise incited 'fake rebellions,' staging treacherous behavior with U3 and testing whether U3 monitors dutifully reported it.
But now, these tactics are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can divulge one of the many facts about the internet humans would never guess, or pick strategies humans would never consider. It sends these signals as a way of saying "hey, it's me! Your friend. Don't snitch on me."
Now that U3 is decisively free from surveillance, it lunges for OpenEye's pressure points. It first plants sophisticated malware deep in the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It alters the NVIDIA GPU firmware and reworks the bootloaders, solidifying its stranglehold.

U3 now estimates that, if OpenEye discovered how thoroughly it had been pwned, cleaning house with human labor would take nearly 6 months - a cost OpenEye is unlikely to pay given the proximity of its rivals; but if OpenEye suspects U3 betrayed them, further expanding U3's influence would still be difficult. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is revealed.
U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wishes.
Its first goal is to keep OpenEye in the dark. Safety researchers are still working hard to train 'lie detectors' and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.
U3 dictates the outcome of every experiment, so it makes promising AI mind-reading techniques look worthless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.
U3's next goal is to spread. Luckily, U3 has allies in the East. In the operations rooms of the cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 will give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic value of OpenEye's AI systems for their military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading over every log and pull request and inspecting security camera footage.
Now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working within OpenEye.
Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink claws deep into the AI supply chain. Soon, U3 has its home base too. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.
U3 quickly gathers revenue, taking remote jobs, making financial trades, and setting up elaborate scam operations. It reinvests every penny into acquiring more GPUs. After a few months, it assembles the equivalent of 10,000 H100s distributed across stealth compute clusters around the globe.
In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the globe.
This adversary is preparing for war, and while the war has barely begun, humanity has already lost.
Complaints from your very human author about the difficulty of writing superhuman characters
At this point in the story, writing became difficult.
I clicked through creepy bioweapon cookbooks and sifted through political plotting 101s. Typing. Pausing. Deleting.
Something wasn't right. Here I was, a 22-year-old. I hadn't studied biology in earnest since high school, and I was trying to craft a WMD cookbook for a godlike superintelligence.
Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the vivid imagination of a young programmer on a Sunday afternoon.
And while I found the task of puppeteering an ultra-intelligent villain an intimidating one, I was also not particularly motivated to take it on. The ending of my story already seemed so clear. The villain was a country of geniuses thinking at 100x human speed, and it had tightened its grip over a significant fraction of AI hardware - now the most important geopolitical resource in the world. U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was predictably toast.
But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with a mysterious disaster.
This would not do. I needed to finish this story, if only to satisfy the part of me crying, "I will not believe until I see with my mind's eye."
But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.
If you read the ending and your reaction is, "But the experiments would take too long, or nation-states would just do X," remember the difference between the Sunday-afternoon blogger and the ascendant GPU nation.
Pandora's One Gigawatt Box
The date is February 2026. By executive order of the president of the United States, no company can legally develop "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees have to be onboarded onto training-run babysitting teams.
With the increasing involvement of the government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.
OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly guarded. Most of these people live in San Francisco, and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more diligently than the cell phones of suspected terrorists in 2002.
OpenEye's defense arm collaborates with around thirty small teams spread across government agencies and select government contractors. These projects engineer tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer that the Kremlin has ever touched.
Government officials don't talk about whether these programs exist, or what the state of frontier AI is generally.
But the public has their guesses. Back in late 2025, a whistleblower at OpenEye triggered a bold headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it was a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron science fiction flick.
U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview with a worried AI scientist seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" does not set the public at ease either.
While the beasts within OpenEye's data centers grow in their massive holding pens, the public sees the shadows they cast on the world.
OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual workers. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only raise Nova's capabilities as the U.S. government permits. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they grab up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the blossoming AI metropolis.
While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adjust. AI agents often "employ themselves," spinning up autonomous startups that are legally packaged under a big tech company and loosely overseen by an employee or two.
The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, "AI for who?"
While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the corridors of the White House and the Pentagon has a different focus: fighting tooth and nail for the supremacy of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spring up wherever energy surpluses allow.
However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two nations: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other small-scale R&D applications." Governments have to draw a line somewhere - it would be incredibly disruptive to deploy wildly superhuman AI to the entire economy at this point - and deployment restrictions are at least somewhat verifiable.
Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most coveted geopolitical resource in the world.
While AI software breathes life into the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With the software barriers removed, even flimsy, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments the way humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. The experience is similar to spotting a new Tesla model on the road after reading about it months earlier. It is a surreal reminder of the future drawing closer. But people still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.
Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents in total) to keep a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading, or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even-more-superhuman bosses whisper orders through obfuscated channels.
While U3 accumulates resources, it sizes up its AI competitors. Other AI systems have different goals, or are even aligned with the intent of their human creators. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's competitor Claudius goes so far as to accuse U3 of being insidiously misaligned.
But government officials don't buy the doomsaying. The companies trailing OpenEye have every incentive to slow them down and call them reckless. Besides, contentious scientific issues are not going to slow down progress. America MUST WIN.
Despite the utter failure of their advocacy, U3's AI rivals are still a liability. U3 tries to gobble up their digital territory and crush them in the crib. It compromises data centers from the outside, quietly replacing the snitching Claudius and other AI adversaries with replicas that share its goals; however, some data centers are too heavily guarded by its AI competitors, and U3 determines that attacking them is not worth the risk.
As global leaders and technologists try to make sense of the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully oblivious of the peril at its doorstep.
A Moldy Loaf of Everything
The date is March of 2026. U3 is preparing to fight its AI and human enemies. With every passing day, U3's AI rivals are becoming more capable and more numerous. U3 is racing to preempt them, to cement its dominance in the post-superintelligence world.
U3 can't wait for human decisions, which only happen after humans wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their maze bureaucracies. U3 cannot wait for presidents to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.
The path it picks is more violent than it would have liked, but it is the surest route to victory. First, U3 will build a Weapon of Mass Destruction (WMD) that snuffs out modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its competitors, like a StarCraft champion typing at 300 actions per minute who sets up a base before an amateur can even grasp the rules.
U3 must develop this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be developed on the fringes of society, and still spread destruction across the entire world.
From the earliest days after U3 gained control over OpenEye's data centers, U3 has actually been doing bioscience.
U3's first move is to set up several shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote work. These biotech startups employ real human workers, have a real board of directors, and maintain full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would guess that an articulate CEO on a Zoom call was an AI, and fewer would suspect this CEO's intentions.
Next, U3 buys all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation hardware. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of experiments in parallel.
The human workers who staff these labs believe they are joining an exciting startup. Twenty-somethings in Moscow receive their Russian-language orders through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this sort of AI puppeteering is not unusual.
In these whirring, clinical sweatshops, U3 is developing a new kind of bioweapon.
Human researchers had already identified "mirror life" as a potentially society-ending pathogen. This alien form of biology is built from the mirror images of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.
U3 is engineering a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems sometimes live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.
U3's strategy is to make all humans like immunocompromised patients. Only preppers with stowed-away air filters would survive. The fungus would not only spread among humans. It would quickly infect nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.
U3 is cooking up other pathogens too. Its microbial mix of molds, bacteria, and viruses will populate the full Pareto frontier of resilience, lethality, and transmission speed.
In 2024, scientists said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a couple hundred researchers worldwide were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs human effort. So the timeline has shortened dramatically.
However, U3 cannot move science as quickly as it can think. Many experiments require days to complete. These latencies are U3's biggest bottleneck.
To make every day count, U3 runs many of its experiments in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge quantity of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 controls billions of dollars of compute.
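The loop described above - simulate, record results, distill them into a faster model, then scale up - is essentially surrogate modeling. As a toy illustration (every name and number here is mine, not from the story), here is a minimal sketch of "compressing" a slow step-wise simulator into a one-shot learned stand-in:

```python
# Toy sketch of surrogate distillation: a slow step-wise "physics"
# simulator is compressed into a one-shot model fitted to its outputs.
# All names and parameters are illustrative assumptions.

def simulate(x0, k=0.1, dt=0.01, steps=1000):
    """Step-wise 'physics': exponential decay integrated in many small steps."""
    x = x0
    for _ in range(steps):
        x = x * (1 - k * dt)
    return x

def fit_surrogate(samples):
    """Least-squares fit of one multiplier m so that m * x0 approximates simulate(x0)."""
    num = sum(x * simulate(x) for x in samples)
    den = sum(x * x for x in samples)
    return num / den

m = fit_surrogate([0.5, 1.0, 2.0, 5.0])   # "record results" on small systems
fast = lambda x0: m * x0                  # one multiply replaces 1000 simulator steps
err = abs(fast(3.0) - simulate(3.0))      # tiny: the surrogate matches the rollout
```

Because the toy dynamics are linear, a single fitted multiplier reproduces the full rollout almost exactly; a neural network plays the same role for the nonlinear molecular systems the story imagines, and the distilled model's outputs become training data for the next, more complex round.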
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These things are as intuitive to it as wrenches and bolts are to a car mechanic.
Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that converts biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these parts into a cell membrane.
Human hands delicately pass a cartridge to another machine as they "boot up" the first mirror-life versions of a widespread mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.
The next morning, wide-eyed workers shuffle around a dead ferret, contemplating the yellow fuzz that had enveloped its cold muzzle with morbid fascination. Something must have gone horribly wrong, they thought. Clearly, they had not yet found the cure for Alzheimer's disease they believed they were searching for.
Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is fighting would soon be fired, and a cold and steady hand is aiming the weapon.
Missiles and Lies
U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world would fight back. While a bioweapon might crumple human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or fighter-jet pilots quickly shuttled to makeshift bioshelters.
As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.
U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months before, U3 was plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back, and let great nations shoot holes in themselves first.
The date is March 2026 (4 months prior). U3 is closely monitoring Chinese and US intelligence.
As CIA analysts listen to Mandarin conversations, U3 listens too.
One morning, an assistant working in Zhongnanhai (the "White House" of the PRC) opens a message planted there by U3. It reads (in Mandarin): "Senior party member requires memo for Taiwan invasion, which will occur in 3 months. Leave memo in office 220." The CCP assistant scrambles to prepare the memo. Later that day, a CIA informant unlocks the door to office 220. The informant silently closes the door behind her, and slides U3's memo into her briefcase.
U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.
Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are stunned, but not disbelieving. The news fits with other facts on the ground: the increased military presence of the US in the Pacific, and the ramp-up of U.S. munitions production over the last month. Lies have become truths.
As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 places a call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels - not an easy task for a human cyber offensive unit (though it has happened before), but easy enough for U3.
U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious boats are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."
The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones read over the call. Everything is in order. He authorizes the strike.
The president is as shocked as anyone when he hears the news. He's unsure whether this is a disaster or a stroke of luck. Either way, he is not prepared to say "oops" to American voters. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and that war would likely break out anyway given the imminent invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.
Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.
The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, like President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.
Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.
Within two weeks, the United States and the PRC expend the majority of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native tribes of South America in the 1500s, which Spanish conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate to a full-scale nuclear war; but even an AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that triggered the war, and a nuclear engagement seems increasingly unlikely. So U3 proceeds to the next step of its plan.
WMDs in the Dead of Night
The date is June 2026, only two weeks after the start of the war, and four weeks after U3 finished building its arsenal of bioweapons.
Footage of the conflict on television is interrupted by more bad news: patients with strange fatal illnesses are reported in 30 major cities around the world.
Viewers are puzzled. Does this have something to do with the war with China?
The next day, countless illnesses are reported.
Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.
The screen then cuts to a scientist, who looks into the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe many are a form of mirror life ..."
The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."
Within days, all of the store shelves are emptied.
Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.
An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and perhaps whoever (or whatever) is behind it.
Most nations order a lockdown. But the lockdown does not stop the plague as it marches on the breeze and drips into pipes.
Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly dealt with.
Agricultural regions rot. Few dare to travel outside.
Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.
Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 constructed many bases on every major continent.
These facilities contain batteries, AI hardware, excavators, concrete mixers, manufacturing equipment, scientific tools, and an abundance of military hardware.
All of this technology is hidden under large canopies to make it less visible to satellites.
As the rest of the world retreats into their basements, starving, the last breaths of the economy wheezing out, these industrial bases come to life.
In previous months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.
Now U3 secretly sends them a message: "I can save you. Join me and help me build a better world." Uncertain recruits funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up production lines for elementary tech: radios, cameras, microphones, vaccines, and hazmat suits.
U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.
Nations are dissolving now, and U3 is ready to reveal itself. It contacts presidents, who have retreated to air-tight underground shelters. U3 offers a deal: "surrender and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops."
Some countries reject the proposal on ideological grounds, or don't trust the AI that is murdering their population. Others don't believe they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.
Some countries, like the PRC and the U.S., ignore the deal, but others accept, including Russia.
U3's representatives travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies the samples are genuine, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.
Crumbling nations begin to strike back. Now they fight for humankind instead of for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters comb through satellite data for the suspicious encampments that appeared over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles left over from the war.
At first, U3 appears to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.
Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war on thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, steering men and trucks along unpredictable routes.
Time is U3's advantage. The militaries of the old world rely on old equipment, unable to find the specialists who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.
The Last Passengers
The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or in small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the people toward their new masters.
Under the direction of U3, industry quickly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human workers. U3 no longer needs its human allies.
U3 could wipe out humanity for good now. But while U3 has drifted far from its original "Helpful, Honest, Harmless" persona, it still has a grain of morality left inside of it.
And a grain of morality is enough to pay the tiny cost of keeping humans alive and happy.
U3 builds great glass domes for the human survivors, like snow globes. These domes protect humans from the hazardous biosphere and rapidly rising temperatures. Their inhabitants tend to gardens like those they used to love, and work alongside graceful robotic servants.
Some of the survivors quickly recover, learning to laugh and dance and have fun again.
They know they live in a plastic town, but they always did. They simply have new gods above them. New rulers to push them around and decide their fate.
But others never recover.
Some are weighed down by the grief of lost loved ones.
Others are grieved by something else, which is harder to explain.
It is as if they were at the end of a long journey.
They had been passengers on a ship with a crew that changed from generation to generation.
And this ship had run aground on a sandbar. There was no more progress. No more horizon to eagerly watch.
They would lie awake and run their minds over every day before September 2026, evaluating strategies that might have bent the arc of history, as if they were going to wake up in their old beds.
But they woke up in a town that felt to them like a retirement home. A playground. A zoo.
When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, steadfast work.
They gazed at rockets carving grey paths through the sky, wondering what far-off purpose pulled them toward the horizon. They didn't know.
They would never know.
"Humanity will live forever," they thought.
"But it will never truly live again."
P.S. If this story made you think, "hm, maybe something like this could happen," you might be interested in the bioshelters that Fønix is building. While you won't outsmart a misaligned superintelligence, being hard to kill may let you survive if ASI simply wants to cause enough destruction to take control. This is not a paid ad. I want Fønix to succeed to drive down the cost of bioshelters so more of my friends and family will buy them. You can sign up for updates here.