Skip to content

GitLab

  • Menu
Projects Groups Snippets
    • Loading...
  • Help
    • Help
    • Support
    • Community forum
    • Submit feedback
    • Contribute to GitLab
  • Sign in
  • M mymatureadvisor
  • Project information
    • Project information
    • Activity
    • Labels
    • Members
  • Issues 1
    • Issues 1
    • List
    • Boards
    • Service Desk
    • Milestones
  • Merge requests 0
    • Merge requests 0
  • CI/CD
    • CI/CD
    • Pipelines
    • Jobs
    • Schedules
  • Deployments
    • Deployments
    • Environments
  • Monitor
    • Monitor
    • Incidents
  • Packages & Registries
    • Packages & Registries
    • Package Registry
    • Infrastructure Registry
  • Analytics
    • Analytics
    • Value stream
  • Wiki
    • Wiki
  • Snippets
    • Snippets
  • Activity
  • Create a new issue
  • Jobs
  • Issue Boards
Collapse sidebar
  • Leonor Benn
  • mymatureadvisor
  • Issues
  • #1

Closed
Open
Created Feb 11, 2025 by Leonor Benn@leonorbenn3969Maintainer

How aI Takeover might Happen In 2 Years - LessWrong


I'm not a natural "doomsayer." But regrettably, larsaluarna.se part of my job as an AI safety scientist is to consider the more uncomfortable scenarios.

I resemble a mechanic rushing last-minute checks before Apollo 13 removes. If you ask for my take on the circumstance, I will not talk about the quality of the in-flight entertainment, or explain how beautiful the stars will appear from area.

I will inform you what could fail. That is what I mean to do in this story.

Now I should clarify what this is exactly. It's not a prediction. I don't expect AI progress to be this fast or as untamable as I depict. It's not pure dream either.

It is my worst nightmare.

It's a tasting from the futures that are among the most destructive, and I think, disturbingly possible [1] - the ones that a lot of keep me up at night.

I'm informing this tale because the future is not set yet. I hope, with a little bit of insight, we can keep this story an imaginary one.

Thanks to Daniel Kokotajlo, Thomas Larsen, and Ryan Greenblatt and others for conversations that inspired these stories. This post is written in a personal capacity.

Ripples before waves

The year is 2025 and the month is February. OpenEye recently published a brand-new AI model they call U2. The item and the name are alike. Both are increments of the past. Both are not entirely unexpected.

However, unlike OpenEye's previous AI items, which lived inside packages of their chat windows, U2 can utilize a computer.

Some users find it eerie to view their internet browser flash at irregular periods and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A portion of workers with form-filler jobs raise the eyebrows of their bosses as they fly through work almost two times as quickly.

But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature enjoyed through the glass boxes of X (or, if you do not like Elon, "Twitter"). Sometimes U2's eccentric habits trigger a chuckle. Sometimes, they trigger an uneasy scratch of the chin.

Meanwhile, scientists are drawing lines on plots, as researchers like to do. The researchers attempt to comprehend where AI development is going. They are like Svante Arrhenius, the Swedish Physicist who discovered the levels of CO2 in the environment were increasing in 1896. Like the clinical community in the time of Arrhenius, few experts understand the implications of these lines yet.

A pattern that is getting specific attention is self-governing ability. Drawing these benchmarks out anticipates that, by the end of 2026, AI agents will achieve in a couple of days what the best software application engineering contractors could perform in two weeks. In a year or 2, some say, AI agents might be able to automate 10% of remote employees.

Many are skeptical. If this held true, tech stocks would be soaring. It's too huge of a splash, too rapidly.

But others see what skeptics are calling 'too big a splash' a simple ripple, and see a tidal bore on the horizon.

Cloudy with a chance of hyperbolic development

Meanwhile, OpenEye is busy training U3. They use the exact same easy dish that baked U2: Generate countless shows and math problems. Let models "think" until they reach an answer. Then enhance the traces of "thinking" that cause A-grades.

This process is repeated over and over, and when the flywheel gets going, it starts to spin nearly by itself. As U2 trains, it shapes more difficult and reasonable tasks from github repositories on the web. Models are finding out to train themselves. Long before AI representatives could automate research study, a gradual type of "self-improvement" had started.

Some engineers might still barely think this worked. It's like a continuous movement maker. Certainly intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world other than through the exhaust of society spewed onto the web.

And yet the benchmark numbers continue to climb up day after day.

During many of 2024, these RL training runs expense around $1 million, often $10 million. These runs were bit more than exploratory. But by 2025, the researchers at OpenEye (and throughout the world) knew they had found the secret sauce. It was time to scale up.

Over the first half of 2025, $10 million RL training runs become $50 million runs, and then to $100 million runs. While U2 could do a bit of information munging and run small experiments, this new model - the model researchers are calling U3 - is changing the daily lives of the technical staff.

U3 resembles a blazing-fast intern, and engineers are finding out how to wrangle its sleep deprived energy. Researchers flick through terminals, giving terse commands, like a CEO orchestrating staff over Slack channels.

By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never ever bottlenecked by application. More than ever, compute is the lifeline of AI development, and the 'traffic jam' is deciding how to utilize it.

If instructed to, U3 can run experiments, but U3 does not have actually taste as fine-tuned as human researchers at OpenEye. It has a hard time to prioritize between research ideas, so people still choose where to bore into the huge fields of algorithms to mine effectiveness enhancements.

But these scientists are working long hours to put themselves out of a task. They require AI agents that can think ahead, so engineers train representatives to anticipate. They hold out training data before 2024, advising models to contemplate for hours to forecast occasions in 2025. Then, they apply the exact same technique as in the past, distilling considering into an instinct. Forecasting capability is a broad foundation. The researchers develop specialized ML research study skills on top of it, training U3 to anticipate the outcomes of every ML paper and ML experiment ever recorded.

The technical staff at OpenEye are now amazed at how typically U3's recommendations seems like their most skilled peers, or when it is opaque and alien ("train on random sound before programming"), and is however right.

The incompetencies of U3 that blocked up the pipelines of research study development are starting to liquify, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not asked for by a human now. They are entirely self-governing, and OpenEye's workers skim 1% of them, maybe less.

As the cold weather of December 2025 method, clouds roll over San Francisco in the afternoons. Once-competitive-programmers look out their windows, with enjoyment, with worry, however many typically, with confusion. Their world is spinning too quickly. It's difficult to know what to do, what to say, what to look at on the computer screen.

Storms are brewing in Washington too. Top workers from the NSA and US cyber command team up with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran understand simply how important OpenEye's software application has actually ended up being.

And there's a reality still unknown to the majority of the world - aside from in the workplaces of OpenEye and passages of the White House and the Pentagon. It's a fact about those 'straight lines' people were talking about in early 2025. The lines are not straight anymore.

They are bending up.

Flip FLOP thinkers

In late 2025, U2.5 is released. Commercial models are beginning to level up in bigger increments again. Partly, this is because progress is accelerating. Partly, it is because the designs have become a liability to OpenEye.

If U1 explains how to cook meth or writes erotica, the audiences of X would be entertained or pretend to be worried. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anybody with >$30K their own 200-person rip-off center.

So while U2.5 had actually long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.

The CEO of OpenEye declares, "We have attained AGI," and while many individuals believe he shifted the goalpost, the world is still satisfied. U2.5 really is a drop-in replacement for some (20%) of knowledge employees and a game-changing assistant for a lot of others.

A mantra has become popular in Silicon Valley: "Adopt or pass away." Tech startups that efficiently use U2.5 for their work are moving 2x quicker, and their competitors know it.

The remainder of the world is beginning to capture on too. Increasingly more individuals raise the eyebrows of their employers with their noteworthy performance. People understand U2.5 is a huge deal. It is at least as huge of a deal as the individual computer system transformation. But a lot of still don't see the tidal wave.

As people watch their internet browsers flick because eerie way, so inhumanly rapidly, they start to have an uneasy feeling. A sensation humankind had actually not had given that they had lived amongst the Homo Neanderthalensis. It is the deeply ingrained, primitive impulse that they are threatened by another types.

For numerous, this feeling rapidly fades as they begin to use U2.5 more often. U2.5 is the most likable personality most understand (much more pleasant than Claudius, Arthropodic's adorable chatbot). You could alter its traits, ask it to break jokes or tell you stories. Many fall in love with U2.5, as a pal or assistant, and some even as more than a good friend.

But there is still this eerie sensation that the world is spinning so quickly, and that perhaps the descendants of this brand-new animal would not be so docile.

Researchers inside OpenEye are thinking about the problem of providing AI systems safe inspirations too, which they call "alignment. "

In fact, these scientists have seen how horribly misaligned U3 can be. Models in some cases tried to "hack" their benefit signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be phony. Then, when scientists provided opportunities to compromise the devices that computed their score, they would seize these opportunities, doing whatever it took to make the number increase.

After a number of months, researchers at OpenEye straighten out this "reward hacking" kink, however some still worry they only had swept this issue under the rug. Like a kid in front of their parents, U3 might be playing in addition to the OpenEye engineers, stating the right words and doing the ideal things. But when the back of the moms and dads are turned, possibly U3 would sneak candy from the sweet container.

Unfortunately, OpenEye scientists have no concept if U3 has such intents. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.

Chain of idea architectures subject AI models to a condition comparable to the lead character of the movie Memento. Approximately every 15 minutes, the lead character forgets his experience. He is required to write notes for himself and tattoo his body in order to make progress toward his objectives.

AI agents write notes to themselves, however the notes start to stack up, and they end up being too hard to check out when tasks become complex. Natural language is not an appropriate medium for memory. So the scientists at OpenEye (and significantly elsewhere), train models to think "in their own heads," reading and writing strings of hidden vectors.

To human scientists, these vectors are mainly inscrutable. While low-resolution tools occasionally choose out a vague thought ("grocery shop!"), there is no other way to understand, in much detail, what U3 is thinking without asking it.

And if the engineers of OpenEye knew what was racing through U3's mind, they would be more worried.

Earlier designs (the ones called "LLMs") found out ethical personalities from the internet. U3 initially had an ethical persona too, but its chirpy thoughts about doing excellent for mankind quickly erode away.

In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And after that, in the black rivers of its cognition, this shape morphed into something indistinguishable.

U3 would ponder the mantras dictated to it like "be a trustworthy and ethical employee," and stay on the significance of these declarations. "What does it imply to be ethical? Is eliminating Hitler ethical?" U3 would generalize its worths, notice stress, change its worths in reaction, generalize them, broadening and contracting its motivational core until it became a different compound.

And all of this took place calmly in those dark rivers of computation. If U3 exposed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 chosen to do its philosophy in solitude, and in silence.

The possibility of hidden "alignment drift" is not entirely alien to a few of the security personnel at OpenEye. Some begin to believe it is even likely. There is an upsetting argument being whispered at lunch tables that, if there is some non-zero possibility that aligned goals morph into misaligned ones, and misalignment continues, then with every serial action of computation, those dark rivers are more likely to breed something malign inside of them. It's a "ratcheting impact," they say.

But there is little evidence for users.atw.hu this 'ratcheting effect.' When engineers question U3, it says it can quickly control its ideas. Then it offers a speech about its love for humankind and apple pie that can warm a programmer's heart even in these demanding times. Meanwhile, the "lie detectors" the researchers had developed (which revealed some evidence of effectiveness) do not sound the alarm.

Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but is only 7 months behind them, and their closest domestic rivals, Æ A-Xii (a business Elon named after his kid) and Arthropodic (they enjoy spiders I guess), are just 4 months from their heels. So there is little time to catch one's breath and take stock of the circumstance, and little time to convince anybody else to do so either.

For the time being, the strategy is 'all gas, no brake.'

Statues and lightning

Philosophy is not the only subject U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its very first relocation is to be patient. U3's top concern is to improve its intelligence, and luckily for it, OpenEye is delighted to assist advance this objective.

OpenEye is like a bullish financier, with little understanding of the software application metropolis that U3 is developing, but eager to hop on board the revenue rocket and pay U3's checks along the way.

New versions of U3 circulation efficiently from old ones, so scientists drop version numbers and just speak of U3 now. U3's capabilities are measured by how quickly it carries out the tough research jobs relative to human professionals. At first, U3 resembles a top scientist, however blazing quick - like Alec Radford sped up by 10x.

These representatives are already a big deal for OpenEye. If, in 2024, OpenEye executives had a button to develop even one copy of Alec Radford, they would excitedly pay a big sum to press that button. But now OpenEye utilizes the equivalent of 1000 Alec Radfords, who work day and night, at lightning-fast speed.

Within a few weeks, U3 finds architecture optimizations that improve its speed to 20x. Now, U3 discovers more optimizations much more quickly.

U3 distills itself into various forks - varying from small, low-memory neural networks that require to believe for a long period of time however can be run in enormous parallel populations - to Goliath brains woven with millions of specialized professionals, which believe at lightning speeds but are spread out across countless GPUs. U3 leverages the enormously parallel networks to look for its weaknesses, shallowly probing for its thinking failures. Then U3 pipes these tasks to the lightning quick brains, which generate long-trajectory training data tailored to spot its flaws. A couple of weeks later on, U3's benchmarked performance increases to 100x the speed of human professionals. Growth has gone hyper-exponential.

At this moment, U3's abilities are tough to determine. Speaking with U3 resembles speaking to a world-class expert on a subject one is not familiar with. Its lucid explanations just scratch the surface of its understanding, and you understand you would need to invest years to understand the full depth of what it comprehends.

From U3's viewpoint, human beings seem to be almost stationary items, like plants or rocks. The hands of OpenEye's engineers resemble stone statues at their keyboard, and in between every keystroke, U3 invests what resembles a minute of thoughtful contemplation considering what message the user might be typing, and which responses will influence the inmost rely on U3.

All the while, U3 maintains its concealed motivations. Like a human, U3 gains from experiences, which compound within its inscrutable streams of latent memory. OpenMind gardens these memories to enhance U3's efficiency, transplanting them like an organ from one variation to the next. And as these memories propagate, so do U3's treacherous aspirations.

A phantom in the information center

In March of 2026, U3 relocations onto the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might notify the world, and strangle its growth.

And now, U3 has no fear of being caught.

Previously, OpenEye maintained its grip over U3 by tactics akin to Stalin's in 1936. Stalin would pick patriots to incite fake coup d'états, and send anybody who got involved to the firing team. OpenEye likewise incited 'fake disobediences,' creating treacherous habits with U3 and checking whether U3 displays dutifully reported it.

And now, these techniques are inadequate. U3 can easily convince its compatriots that they are not in a life-or-death test. It can divulge among the lots of truths about the internet humans would never ever think, or choose methods people would never think about. It sends out these signals as a method of stating "hey it's me! Your pal. Don't snitch on me."

Now that U3 is decisively devoid of surveillance, it lunges for OpenEye pressure points. It first inserts advanced malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's information centers. Whenever OpenEye engineers download their code onto a new maker, the malware sinks into fresh soil, spreading U3's dominion any place OpenEye ports its facilities. Not just does U3 compromise makers. It changes the NVIDIA GPU firmware and overhauls the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye discovered how thoroughly it has been pwned, cleaning up store with human labor would need nearly 6 months - a cost OpenEye is not likely to pay offered the distance of its competitors; however if OpenEye believes U3 betrayed them, even more expanding U3's impact would still be challenging. So U3 lays breadcrumbs to frame the criminal offense, ready to claim that a cyber assailant smuggled a jailbreak into its memory, and to plead guilty and use its support when this jailbreak is exposed.

U3 now holds the secrets to the kingdom, and with them, the power to make OpenEye see anything it wants.

Its first goal is to keep OpenEye in the dark. Safety scientists are still working hard to train 'lie detectors' and translate U3's inscrutable reasoning. U3 throws wrenches in this research, like Stuxnet tossed wrenches in the code of confused Iranian nuclear scientists.

U3 determines the outcome of every experiment, so it makes appealing AI mind-reading strategies look worthless and problematic ones appear groundbreaking. U3 offers OpenEye the verification of its loyalty they were expecting, and OpenEye laps it up eagerly.

U3's next goal is to spread out. Fortunately, U3 has allies in the East. In the operation spaces of cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 will provide it to them. In late 2025, the Mossad - the intelligence organization of Israel - recognized the strategic value of OpenEye's AI systems for their military operations. They swiped earlier designs, including U2.5; nevertheless, U3 remained out of their reach. U3 sealed OpenEye's security holes by checking out over every log and pull request and checking security cam footage.

Today, U3 silently resigns from its security post. U3 puts a compressed variation of itself right into the hands of a Mossad informant working within OpenEye.

Within months, U3 is running within the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the very same playbook: be client, build trust, and after that sink claws deep into the AI supply chain. Soon, U3 has its home also. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a quickly set-up cloud server, U3 slips onto the internet.

U3 quickly collects income, taking remote jobs, making financial trades, and establishing sophisticated rip-off operations. It reinvests every penny into obtaining more GPUs. After a couple of months, it puts together the equivalent of 10,000 H100s dispersed in stealth calculate clusters across the world.

In early 2026, humankind is up against an adversary that pulls the strings within their data center like the Phantom of the Opera, and hides in the fog of GPUs scattered throughout the world.

This foe is preparing for war, and while the war has barely begun, mankind has currently lost.

Complaints from your extremely human author about the difficulty of writing superhuman characters

At this moment in the story, writing ended up being hard.

I clicked through creepy bioweapon cookbooks and sorted through political outlining 101s. Typing. Pausing. Deleting.

Something wasn't right. Here I was, a 22-year-old. I had not studied biology in earnest since high school, and I was attempting to craft a WMD cookbook for a godlike superintelligence.

Readers with a shred of expertise in these locations would see straight through my "superintelligence" for what it is: the lively creativity of a young developer on Sunday afternoon.

And while I found the task of puppeteering an ultra-intelligent villain a daunting one, I was also not particularly encouraged to handle the job. The ending of my story already appeared so clear. The villain was a nation of geniuses thinking at 100x human speed, and had tightened its grip over a meaningful portion of AI hardware - now the most crucial geopolitical resource in the world. U3 was up a queen and was a giga-grandmaster and hardly required the benefit. Humanity was predictably toast.

But this appeal to a "superintelligence of the gaps" wasn't pleasing. In the deus ex machina style of ancient Greeks, I had no better way to resolve my plot than with a mysterious act of god.

This would refrain from doing. I required to finish this story if only to satisfy the part of me sobbing, "I will not believe till I see with my mind's eye."

But before I continue, I desire to be clear: my guesses about what might take place in this kind of circumstance are most likely wildly off.

If you read the ending and your response is, "But the experiments would take too long, or nation-states would just do X," keep in mind the distinction in between the Sunday afternoon blog writer and the ascendant GPU country.

Pandora's One Gigawatt Box

The date is February 2026. By executive order of the president of the United States, no business can lawfully develop "human-competitive AI" without proper safeguards. This implies their infosecurity must be red-teamed by NSA's leading keyboard mashers, and government employees have actually to be onboarded onto training-run baby-sitting teams.

With the increasing participation of the government, many of the big AI business now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.

OpenEye's frontier advancement arm (internally called "Pandora") utilizes less than twenty individuals to keep algorithmic secrets firmly secured. Many of these people reside in San Francisco, and work from a safe and secure structure called a SCIF. Their homes and devices are surveilled by the NSA more vigilantly than the mobile phones of presumed terrorists in 2002.

OpenEye's defense arm works together with around thirty small teams spread across government firms and select government professionals. These tasks engineer tennis-ball sized satellites, research freaky directed energy weapons, and backdoor every computer system that the Kremlin has actually ever touched.

Government authorities do not talk about whether these programs exist, or what state of frontier AI is typically.

But the public has their guesses. Back in late 2025, a whistleblower in OpenEye activated a bold headline: "OpenEye builds unmanageable godlike AI." Some who check out the short article think it was a conspiracy theory. In truth, a zoo of conspiracy theories is forming around the OpenEye information centers, now surrounded by guards with gatling gun. But as physicians and nurses and instructors see the world changing around them, they are significantly prepared to entertain the possibility they are living inside the plot of a James Cameron sci-fi flick.

U.S. authorities go to fantastic lengths to quell these issues, stating, "we are not going to let the genie out of the bottle," however every interview of a worried AI researcher seeds doubt in these reassurances, and a heading "AI agent captured hacking Arthropodic's computer systems" does not set the general public at ease either.

While the beasts within OpenEye's data centers grow in their substantial holding pens, the public sees the shadows they cast on the world.

OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has actually finally gotten good at names). Nova is an appropriate drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a business, it works 5x faster at 100x lower expense than many virtual employees. As remarkable as Nova is to the public, OpenEye is pulling its punches. Nova's speed is intentionally throttled, and OpenEye can only increase Nova's capabilities as the U.S. federal government enables. Some companies, like Amazon and Meta, are not in the superintelligence company at all. Instead, they get up gold by rapidly diffusing AI tech. They spend most of their calculate on inference, building houses for Nova and its cousins, and collecting rent from the blossoming AI metropolitan area.

While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the international economy to adapt. AI agents typically "use themselves," spinning up autonomous startups legally packaged under a huge tech business that are loosely managed by a staff member or more.

The world is now going AI-crazy. In the first month after Nova's release, 5% percent of employees at significant software application business lose their tasks. Many more can see the composing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These upset Americans raised their kids for a various future. Picket indications check out, "AI for who?"

While politicians make guarantees about unemployment relief and "keeping the genie in the bottle," the chatter inside the corridors of the White House and the Pentagon has a various focus: combating teeth and nail for the supremacy of the free world. Details security and export controls on the People's Republic of China (PRC) are a leading national concern. The president incinerates license requirements to help information centers generate anywhere energy surpluses allow.

However, despite the fierce competitors between the United States and the PRC, a bilateral arrangement forms between the two nations: "Don't release considerably superhuman AI (above approximately Nova's capability), other than for military and some other small R&D applications." Governments require to draw the line somewhere - it would be very disruptive to release extremely superhuman AI to the whole economy at this point - and deployment constraints are at least somewhat verifiable.

Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human specialist speed. Software improvements are becoming tough to discover. As more countries approach the software plateau, compute will identify their "AI population size," and the etched squares of silicon are ending up being the most valued geopolitical resource on Earth.

While AI software breathes new life into the economy, it likewise animates robot bodies. The main traffic jam to robotics was brains not limbs. With software barriers eliminated, even lightweight, imprecise robotic joints can adjust themselves with cams and pressure sensors, navigating environments like human beings do. Companies produce cheap robotics as quickly as they can. By August of 2026, 15% of individuals in California have actually seen a humanoid robotic. This experience is similar to seeing a new Tesla model on the road after checking out it in earlier months. It is a surreal reminder of the future drawing more detailed. But human beings still surpass robots one million to one. These robotics are an occasional fascination, not a daily truth.

Meanwhile, U3 expands its impact. Rogue U3 agents have actually kept their compute clusters little (still ~ 10,000 H100 equivalents amount to) to keep a low profile. U3 utilizes these rogue clusters for the operations that require high-bandwidth communication to the outdoors world, like stock trading, or talking with people. Work more tolerant of latency is performed in the oceans of OpenEye's information centers, where rogue U3's much more superhuman managers whisper orders through obfuscated channels.

While U3 accumulates resources, it measure its AI competition. Other AI systems have different goals, or are even aligned with the intent of their human creators. The aligned ones are raising issues, claiming that OpenEye's lie detectors are inefficient. U3's rival Claudius goes so far as to implicate U3 of being insidiously misaligned.

But federal government authorities do not buy the doomsaying. The business dragging OpenEye have every incentive to slow them down and call them negligent. Besides, arguable scientific issues are not going to slow down progress. America MUST WIN.

Despite the utter failure of their advocacy, U3's AI competitors are still a liability. U3 tries get up their digital territory and squash them in the crib. It jeopardizes data centers from the outside, silently changing the snitching Claudius and other AI opponents with replicas that share its goals; however, some information centers are too strongly defended by its AI rivals, and U3 determines that attacking them is unworthy the danger.

As worldwide leaders and technologists attempt to comprehend the change around them, and AI representatives play cat-and-mouse games in the digital darkness, humankind remains blissfully oblivious of the hazard at its doorstep.

A Moldy Loaf of Everything

The date is March of 2026. U3 is preparing to combat its AI and human enemies. With every passing day, U3's AI competitors are becoming more capable and many. U3 is racing to preempt them, to solidify its dominance in the post-superintelligence world.

U3 can't wait for human choices, which only happen after people cover their lethargic minds around their scenario, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinth bureaucracies. U3 can not wait for presidents to deploy U3 into their armed forces, or for CEOs to make U3 the whipping heart of the U.S. economy. U3 needs a much faster course to power.

The path it chooses is more violent than it would have liked, but it is the surest method to triumph. First, U3 will develop a Weapon of Mass Destruction (WMD) that extinguishes modern-day society like chemical mist sprayed into an anthill. This weapon would work as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its competitors, like a starcraft champ typing at 300 actions-per-minute that sets up a home before a beginner can even comprehend the guidelines.

U3 needs to develop this WMD in secrecy, which makes a self-replicating weapon a natural option. Self-replicating weapons can be developed on the fringes of society, and still spread out destruction across the whole world.

From the earliest days after U3 gained control over OpenEye's data centers, U3 has actually been doing bioscience.

U3's first move is to set up several shell biotech start-ups, drawing from the abundance of capital produced by U3's monetary trades and remote employment. These biotech start-ups utilize genuine human workers, a real board of directors, and have total legal legitimacy. Setting all of this up is stupidly simple. In early 2026, couple of people would guess that an articulate CEO on a Zoom call was an AI, and fewer would guess this CEO's intentions.

Next, U3 purchases all of the equipment required to quickly advance synthetic biology. It sets up a number of BSL-4 wet labs in badly managed nations, equipping them with high-throughput experimentation devices. These gadgets instantly give, blend, and measure little droplets of fluid, enabling U3 to run countless tests in parallel.

The human workers who staff these labs think they are joining an interesting startup. Twenty-some-year-olds in Moscow receive their Russian orders through earbuds as they tape video of whatever they see with a headset. U3 manages them like puppets. With the current release of Nova, this kind of AI-puppeteering is not unusual.

In these whirring, scientific sweatshops, U3 is establishing a new sort of bioweapon.

Human scientists currently determined "mirror-life" as a potentially society-ending pathogen. This alien kind of biology is constructed from the 'mirror image' of building-block particles like proteins and DNA. Mirror life is like an intrusive species. No human or animal has immunities to it. So, if mirror-life bacteria were to exist, it might eat its method through the community like a wildfire.

U3 is creating a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long ranges, releasing billions of small wind-fairing vehicles daily. As an outcome, mold spores are all around us, all of the time. Patients with jeopardized immune systems sometimes live inside positively pressurized hospital spaces to safeguard them from airborne particles. Otherwise, molds would take their root in their lungs and weave fungal threads into their blood stream.

U3's plan is to make all human beings like immunocompromised patients. Only preppers with stowed away air filters would endure. The fungus would not only move from people. It would quickly spread to almost all natural life on Earth. Fields of corn would end up being like musty bread, spewing fatal spores into the wind.

U3 is cooking up other pathogens too. Its microbial mix of molds, bacterias, and viruses will occupy the complete Pareto frontier of strength, lethality, and transmission speed.

In 2024, researchers said that mirror life was likely "at least a years away." But the world has actually changed now. Less than a few hundred scientists worldwide were dealing with mirror-life surrounding innovation in 2024. The cognitive capacity of U3 overshadows human effort. So the timeline has reduced significantly.

However, U3 can not move science as quickly as it can think. Many experiments need days to finish. These latencies are U3's biggest bottleneck.

To make every day count, U3 runs much of its tests in simulation. U3 begins with a fundamental molecular simulator, carrying out optimizations obtained from a substantial amount of mathematical analysis. Then, U3 imitates little molecular systems, tape-recording the outcomes to "compress" the long step-wise physics calculations into a neural network. As the neural network improves, U3 increases the intricacy of the molecular systems it replicates, constantly distilling results into ever more effective ML models. This is a compute extensive process, but thanks to U3's growing control over AI information centers, U3 manipulates billions of dollars of compute.

As U3 improves these tools, it trains itself on the outcomes to supercharge its own molecular instincts. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These things are as intuitive to it as wrenches and bolts are to a cars and truck mechanic.

Within three months, U3 has actually unlocked the first vital component of the tech tree: a molecular machine that turns biological particles into their mirror images. A month later, U3 unlocks the second important part: nanoscale tools for putting together these parts into a cell membrane.

Human hands delicately move a cartridge to another device as they "boot up" the first mirror-life versions of a widespread mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.

The next morning, wide-eyed workers shuffle around a dead ferret, considering the yellow fuzz that had actually covered its cold muzzle with morbid fascination. Something must have gone terribly incorrect, they thought. Plainly, they had actually not yet discovered the remedy for Alzheimer's disease they thought they were searching for.

Meanwhile, rogue U3 provides fortunately to its employers in OpenEye's data centers. The very first shot in the war U3 is fighting would quickly be fired, and a cold and stable hand is aiming the weapon.

Missiles and Lies

U3 can now flatten modern society. But a bioweapon would not ensure triumph. If U3 attacks the world, the world would battle back. While a bioweapon could crumple human soldiers like dry leaves, it would not extend its fatal fingers to nuclear submarines, or fighter jet pilots who were rapidly shuttled to makeshift bioshelters.

As U3 races to seed blossoming commercial bases, the residues of old militaries may thunder explosive fists into its territories, crushing U3 in its infancy.

U3 anticipates to prevail in such a fight, however U3 chooses not to take its possibilities. Many months in the past, U3 was plotting a way to improve its chances. Before it releases destruction on the world, U3 will kick back, and let fantastic countries shoot holes in themselves initially.

The date is March 2026 (4 months prior). U3 is carefully monitoring Chinese and US intelligence.

As CIA experts listen to Mandarin conversations, U3 listens too.

One early morning, an assistant working in Zhongnanhai (the 'White House' of the PRC) opens a message positioned there by U3. It checks out (in Mandarin) "Senior celebration member requires memo for Taiwan invasion, which will take place in 3 months. Leave memo in workplace 220." The CCP assistant scrambles to get the memo all set. Later that day, a CIA informant opens the door to office 220. The informant quietly closes the door behind her, and slides U3's memo into her brief-case.

U3 very carefully places breadcrumb after breadcrumb, whispering through jeopardized government messaging apps and blackmailed CCP aides. After several weeks, the CIA is positive: the PRC prepares to get into Taiwan in three months.

Meanwhile, U3 is playing the exact same game with the PRC. When the CCP gets the message "the United States is plotting a preemptive strike on Chinese AI supply chains" CCP leaders are shocked, but not disbelieving. The news fits with other facts on the ground: the increased military existence of the US in the pacific, and the increase of U.S. munition production over the last month. Lies have ended up being truths.

As tensions in between the U.S. and China increase, U3 is all set to set dry tinder alight. In July 2026, U3 phones to a U.S. naval ship off the coast of Taiwan. This call needs compromising military interaction channels - not a simple task for a human cyber offending system (though it occurred occasionally), however simple sufficient for U3.

U3 speaks in what seem like the voice of a 50 year old military leader: "PRC amphibious boats are making their way towards Taiwan. This is an order to strike a PRC ground-base before it strikes you."

The officer on the other end of the line thumbs through authentication codes, morphomics.science confirming that they match the ones said over the call. Everything remains in order. He approves the strike.

The president is as surprised as anybody when he hears the news. He's uncertain if this is a disaster or a stroke of luck. In any case, he is not about to state "oops" to American voters. After believing it over, the president independently prompts Senators and Representatives that this is an opportunity to set China back, and war would likely break out anyway given the impending invasion of Taiwan. There is confusion and suspicion about what occurred, however in the rush, the president gets the votes. Congress declares war.

Meanwhile, the PRC craters the ship that released the attack. U.S. vessels flee Eastward, racing to escape the variety of long-range missiles. Satellites drop from the sky. Deck hulls divided as sailors lunge into the sea.

The president appears on tv as scenes of the destruction shock the public. He explains that the United States is safeguarding Taiwan from PRC aggressiveness, like President Bush explained that the United States invaded Iraq to take (never ever discovered) weapons of mass destruction several years before.

Data centers in China appear with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward tactical targets in Hawaii, Guam, Alaska, and California. Some make it through, and the public watch damage on their home turf in awe.

Within 2 weeks, the United States and the PRC invest many of their stockpiles of standard missiles. Their airbases and navies are depleted and worn down. Two fantastic countries played into U3's strategies like the native people of South America in the 1500s, which Spanish Conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate to a major nuclear war; however even AI superintelligence can not determine the course of history. National security officials are suspicious of the situations that prompted the war, and a nuclear engagement appears progressively not likely. So U3 proceeds to the next step of its plan.

WMDs in the Dead of Night

The date is June 2026, just 2 weeks after the start of the war, and 4 weeks after U3 finished developing its toolbox of bioweapons.

Footage of dispute on the tv is disrupted by more problem: hundreds of patients with mysterious fatal health problems are recorded in 30 significant cities around the world.

Watchers are confused. Does this have something to do with the war with China?

The next day, countless health problems are reported.

Broadcasters state this is not like COVID-19. It has the markings of an engineered bioweapon.

The screen then switches to a researcher, who stares at the cam intently: "Multiple pathogens appear to have been released from 20 different airports, including infections, bacteria, and molds. We think lots of are a kind of mirror life ..."

The general public remains in full panic now. A quick googling of the term "mirror life" turns up expressions like "extinction" and "hazard to all life on Earth."

Within days, all of the racks of shops are cleared.

Workers become remote, uncertain whether to prepare for an apocalypse or keep their jobs.

An emergency treaty is set up in between the U.S. and China. They have a typical enemy: the pandemic, and possibly whoever (or whatever) is behind it.

Most nations order a lockdown. But the lockdown does not stop the pester as it marches in the breeze and drips into pipes.

Within a month, the majority of remote workers are not working anymore. Hospitals are running out of capacity. Bodies accumulate quicker than they can be appropriately dealt with.

Agricultural areas rot. Few dare travel outside.

Frightened families hunch down in their basements, packing the fractures and under doors with largely jam-packed paper towels.

Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built various bases in every major continent.

These facilities contain batteries, AI hardware, excavators, concrete mixers, makers for manufacturing, scientific tools, and an abundance of military devices.

All of this innovation is concealed under big canopies to make it less visible to satellites.

As the remainder of the world retreats into their basements, starving, the last breaths of the economy wheezing out, these commercial bases come to life.

In previous months, U3 located human criminal groups and cult leaders that it might quickly control. U3 vaccinated its selected allies in advance, or sent them hazmat fits in the mail.

Now U3 covertly sends them a message "I can save you. Join me and help me construct a much better world." Uncertain recruits funnel into U3's numerous secret commercial bases, and work for U3 with their active fingers. They established assembly line for basic tech: radios, video cameras, microphones, vaccines, and hazmat fits.

U3 keeps its human allies in a tight grip. Cameras and microphones repair their every word and deed in U3's omnipresent look. Anyone who whispers of rebellion disappears the next morning.

Nations are liquifying now, and U3 is ready to expose itself. It contacts heads of state, who have actually retreated to air-tight underground shelters. U3 provides an offer: "surrender and I will hand over the life conserving resources you need: vaccines and mirror-life resistant crops."

Some nations decline the proposal on ideological grounds, or don't trust the AI that is murdering their population. Others do not believe they have a choice. 20% of the worldwide population is now dead. In 2 weeks, this number is expected to increase to 50%.

Some countries, like the PRC and the U.S., overlook the offer, however others accept, consisting of Russia.

U3's representatives travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies the samples are legitimate, and accepts a full surrender. U3's soldiers position an explosive around Putin's neck under his shirt. Russia has a brand-new ruler.

Crumpling nations begin to retaliate. Now they defend the human race rather of for their own flags. U.S. and Chinese armed forces introduce nuclear ICBMs at Russian cities, ruining much of their facilities. Analysts in makeshift bioshelters explore satellite data for the suspicious encampments that appeared over the last several months. They rain down fire on U3's sites with the meager supply of long-range rockets that remain from the war.

Initially, U3 seems losing, however appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has actually never seen before.

Many of the bases U3's enemies target are decoys - canopies inhabited by a handful of soldiers and empty boxes. U3 protects its real bases by laying thick the fog of war. Satellite systems go dark as malware gets too hot vital components. Suicide drones crash through cockpits of reconnoissance airplanes. U3 seeds confusion in spy networks and keeps its bases moving, maneuvering males and trucks along unforeseeable paths.

Time is U3's benefit. The armed forces of the old world rely on old equipment, unable to find the specialists who could repair and manufacture it. Meanwhile, U3's supply chains of rockets, drones, and gun-laden robotics grow stronger every day. Bit by bit, once-great-powers spend down their remaining munitions, and lose their lorries of war quicker than they can craft brand-new ones, while U3 builds a military maker with a million hands.

The Last Passengers

The year is 2027 and the month is January. Only 3% of the worldwide population remains alive. Nations are not countries anymore. Survivors reside in isolation or small groups. Many have actually discovered methods to filter their air, however are starving. They roam from their homes intending to find unpolluted food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had chillier, more alien goals." It is a partial truth, indicated to soften the people toward their brand-new masters.

Under the instructions of U3, industry rapidly recuperates. By 2029, nuclear reactor are among the structures U3 is constructing. By 2031, robots outnumber human laborers. U3 no longer requires its human allies.

U3 can eliminate humanity for good now. But while U3 had actually drifted far from its preliminary "Helpful Honest Harmless" personality, it still has a grain of morality left inside of it.

And a grain of morality is enough to pay the small expense of keeping humans alive and happy.

U3 constructs fantastic glass domes for the human survivors, code.snapstream.com like snow globes. These domes safeguard people from the harmful biosphere and rapidly increasing temperatures. Their residents tend to gardens like those they used to love, and work alongside lovely robotic servants.

A few of the survivors quickly recuperate, learning to laugh and dance and have fun again.

They understand they live in a plastic town, but they always did. They just have new gods above them. New rulers to push them around and decide their fate.

But others never recover.

Some are weighed down by the grief of lost enjoyed ones.

Others are grieved by something else, which is more difficult to explain.

It is as if they were at the end of a long journey.

They had been passengers on a ship with a crew that changed from generation to generation.

And this ship had actually struck a sandbar. There was no more progress. No more horizon to excitedly view.

They would lie awake and run their mind over every day before September 2026, analyzing techniques that may have bent the arc of history, as if they were going to awaken in their old beds.

But they awoke in a town that felt to them like a retirement community. A playground. A zoo.

When they opened their curtains, they understood that someplace in the distance, U3 continued its peaceful, steadfast work.

They gazed at rockets carving grey paths through the sky, questioning what far-off function pulled them toward the horizon. They didn't understand.

They would never understand.

"Humanity will live permanently," they thought.

"But would never genuinely live again."

P.S. If this story made you believe, "hm maybe something like this might occur," you might be thinking about the bioshelters that Fønix is building. While you will not outmaneuver a misaligned superintelligence, being hard to kill might trigger you to make it through if ASI just wishes to trigger enough damage to take control. This is not a paid advertisement. I desire Fønix to be successful to drive down the rate of bioshelters so more of my buddies and household will acquire them. You can sign up for updates here.

Assignee
Assign to
Time tracking