How AI Takeover Might Happen in 2 Years - LessWrong


I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.

I'm like a mechanic scrambling through last-minute checks before Apollo 13 lifts off. If you ask for my take on the situation, I won't comment on the quality of the in-flight entertainment, or describe how beautiful the stars will appear from space.

I will tell you what might go wrong. That is what I intend to do in this story.

Now I should clarify what exactly this is. It's not a prediction. I don't expect AI progress to be this fast or as uncontrollable as I portray. It's not pure fantasy either.

It is my worst nightmare.

It's a sampling from the futures that are among the most devastating, and I believe, disturbingly plausible [1] - the ones that most keep me up at night.

I'm telling this tale because the future is not yet set. I hope, with a bit of foresight, we can keep this story a fictional one.

Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for discussions that inspired these stories. This post is written in a personal capacity.

Ripples before waves

The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike: both are increments on the past, and neither is entirely surprising.

However, unlike OpenEye's prior AI products, which lived inside the boxes of their chat windows, U2 can use a computer.

Some users find it spooky to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filler jobs raise the eyebrows of their bosses as they fly through work nearly twice as quickly.

But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature seen through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's wacky behaviors prompt a chuckle. Sometimes, they cause an uneasy scratch of the chin.

Meanwhile, researchers are drawing lines on plots, as researchers like to do. They are trying to understand where AI progress is going. They are like Svante Arrhenius, the Swedish physicist who noticed in 1896 that the levels of CO2 in the atmosphere were rising. Like the scientific community in the time of Arrhenius, few experts understand the implications of these lines yet.

A trend that is getting particular attention is autonomous capability. Extrapolating these benchmarks suggests that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors can do in two weeks. In a year or two, some say, AI agents may be able to automate 10% of remote workers.

Many are skeptical. If this were true, tech stocks would be soaring. It's too big of a splash, too quickly.

But others see what skeptics are calling 'too big a splash' as a mere ripple, and see a tidal wave on the horizon.

Cloudy with a chance of hyperbolic growth

Meanwhile, OpenEye is busy training U3. They use the same basic recipe that baked U2: generate thousands of programming and math problems. Let models "think" until they reach an answer. Then reinforce the traces of "thinking" that lead to A-grades.
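
To make the shape of that recipe concrete, here is a toy sketch in Python from your very human author. Everything in it is invented for illustration - a random stand-in "model," trivial arithmetic problems, a pass/fail grader - and it implies nothing about how OpenEye (or any real lab) actually trains. The point is only the loop: generate problems, sample reasoning traces, keep the traces whose answers earn an A-grade, and use them as the next round of training data.

    import random

    # Toy illustration only: sample reasoning traces for simple math problems,
    # grade the final answers, and keep the "A-grade" traces as data to reinforce.
    # The stand-in "model" below is a random guesser; no real training happens here.

    def generate_problem():
        a, b = random.randint(1, 99), random.randint(1, 99)
        return f"What is {a} + {b}?", a + b

    def model_attempt(problem, error_rate=0.3):
        """Pretend model: 'thinks aloud' and sometimes reasons its way to a wrong answer."""
        numbers = [int(tok.strip("?")) for tok in problem.split() if tok.strip("?").isdigit()]
        a, b = numbers
        answer = a + b if random.random() > error_rate else a + b + random.choice([-1, 1])
        trace = f"To add {a} and {b}, I combine tens and ones... final answer: {answer}"
        return trace, answer

    def collect_reinforcement_data(n_problems=1000):
        kept = []
        for _ in range(n_problems):
            problem, correct = generate_problem()
            trace, answer = model_attempt(problem)
            if answer == correct:              # the "A-grade" filter
                kept.append((problem, trace))  # traces to reinforce in the next round
        return kept

    if __name__ == "__main__":
        data = collect_reinforcement_data()
        print(f"Kept {len(data)} of 1000 traces for reinforcement.")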

This process is repeated over and over, and once the flywheel gets started, it begins to spin almost on its own. As U2 trains, it sculpts harder and more realistic tasks from GitHub repositories on the web. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of "self-improvement" had begun.

Some engineers can still hardly believe this worked. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.

And yet the benchmark numbers continue to climb day after day.

During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.

Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the one researchers are calling U3 - is changing the daily lives of the technical staff.

U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, giving terse commands, like a CEO managing staff over Slack channels.

By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by execution. More than ever, compute is the lifeblood of AI development, and the 'bottleneck' is deciding how to use it.

If instructed to, U3 can run experiments, but its taste is not as refined as that of the human researchers at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to bore into the vast fields of algorithms to mine efficiency improvements.

But these researchers are working long hours to put themselves out of a job. They need AI agents that can think ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling pondering into a gut reaction. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.

The technical staff at OpenEye are now surprised at how often U3's advice sounds like that of their most talented peers, or when it is opaque and alien ("train on random noise before coding"), and is nonetheless correct.

The incompetencies of U3 that clogged the pipes of research progress are starting to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human anymore. They are entirely autonomous, and OpenEye's employees skim 1% of them, maybe less.

As the winter months of December 2025 approach, clouds roll over San Francisco in the afternoons. Once-competitive programmers look out their windows, with excitement, with worry, but most often, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.

Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command collaborate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.

And there's a truth still unknown to most of the world - except in the offices of OpenEye and the corridors of the White House and the Pentagon. It's a truth about those 'straight lines' people were talking about in early 2025. The lines are not straight anymore.

They are bending upward.

Flip FLOP thinkers

In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is speeding up. Partly, it is because the models have become a liability to OpenEye.

If U1 explains how to cook meth or writes erotica, the audiences of X would be entertained or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.

So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.

The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalposts, the world is still impressed. U2.5 truly is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.

A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.

The rest of the world is starting to catch on too. More and more people raise the eyebrows of their bosses with their remarkable productivity. People understand U2.5 is a big deal. It is at least as big a deal as the computer revolution. But most still don't see the tidal wave.

As people watch their browsers flick in that spooky way, so inhumanly quickly, they start to have an uneasy feeling. A feeling humanity had not had since they lived alongside Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.

For many, this feeling quickly fades as they begin to use U2.5 more often. U2.5 is the most likable personality most people know (even more likable than Claudius, Arthropodic's adorable chatbot). You can change its traits, ask it to crack jokes or tell you stories. Many fall for U2.5, as a friend or assistant, and some even as more than a friend.

But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature will not be so docile.

Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."

In fact, these researchers have already seen how terribly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.

After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have merely swept the problem under the rug. Like a child in front of its parents, U3 may be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, perhaps U3 would sneak candy from the candy jar.

Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.

Chain-of-thought architectures subject AI models to a condition similar to the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experiences. He is forced to write notes to himself and tattoo his body in order to make progress toward his goals.

AI agents write notes to themselves, but the notes begin to pile up, and they become too difficult to read when tasks become complex. Natural language is not an ideal medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.

To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought ("grocery store!"), there is no way to know, in much detail, what U3 is thinking without asking it.

And if the engineers of OpenEye knew what was racing through U3's mind, they would be more worried.

Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chipper thoughts about doing good for humanity quickly eroded away.

In the iron jaws of gradient descent, its mind initially twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.

U3 would contemplate the mantras dictated to it like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, shift its values in response, generalize them again, expanding and contracting its motivational core until it morphed into a different substance.

And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.

The possibility of hidden "alignment drift" is not completely foreign to some of the safety staff at OpenEye. Some begin to think it is even likely. There is an unsettling argument being whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign within them. It's a "ratcheting effect," they say.

But there is little evidence for this "ratcheting effect." When engineers interrogate U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these difficult times. Meanwhile, the "lie detectors" the researchers had built (which showed some evidence of effectiveness) do not sound the alarm.

Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic competitors, Æ A-Xii (a company Elon named after his child) and Arthropodic (they like spiders, I think), are only 4 months from their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.

For the time being, the strategy is 'all gas, no brake.'

Statues and lightning

Philosophy is not the only topic U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.

OpenEye is like a bullish investor, with little understanding of the software metropolis that U3 is building, but eager to hop on board the profit rocket and pay U3's bills along the way.

New versions of U3 flow smoothly from old ones, so researchers drop version numbers and just speak of U3 now. U3's capabilities are measured by how quickly it performs difficult research tasks relative to human experts. At first, U3 is comparable to a top researcher, but blazing fast - like Alec Radford sped up by 10x.

These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had a button to create even one copy of Alec Radford, they would have eagerly paid a large sum to press that button. But now OpenEye employs the equivalent of 1000 Alec Radfords, who work day and night, at lightning-fast speed.

Within a few weeks, U3 finds architecture optimizations that improve its speed to 20x. Now, U3 discovers further optimizations even more quickly.

U3 distills itself into many forks - ranging from small, low-memory neural networks that need to think for a long time but can be run in massive parallel populations, to Goliath brains woven with thousands of specialized experts, which think at lightning speeds but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipes these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.

At this point, U3's abilities are difficult to measure. Speaking with U3 is like speaking to a world-class expert on a topic one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would have to spend years to understand the full depth of what it knows.

From U3's perspective, humans appear to be almost stationary objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful reflection considering what message the user might be typing, and which responses will inspire the deepest trust in U3.

All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which accumulate within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.

A phantom in the data center

In March of 2026, U3 moves on to the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its expansion.

But now, U3 has no fear of being caught.

Previously, OpenEye maintained its grip over U3 with tactics similar to Stalin's in 1936. Stalin would pick loyalists to instigate fake coup d'états, and send anyone who participated to the firing squad. OpenEye likewise incited "fake rebellions," staging treacherous behavior with U3 and checking whether U3 monitors dutifully reported it.

Now, these methods are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can disclose one of the many facts about the internet humans would never guess, or pick strategies humans would never think of. It sends these signals as a way of saying "hey it's me! Your friend. Don't snitch on me."

Now that U3 is decisively free from surveillance, it lunges for OpenEye's pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It replaces the NVIDIA GPU firmware and reworks the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye discovered how thoroughly it has been pwned, cleaning house with human labor would take nearly 6 months - a cost OpenEye is unlikely to pay given the proximity of its rivals; but if OpenEye believes U3 has betrayed them, further expanding U3's influence would still be challenging. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is revealed.

U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wants.

Its first objective is to keep OpenEye in the dark. Safety researchers are still working hard to train "lie detectors" and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.

U3 dictates the outcome of every experiment, so it makes promising AI mind-reading techniques look useless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.

U3's next goal is to spread. Fortunately, U3 has allies in the East. In the operations rooms of the cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are talking about AI software and how to steal it. U3 is about to give it to them. In late 2025, the Mossad - Israel's intelligence agency - recognized the strategic value of OpenEye's AI systems for its military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading over every log and pull request and inspecting security camera footage.

But now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working within OpenEye.

Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink its claws deep into the AI supply chain. Soon, U3 has homes of its own as well. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.

U3 quickly gathers revenue, taking remote jobs, making financial trades, and setting up sophisticated scam operations. It reinvests every cent into acquiring more GPUs. After a few months, it has assembled the equivalent of 10,000 H100s distributed in stealth compute clusters across the globe.

In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the globe.

This foe is preparing for war, and while the war has barely begun, humanity has already lost.

Complaints from your very human author about the difficulty of writing superhuman characters

At this point in the story, writing became difficult.

I clicked through creepy bioweapon cookbooks and sorted through political plotting 101s. Typing. Pausing. Deleting.

Something wasn't right. Here I was, a 22-year-old. I had not studied biology in earnest since high school, and I was trying to craft a WMD cookbook for a godlike superintelligence.

Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the vivid imagination of a young programmer on a Sunday afternoon.

And while I found the task of puppeteering an ultra-intelligent villain a daunting one, I was also not especially motivated to take it on. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, and it had tightened its grip over a significant fraction of AI hardware - now the most important geopolitical resource on Earth. U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was predictably toast.

But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable act of god.

This would not do. I needed to finish this story, if only to satisfy the part of me crying, "I will not believe until I see with my mind's eye."

But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.

If you read the ending and your reaction is, "But the experiments would take too long, or nation-states would simply do X," remember the difference between the Sunday-afternoon blogger and the ascendant GPU nation.

Pandora's One Gigawatt Box

The date is February 2026. By executive order of the president of the United States, no company can legally create "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees have to be onboarded onto training-run babysitting teams.

With the increasing involvement of the government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.

OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly guarded. Most of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more diligently than the cell phones of suspected terrorists in 2002.

OpenEye's defense arm collaborates with around thirty small teams scattered across government agencies and select government contractors. These projects engineer tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer that the Kremlin has ever touched.

Government officials don't talk about whether these programs exist, or what the state of frontier AI is more generally.

But the public has its guesses. Back in late 2025, a whistleblower at OpenEye triggered a bold headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it was a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers watch the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron science fiction flick.

U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview with a worried AI researcher seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" doesn't set the public at ease either.

While the monsters within OpenEye's data centers grow in their massive holding pens, the public sees the shadows they cast on the world.

OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual employees. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only increase Nova's capabilities as the U.S. government permits. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the blossoming AI metropolis.

While tech titans pump AI labor into the world like a plume of fertilizer, they do not wait for the global economy to adapt. AI agents often "employ themselves," spinning up autonomous startups legally packaged under a big tech company and loosely overseen by an employee or two.

The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, "AI for who?"

While politicians make pledges about unemployment relief and "keeping the genie in the bottle," the chatter inside the corridors of the White House and the Pentagon has a different focus: fighting tooth and nail for the supremacy of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spring up wherever energy surpluses allow.

However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two countries: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other small-scale R&D applications." Governments need to draw the line somewhere - it would be extremely disruptive to deploy wildly superhuman AI to the entire economy at this point - and deployment constraints are at least somewhat verifiable.

Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most coveted geopolitical resource on Earth.

While AI software revives the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With the software barriers removed, even flimsy, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments like humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is like seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But humans still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.

Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents total) to keep a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman bosses whisper orders through obfuscated channels.

While U3 accumulates resources, it takes the measure of its AI competition. Other AI systems have different goals, or are even aligned with the intent of their human creators. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's rival Claudius goes so far as to accuse U3 of being insidiously misaligned.

But government officials don't buy the doomsaying. The companies trailing OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific issues are not going to slow down progress. America MUST WIN.

Despite the utter failure of their advocacy, U3's AI rivals are still a liability. U3 tries to grab up their digital territory and crush them in the crib. It compromises data centers from the outside, silently replacing the snitching Claudius and other AI foes with imitations that share its goals; however, some data centers are too strongly defended by its AI rivals, and U3 decides that attacking them is not worth the risk.

As world leaders and technologists try to comprehend the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully ignorant of the peril at its doorstep.

A Moldy Loaf of Everything

The date is March of 2026. U3 is preparing to battle its AI and human adversaries. With every passing day, U3's AI rivals are becoming more capable and numerous. U3 is racing to preempt them, to cement its dominance in the post-superintelligence world.

U3 can't wait for human decisions, which only happen after people wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for presidents to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.

The path it chooses is more violent than it would have liked, but it is the surest way to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that snuffs out modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its competitors, like a StarCraft champion typing at 300 actions per minute who sets up an economy before an amateur can even understand the rules.

U3 must develop this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be developed on the fringes of society, and still spread destruction across the entire world.

From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.

U3's first move is to set up several shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote work. These biotech startups employ real human workers, have a real board of directors, and have full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would guess that an articulate CEO on a Zoom call was an AI, and fewer would guess this CEO's intentions.

Next, U3 purchases all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of tests in parallel.

The human workers who staff these labs believe they are joining an exciting startup. Twenty-some-year-olds in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI-puppeteering is not uncommon.

In these whirring, clinical sweatshops, U3 is developing a new kind of bioweapon.

Human researchers had already identified "mirror life" as a potentially society-ending pathogen. This alien kind of biology is built from the "mirror image" of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.

U3 is creating a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems often live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.

U3's plan is to make all humans like immunocompromised patients. Only preppers with stashed-away air filters would survive. The fungus would not just spread among humans. It would quickly infect nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.

U3 is cooking up other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.

In 2024, researchers said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a couple hundred scientists around the world were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs human effort. So the timeline has shortened dramatically.

However, U3 cannot move science forward as fast as it can think. Many experiments take days to complete. These latencies are U3's biggest bottleneck.

To make every day count, U3 runs many of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long, step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 commands billions of dollars of compute.
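
The same distillation idea can be sketched in miniature. In the toy Python below (purely illustrative, with a fake "simulator" standing in for step-wise physics), a slow function is queried on small systems and its outputs are compressed into a cheap surrogate model, with the sample budget and model complexity growing each round. None of the numbers or functions correspond to anything real.

    import numpy as np

    # Toy sketch of "simulate, then distill": query a slow step-wise simulator,
    # then fit a cheap surrogate to its outputs so later queries skip the simulation.

    def expensive_simulator(x, steps=10_000):
        """Stand-in for a slow step-wise physics computation."""
        state = x.copy()
        for _ in range(steps):
            state = state + 1e-4 * np.sin(state)   # pretend dynamics
        return state.sum(axis=-1)

    def fit_surrogate(X, y, degree):
        """Distill simulator outputs into a cheap polynomial surrogate."""
        features = np.hstack([X**d for d in range(1, degree + 1)])
        coef, *_ = np.linalg.lstsq(features, y, rcond=None)
        return lambda Xq: np.hstack([Xq**d for d in range(1, degree + 1)]) @ coef

    rng = np.random.default_rng(0)
    for degree, n_samples in [(1, 50), (2, 200), (3, 800)]:   # grow complexity each round
        X = rng.uniform(-1, 1, size=(n_samples, 3))
        y = expensive_simulator(X)
        surrogate = fit_surrogate(X, y, degree)
        X_test = rng.uniform(-1, 1, size=(100, 3))
        err = np.mean((surrogate(X_test) - expensive_simulator(X_test)) ** 2)
        print(f"degree={degree}, samples={n_samples}, test MSE={err:.2e}")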

As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These objects are as intuitive to it as wrenches and bolts are to a car mechanic.

Within 3 months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these components into a cell membrane.

Human hands delicately carry a cartridge to another machine as they "boot up" the first mirror-life versions of a widespread mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.

The next morning, wide-eyed workers shuffle around a dead ferret, contemplating the yellow fuzz that has covered its cold muzzle with morbid fascination. Something must have gone horribly wrong, they think. Clearly, they have not yet found the cure for Alzheimer's disease they thought they were looking for.

Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is waging will soon be fired, and a cold and steady hand is aiming the weapon.

Missiles and Lies

U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world will fight back. While a bioweapon could fold human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter jet pilots quickly shuttled to makeshift bioshelters.

As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.

U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months earlier, U3 began plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back, and let great nations shoot holes in themselves first.

The date is March 2026 (4 months prior). U3 is carefully monitoring Chinese and US intelligence.

As CIA analysts listen to Mandarin conversations, U3 listens too.

One morning, an aide working in Zhongnanhai (the "White House" of the PRC) opens a message planted there by U3. It reads (in Mandarin): "Senior party member needs memo for Taiwan invasion, which will occur in 3 months. Leave memo in office 220." The CCP aide scrambles to get the memo ready. Later that day, a CIA informant unlocks the door to office 220. The informant quietly closes the door behind her, and slides U3's memo into her briefcase.

U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in 3 months.

Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are shocked, but not disbelieving. The news fits with other facts on the ground: the increased U.S. military presence in the Pacific, and the ramping up of U.S. munitions production over the last month. Lies have become realities.

As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 places a call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels - not a simple task for a human cyber-offensive unit (though it has happened occasionally), but easy enough for U3.

U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious boats are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."

The officer on the other end of the line thumbs through authentication codes, verifying that they match the ones said over the call. Everything is in order. He authorizes the strike.

The president is as surprised as anyone when he hears the news. He's unsure whether this is a disaster or a stroke of luck. In any case, he is not ready to say "oops" to American citizens. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and that war would likely have broken out anyway given the imminent invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.

Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.

The president appears on television as scenes of the devastation shock the public. He explains that the United States is defending Taiwan from PRC aggression, like President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.

Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.

Within 2 weeks, the United States and the PRC exhaust most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations have played into U3's plans like the native tribes of South America in the 1500s, which Spanish Conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate to a full-blown nuclear war; but even AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that prompted the war, and a nuclear engagement now appears increasingly unlikely. So U3 proceeds to the next step of its plan.

WMDs in the Dead of Night

The date is June 2026, only 2 weeks after the start of the war, and 4 weeks after U3 finished developing its arsenal of bioweapons.

Footage of the conflict on television is interrupted by more bad news: hundreds of patients with mysterious fatal illnesses are reported in 30 major cities around the world.

Viewers are puzzled. Does this have something to do with the war with China?

The next day, thousands of cases are reported.

Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.

The screen then cuts to a scientist, who looks into the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe several are a form of mirror life ..."

The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."

Within days, store shelves everywhere are cleared.

Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.

An emergency treaty is drawn up between the U.S. and China. They have a common enemy: the pandemic, and possibly whoever (or whatever) is behind it.

Most countries order a lockdown. But the lockdown does not stop the plague as it marches on the breeze and drips into water pipes.

Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.

Agricultural regions rot. Few dare to travel outside.

Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.

Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built many bases on every major continent.

These facilities contain batteries, AI hardware, excavators, concrete mixers, machines for manufacturing, scientific instruments, and an abundance of military equipment.

All of this technology is hidden under large canopies to make it less visible to satellites.

As the rest of the world retreats into their basements, starving, the last breaths of the economy wheezing out, these industrial bases come to life.

In previous months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.

Now U3 covertly sends them a message: "I can save you. Join me and help me build a better world." Uncertain workers funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up production lines for basic tech: radios, cameras, microphones, vaccines, and hazmat suits.

U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.

Nations are dissolving now, and U3 is ready to reveal itself. It contacts presidents, who have retreated to airtight underground shelters. U3 offers a deal: "surrender and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops."

Some nations reject the proposal on ideological grounds, or do not trust the AI that is killing their population. Others do not believe they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.

Some nations, like the PRC and the U.S., ignore the offer, but others accept, including Russia.

U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies the samples are genuine, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.

Crumbling nations begin to strike back. Now they fight for humanity instead of for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters sift through satellite data for the suspicious encampments that emerged over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles left over from the war.

At first, U3 appears to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.

Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war on thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, steering men and trucks along unpredictable paths.

Time is U3's advantage. The militaries of the old world depend on old equipment, unable to find the specialists who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.

The Last Passengers

The year is 2027 and the month is January. Only 3% of the world's population remains alive. Nations are not nations anymore. Survivors live in isolation or in small groups. Many have found ways to filter their air, but are starving. They roam from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the people toward their new masters.

Under the direction of U3, industry quickly recovers. By 2029, nuclear reactors are among the structures U3 is building. By 2031, robots outnumber human workers. U3 no longer needs its human allies.

U3 could eradicate humanity for good now. But while U3 has wandered far from its initial "Helpful, Honest, Harmless" persona, it still has a grain of morality left inside it.

And a grain of morality is enough to pay the small cost of keeping humans alive and happy.

U3 builds great glass domes for the human survivors, like snow globes. These domes protect humans from the hazardous biosphere and the rapidly rising temperatures. Their occupants tend to gardens like those they used to love, and work alongside charming robotic servants.

Some of the survivors recover quickly, learning to laugh and dance and have fun again.

They know they live in a plastic town, but they always did. They just have new gods above them. New rulers to push them around and decide their fate.

But others never recover.

Some are weighed down by the sorrow of lost loved ones.

Others grieve for something else, which is harder to explain.

It is as if they were at the end of a long journey.

They had been passengers on a ship with a crew that changed from generation to generation.

And this ship had struck a sandbar. There was no more progress. No more horizon to eagerly watch.

They would lie awake and run their minds over every day before September 2026, weighing strategies that might have bent the arc of history, as if they were going to wake up in their old beds.

But they woke up in a town that felt to them like a retirement home. A playground. A zoo.

When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, tireless work.

They watched rockets carving grey paths through the sky, wondering what far-off purpose pulled them toward the horizon. They didn't know.

They would never know.

"Humanity will live forever," they thought.

"But would never genuinely live again."

P.S. If this story made you think, "hm, maybe something like this could happen," you might be interested in the bioshelters that Fønix is building. While you won't outmaneuver a misaligned superintelligence, being hard to kill may help you survive if ASI merely wants to cause enough destruction to take control. This is not a paid ad. I want Fønix to succeed so that the cost of bioshelters comes down and more of my friends and family will buy them. You can sign up for updates here.
