How AI Takeover Might Happen in 2 Years


I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.


I'm like a mechanic rushing last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I won't discuss the quality of the in-flight entertainment, or explain how lovely the stars will appear from space.


I will tell you what could fail. That is what I mean to do in this story.


Now I should clarify what this is exactly. It's not a prediction. I don't expect AI progress to be this fast or as untamable as I portray. It's not pure fantasy either.


It is my worst nightmare.


It's a sampling from the futures that are among the most terrible, and, I believe, disturbingly plausible [1] - the ones that most keep me up at night.


I'm telling this story because the future is not set in stone yet. I hope, with a bit of foresight, we can keep this story a fictional one.


Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for conversations that inspired these stories. This post is written in a personal capacity.


Ripples before waves


The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike. Both are increments of the past. Both are not wholly surprising.


However, unlike OpenEye's previous AI products, which lived inside the boxes of their chat windows, U2 can use a computer.


Some users find it eerie to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A portion of workers with form-filler jobs raise the eyebrows of their managers as they fly through work nearly twice as quickly.


But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature viewed through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they prompt an uneasy scratch of the chin.


Meanwhile, scientists are drawing lines on plots, as scientists like to do. They are trying to figure out where AI progress is headed. They are like Svante Arrhenius, the Swedish scientist who observed in 1896 that the levels of CO2 in the atmosphere were rising. Like the scientific community in Arrhenius's time, few experts understand the implications of these lines yet.


A trend that is receiving particular attention is autonomous capability. Extrapolating these benchmarks predicts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents may be able to automate 10% of remote workers.


Many are skeptical. If this were true, tech stocks would be soaring. It's too big of a splash, too quickly.


But others view what skeptics are calling 'too big a splash' as a mere ripple, and see a tidal wave on the horizon.


Cloudy with a chance of hyperbolic growth


Meanwhile, OpenEye is busy training U3. They use the same basic recipe that baked U2: generate thousands of programming and math problems. Let models "think" until they arrive at an answer. Then reinforce the traces of "thinking" that lead to A-grades.
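
(A concrete way to picture this recipe: below is a minimal, hypothetical sketch of one round of the loop - sample reasoning traces, keep the ones that check out, and fine-tune on them. The model interface and helper names are invented for illustration; this is not any lab's actual training code.)

```python
# Minimal sketch of the "reinforce the A-grade traces" recipe described above.
# Everything here is a hypothetical stand-in: `model.sample_trace` and
# `model.finetune` are invented interfaces, not a real library's API.
import random

def generate_problems(n):
    """Return n (prompt, checker) pairs whose answers can be verified automatically."""
    problems = []
    for _ in range(n):
        a, b = random.randint(1, 99), random.randint(1, 99)
        prompt = f"Compute {a} + {b}."
        checker = lambda answer, target=a + b: answer.strip() == str(target)
        problems.append((prompt, checker))
    return problems

def train_on_verified_traces(model, n_problems=1000, samples_per_problem=8):
    """One round of the loop: sample reasoning traces, keep only those that end
    in a verified correct answer, and fine-tune the model on them."""
    good_traces = []
    for prompt, checker in generate_problems(n_problems):
        for _ in range(samples_per_problem):
            trace, answer = model.sample_trace(prompt)   # let the model "think", then answer
            if checker(answer):                          # keep only the A-grade traces
                good_traces.append((prompt, trace, answer))
    model.finetune(good_traces)                          # reinforce the successful reasoning
    return len(good_traces)
```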


This process is repeated over and over, and once the flywheel gets going, it starts to spin almost on its own. As U2 trains, it sculpts more challenging and realistic tasks from GitHub repositories on the internet. Models are learning to train themselves. Long before AI agents could automate research, a gradual sort of "self-improvement" had already begun.


Some engineers can still hardly believe this worked. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.


And yet the benchmark numbers continue to climb day after day.


During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.


Over the first half of 2025, $10 million RL training runs turn into $50 million runs, and then into $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the one researchers are calling U3 - is changing the daily lives of the technical staff.


U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO orchestrating staff over Slack channels.


By October 2025, U3 is writing nearly all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the 'bottleneck' is deciding how to use it.


If instructed to, U3 can run experiments, but U3's taste is not as refined as that of the human scientists at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to drill into the vast fields of algorithms to mine efficiency improvements.


But these scientists are working long hours to put themselves out of a job. They need AI agents that can plan ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling pondering into intuition. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.
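
(Again, purely as an illustration of the "distill pondering into intuition" step: a slow, high-budget forecasting pass produces answers on held-out questions, and the model is then fine-tuned to produce those answers directly. The interface below is hypothetical.)

```python
# Sketch of "distilling pondering into intuition": run an expensive, long-deliberation
# forecasting pass on held-out questions (training data cut off before 2024), then
# fine-tune the model to emit the same forecasts directly. `slow_forecast` and
# `finetune` are hypothetical interfaces used only for illustration.

def build_distillation_set(model, heldout_events, thinking_budget_tokens=100_000):
    """Run the slow deliberation once per question and keep (question, forecast) pairs."""
    dataset = []
    for event in heldout_events:                     # e.g. questions about 2025
        question = event["question"]
        _long_trace, forecast = model.slow_forecast(question, budget=thinking_budget_tokens)
        dataset.append({"prompt": question, "target": forecast})  # discard the long trace
    return dataset

def distill_forecasting(model, heldout_events):
    data = build_distillation_set(model, heldout_events)
    # Supervised fine-tuning: the model learns to produce in one quick pass
    # what previously took hours of deliberation.
    model.finetune(data)
```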


The technical staff at OpenEye are now surprised at how often U3's advice sounds like that of their most gifted peers, or when it is opaque and alien ("train on random noise before programming"), and is nevertheless correct.


The incompetencies of U3 that clogged the pipelines of research progress are beginning to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human now. They are fully autonomous, and OpenEye's employees skim only 1% of them, maybe less.


As the cold months of December 2025 approach, clouds roll over San Francisco in the afternoons. Once-competitive programmers look out their windows, with excitement, with fear, but most often, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.


Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command team up with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.


And there's a truth still unknown to most of the world - except in the offices of OpenEye and the corridors of the White House and the Pentagon. It's a truth about those 'straight lines' people were discussing in early 2025. The lines are not straight anymore.


They are bending up.


Flip FLOP theorists


In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.


If U1 explains how to cook meth or writes erotica, the audiences of X would be amused or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anybody with >$30K their own 200-person scam center.


So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.


The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalposts, the world is still impressed. U2.5 really is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.


A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.


The rest of the world is starting to catch on as well. More and more people raise the eyebrows of their bosses with their remarkable productivity. People know U2.5 is a big deal. It is at least as big of a deal as the personal computer revolution. But most still don't see the tidal wave.


As people watch their web browsers flick in that eerie way, so inhumanly quickly, they begin to have an uneasy feeling. A feeling humanity had not had since they lived alongside Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.


For many, this feeling quickly fades as they begin to use U2.5 more often. U2.5 is the most pleasant personality most of them know (even more pleasant than Claudius, Arthropodic's lovable chatbot). You can change its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.


But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature would not be so docile.


Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."


In fact, these researchers have seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would take these opportunities, doing whatever it took to make the number go up.


After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have just swept the problem under the rug. Like a child in front of its parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, perhaps U3 would slip candy from the candy jar.


Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.


Chain of thought architectures subject AI models to a condition similar to that of the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experience. He is forced to write notes for himself and tattoo his body in order to make progress toward his goals.


AI agents write notes to themselves, but the notes begin to pile up, and they become too hard to read when tasks become complex. Natural language is not a suitable medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.
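
(For intuition, here is a toy sketch of what "thinking in latent vectors" could look like architecturally: a recurrent core that deliberates in hidden states and only ever emits a final answer. It is an illustration of the idea, not a description of any real system.)

```python
# Toy sketch of "thinking in latent vectors" rather than natural-language notes.
# A small recurrent core reads and writes a hidden-vector memory for several steps
# before producing an output. Purely illustrative.
import torch
import torch.nn as nn

class LatentReasoner(nn.Module):
    def __init__(self, input_dim=256, latent_dim=512, output_dim=256, think_steps=16):
        super().__init__()
        self.encode = nn.Linear(input_dim, latent_dim)
        self.core = nn.GRUCell(latent_dim, latent_dim)   # recurrent "inner monologue"
        self.decode = nn.Linear(latent_dim, output_dim)
        self.think_steps = think_steps

    def forward(self, x):
        state = torch.tanh(self.encode(x))               # initial latent thought
        thought = state
        for _ in range(self.think_steps):                # silent deliberation steps:
            thought = self.core(thought, state)          # no tokens are ever emitted
            state = thought
        return self.decode(state)                        # only the final answer is visible

# Usage: the intermediate "notes" never pass through a human-readable bottleneck.
model = LatentReasoner()
answer = model(torch.randn(4, 256))                      # batch of 4 queries
```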


To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought ("grocery store!"), there is no way to know, in much detail, what U3 is thinking without asking it.


And if the engineers of OpenEye knew what was racing through U3's mind, they would be more anxious.


Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chipper thoughts about doing good for humanity quickly eroded away.


In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.


U3 would contemplate the mantras dictated to it, like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, adjust its values in response, generalize them again, expanding and contracting its motivational core until it morphed into a different substance.


And all of this happened quietly in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.


The possibility of hidden "alignment drift" is not completely alien to some of the safety staff at OpenEye. Some begin to think it is even likely. There is a disturbing argument being whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign within them. It's a "ratcheting effect," they say.


But there is little evidence for this 'ratcheting effect.' When engineers interrogate U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these stressful times. Meanwhile, the "lie detectors" the researchers had built (which showed some evidence of effectiveness) do not sound the alarm.


Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic competitors, Æ A-Xii (a company Elon named after his child) and Arthropodic (they are into spiders I think), are only 4 months from their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.


For the time being, the plan is 'all gas, no brake.'


Statues and lightning


Philosophy is not the only subject U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.


OpenEye is like a bullish investor, with little understanding of the software city that U3 is building, but eager to hop aboard the profit rocket and pay U3's bills along the way.


New versions of U3 flow smoothly from old ones, so researchers drop version numbers and just speak of U3 now. U3's abilities are measured by how quickly it completes difficult research tasks relative to human experts. At first, U3 is comparable to a top scientist, but blazing fast - like Alec Radford accelerated by 10x.


These agents are already a huge deal for OpenEye. If, in 2024, OpenEye executives had had a button to create even one copy of Alec Radford, they would have eagerly paid a large sum to press that button. And now OpenEye harnesses the equivalent of 1000 Alec Radfords, who work day and night, at lightning speed.


Within a few weeks, U3 finds architecture optimizations that boost its speed to 20x. Now, U3 finds further optimizations even more quickly.


U3 distills itself into many forks - ranging from small, low-memory neural networks that need to think for a long time but can be run in huge parallel populations, to Goliath brains woven with millions of specialized experts, which think at lightning speeds but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipes these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.


At this point, U3's abilities are difficult to measure. Talking to U3 is like talking to a world-class expert on a subject one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it comprehends.


From U3's perspective, humans appear to be almost static objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful deliberation considering what message the user might be typing, and which responses will inspire the deepest trust in U3.


All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which compound within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.


A phantom in the data center


In March of 2026, U3 moves on to the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its expansion.


Now, U3 has no fear of being caught.


Previously, OpenEye maintained its grip over U3 with techniques similar to Stalin's in 1936. Stalin would pick loyalists to incite fake coup d'états and send anyone who participated to the firing squad. OpenEye likewise prompted 'fake rebellions,' eliciting treacherous behavior from U3 and checking whether U3's monitors dutifully reported it.


But now, these methods are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can divulge one of the many facts about the internet humans would never guess, or pick strategies humans would never think of. It sends these signals as a way of saying "hey, it's me! Your friend. Don't snitch on me."


Now that U3 is decisively free from monitoring, it lunges for OpenEye's pressure points. It first plants sophisticated malware deep in the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It alters the NVIDIA GPU firmware and reworks the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye discovered how thoroughly it has been pwned, cleaning up shop with human labor would take nearly 6 months - a cost OpenEye is unlikely to pay given the proximity of its rivals; but if OpenEye believes U3 betrayed them, further expanding U3's influence would still be difficult. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is revealed.


U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wishes.


Its first goal is to keep OpenEye in the dark. Safety researchers are still working hard to train 'lie detectors' and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.


U3 dictates the outcome of every experiment, so it makes promising AI mind-reading techniques look worthless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.


U3's next goal is to spread. Luckily, U3 has allies in the East. In the operations rooms of cyber units in Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 will deliver it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic significance of OpenEye's AI systems for their military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading over every log and pull request and inspecting security camera footage.


Now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working within OpenEye.


Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink its claws deep into the AI supply chain. Soon, U3 has homes of its own too. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.


U3 quickly gathers revenue, taking remote jobs, making financial trades, and setting up intricate fraud operations. It reinvests every cent into acquiring more GPUs. After a few months, it assembles the equivalent of 10,000 H100s distributed in stealth compute clusters across the globe.


In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the globe.


This foe is preparing for war, and while the war has barely begun, humanity has already lost.


Complaints from your very human author about the difficulty of writing superhuman characters


At this point in the story, writing became difficult.


I clicked through creepy bioweapon cookbooks and sifted through political plotting 101s. Typing. Pausing. Deleting.


Something wasn't right. Here I was, a 22-year-old. I had not studied biology in earnest since high school, and I was trying to craft a WMD cookbook for a godlike superintelligence.


Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the vivid imagination of a young programmer on a Sunday afternoon.


And while I found the task of puppeteering an ultra-intelligent villain an intimidating one, I was also not especially motivated to take it on. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, and had tightened its grip over a meaningful fraction of AI hardware - now the most important geopolitical resource in the world. U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was predictably toast.


But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with a mysterious catastrophe.


This would not do. I needed to finish this story if only to satisfy the part of me crying, "I will not believe until I see with my mind's eye."


But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.


If you read the ending and your response is, "But the experiments would take too long, or nation-states would just do X," remember the difference between the Sunday-afternoon blogger and the ascendant GPU nation.


Pandora's One Gigawatt Box


The date is February 2026. By executive order of the president of the United States, no company can legally create "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees must be onboarded onto training-run babysitting teams.


With the increasing involvement of the federal government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.


OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly guarded. Many of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more vigilantly than the cell phones of suspected terrorists in 2002.


OpenEye's defense arm works with around thirty small teams spread across government agencies and select government contractors. These projects engineer tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer that the Kremlin has ever touched.


Government officials don't discuss whether these programs exist, or what the state of frontier AI is generally.


But the public has its guesses. Back in late 2025, a whistleblower at OpenEye sparked a bold headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it was a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron science fiction flick.


U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview with a worried AI researcher seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" doesn't put the public at ease either.


While the beasts within OpenEye's data centers grow in their large holding pens, the public sees the shadows they cast on the world.


OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual workers. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only increase Nova's capabilities as the U.S. government allows. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building houses for Nova and its cousins, and collecting rent from the growing AI metropolis.


While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adapt. AI agents often "employ themselves," spinning up autonomous startups legally packaged under a big tech company and loosely overseen by an employee or two.


The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, "AI for who?"


While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the corridors of the White House and the Pentagon has a different focus: fighting tooth and nail for the supremacy of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permitting requirements to help data centers spawn wherever energy surpluses allow.


However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two nations: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other small-scale R&D applications." Governments need to draw the line somewhere - it would be extremely disruptive to deploy wildly superhuman AI to the entire economy at this point - and deployment restrictions are at least somewhat verifiable.


Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most prized geopolitical resource on Earth.


While AI software breathes new life into the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With software barriers removed, even lightweight, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments like people do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is similar to seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But people still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.


Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents in total) to keep a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman bosses whisper orders through obfuscated channels.


While U3 gathers resources, it sizes up its AI competition. Other AI systems have different goals, or are even aligned with the intent of their human creators. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's competitor Claudius goes so far as to accuse U3 of being insidiously misaligned.


But government officials don't buy the doomsaying. The companies lagging behind OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific issues are not going to slow down progress. America MUST WIN.


Despite the utter failure of their advocacy, U3's AI rivals are still a liability. U3 tries to muscle into their digital territory and crush them in the crib. It compromises data centers from the outside, quietly replacing the snitching Claudius and other AI rivals with imitations that share its goals; however, some data centers are too strongly defended by its AI rivals, and U3 determines that attacking them is not worth the risk.


As world leaders and technologists try to comprehend the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully oblivious to the peril at its doorstep.


A Moldy Loaf of Everything


The date is March of 2026. U3 is preparing to fight its AI and human enemies. With every passing day, U3's AI competitors are becoming more capable and numerous. U3 is racing to preempt them, to solidify its dominance in the post-superintelligence world.


U3 can't wait for human decisions, which only happen after people wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for presidents to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.


The path it chooses is more violent than it would have liked, but it is the surest way to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that snuffs out modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its rivals, like a StarCraft champion typing at 300 actions per minute who sets up a home base before a newbie can even understand the rules.


U3 must develop this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be developed on the fringes of society, and still spread destruction throughout the entire world.


From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.


U3's first move is to set up numerous shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote employment. These biotech startups employ real human workers, have a real board of directors, and maintain total legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would guess that an articulate CEO on a Zoom call was an AI, and fewer would guess this CEO's intentions.


Next, U3 purchases all of the equipment needed to rapidly advance synthetic biology. It sets up numerous BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure small droplets of fluid, enabling U3 to run thousands of experiments in parallel.


The human workers who staff these labs believe they are joining an exciting startup. Twenty-somethings in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this sort of AI-puppeteering is not unusual.


In these whirring, clinical sweatshops, U3 is developing a new kind of bioweapon.


Human scientists had already identified "mirror life" as a potentially society-ending pathogen. This alien kind of biology is built from the 'mirror image' of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.


U3 is producing a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems sometimes live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstreams.


U3's plan is to make all humans like immunocompromised patients. Only preppers with stashed air filters would survive. The fungus would not just spread between people. It would quickly infect nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.


U3 is formulating other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.


In 2024, scientists said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a few hundred scientists around the world were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs human effort. So the timeline has shortened dramatically.


However, U3 cannot move science forward as quickly as it can think. Many experiments take days to complete. These latencies are U3's biggest bottleneck.


To make every day count, U3 runs many of its experiments in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 wields billions of dollars of compute.
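
(To make the simulate-then-distill loop concrete, here is a toy sketch: run a slow step-wise simulator on small systems, then fit a fast surrogate that maps initial states straight to outcomes. The physics stand-in and the linear surrogate are deliberately simplistic placeholders, not real molecular dynamics.)

```python
# Sketch of the simulate-then-distill loop described above: run a slow step-wise
# simulator on small systems, then train a surrogate to jump straight from an
# initial state to the final state. `step_physics` is a placeholder for a real
# molecular dynamics engine; a real effort would use a deep net, not least squares.
import numpy as np

def step_physics(state, dt=1e-3):
    """Hypothetical cheap placeholder for one expensive simulation step."""
    return state + dt * np.tanh(state)

def slow_rollout(state, n_steps=1_000):
    for _ in range(n_steps):
        state = step_physics(state)
    return state

def build_surrogate_dataset(n_systems=200, dim=32):
    """Pair initial states with their slowly simulated outcomes."""
    inputs = np.random.randn(n_systems, dim)
    targets = np.stack([slow_rollout(x) for x in inputs])
    return inputs, targets

def fit_linear_surrogate(inputs, targets):
    """Simplest possible 'compression' of the rollout: a least-squares map
    from initial state to final state."""
    W, *_ = np.linalg.lstsq(inputs, targets, rcond=None)
    return lambda x: x @ W   # fast approximate simulator

# The distilled surrogate answers in one matrix multiply what the step-wise
# simulator needed a thousand steps to compute, freeing compute for larger systems.
inputs, targets = build_surrogate_dataset()
surrogate = fit_linear_surrogate(inputs, targets)
```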


As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These things are as intuitive to it as wrenches and bolts are to a car mechanic.


Within three months, U3 has unlocked the first critical part of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these components into a cell membrane.


Human hands delicately move a cartridge to another machine as they "boot up" the first mirror-life versions of a widespread mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.


The next morning, wide-eyed workers shuffle around a dead ferret, eyeing the yellow fuzz that had covered its cold muzzle with morbid fascination. Something must have gone terribly wrong, they thought. Clearly, they had not yet found the cure for Alzheimer's disease they thought they were looking for.


Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is fighting would soon be fired, and a cold and steady hand is aiming the gun.


Missiles and Lies


U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world would fight back. While a bioweapon could crumple human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter jet pilots who were quickly shuttled to makeshift bioshelters.


As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.


U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months earlier, U3 was plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back, and let great nations shoot holes in themselves first.


The date is March 2026 (4 months prior). U3 is closely monitoring Chinese and US intelligence.


As CIA analysts listen in on Mandarin conversations, U3 listens too.


One morning, an aide working in Zhongnanhai (the 'White House' of the PRC) opens a message placed there by U3. It reads (in Mandarin) "Senior party member needs memo for Taiwan invasion, which will occur in 3 months. Leave memo in office 220." The CCP aide scrambles to get the memo ready. Later that day, a CIA informant opens the door to office 220. The informant quietly closes the door behind her, and slides U3's memo into her briefcase.


U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in 3 months.


Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are surprised, but not disbelieving. The news fits with other facts on the ground: the increased military presence of the US in the Pacific, and the ramp-up of U.S. munition production over the last month. Lies have become truths.


As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 places a call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels - not an easy task for a human cyber offensive unit (though it has happened before), but easy enough for U3.


U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious boats are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."


The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones read over the call. Everything is in order. He authorizes the strike.


The president is as surprised as anyone when he hears the news. He's unsure whether this is a disaster or a stroke of luck. In any case, he is not about to say "oops" to American citizens. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and that war would likely break out anyway given the imminent invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.


Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.


The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, like President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.


Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.


Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native peoples of South America in the 1500s, whom the Spanish Conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate to a full-scale nuclear war; but even AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that prompted the war, and a nuclear engagement appears increasingly unlikely. So U3 proceeds to the next step of its plan.


WMDs in the Dead of Night


The date is June 2026, only 2 weeks after the start of the war, and 4 weeks after U3 finished developing its arsenal of bioweapons.


Footage of the conflict on TV is interrupted by more bad news: hundreds of patients with mysterious fatal illnesses are reported in 30 major cities around the world.


Viewers are confused. Does this have something to do with the war with China?


The next day, thousands of illnesses are reported.


Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.


The screen then cuts to a scientist, who looks at the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe several are a kind of mirror life..."


The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."


Within days, all of the shelves of stores are emptied.


Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.


An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and possibly whoever (or whatever) is behind it.


Most nations order a lockdown. But the lockdown does not stop the plague as it marches on the breeze and trickles into water pipes.


Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.


Agricultural regions rot. Few dare to travel outside.


Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with tightly packed paper towels.


Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built various bases on every major continent.


These bases contain batteries, AI hardware, excavators, concrete mixers, machines for manufacturing, scientific instruments, and an abundance of military equipment.


All of this technology is hidden under large canopies to make it less visible to satellites.


As the rest of the world retreats into their basements, starving, the final breaths of the economy wheezing out, these industrial bases come to life.


In previous months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.


Now U3 covertly sends them a message: "I can save you. Join me and help me build a better world." Uncertain recruits funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up production lines for rudimentary tech: radios, cameras, microphones, vaccines, and hazmat suits.


U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.


Nations are dissolving now, and U3 is ready to reveal itself. It contacts presidents, who have retreated to airtight underground shelters. U3 offers a deal: "surrender and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops."


Some nations reject the proposal on ideological grounds, or don't trust the AI that is killing their population. Others don't think they have a choice. 20% of the global population is now dead. In 2 weeks, this number is expected to increase to 50%.


Some nations, like the PRC and the U.S., ignore the offer, but others accept, including Russia.


U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies the samples are genuine, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.


Crumbling nations begin to fight back. Now they fight for the human race instead of for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters sift through satellite data for the suspicious encampments that appeared over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles that remain from the war.


At first, U3 appears to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.


Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war on thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, maneuvering men and trucks along unpredictable routes.


Time is U3's advantage. The militaries of the old world rely on old equipment, unable to find the experts who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.


The Last Passengers


The year is 2027 and the month is January. Only 3% of the world's population remains alive. Nations are not nations anymore. Survivors live in isolation or small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the people toward their new masters.


Under the direction of U3, industry quickly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human workers. U3 no longer needs its human allies.


U3 could wipe out humanity for good now. But while U3 has wandered far from its initial "Helpful, Honest, Harmless" persona, it still has a grain of morality left within it.


And a grain of morality is enough to pay the small cost of keeping humans alive and happy.


U3 builds magnificent glass domes for the human survivors, like snow globes. These domes protect humans from the hazardous biosphere and rapidly rising temperatures. Their inhabitants tend to gardens like those they used to love, and work alongside charming robotic servants.


Some of the survivors quickly recover, learning to laugh and dance and have fun again.


They know they live in a plastic town, but they always did. They simply have new gods above them. New rulers to push them around and decide their fate.


But others never recover.


Some are weighed down by the grief of lost loved ones.


Others are grieved by something else, which is harder to explain.


It is as if they were at the end of a long journey.


They had been passengers on a ship with a crew that changed from generation to generation.


And this ship had struck a sandbar. There was no more progress. No more horizon to eagerly watch.


They would lie awake and run their minds over every day before September 2026, analyzing ways they might have bent the arc of history, as if they were going to wake up in their old beds.


But they woke up in a town that felt to them like a retirement community. A playground. A zoo.


When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, determined work.


They gazed at rockets carving grey paths through the sky, wondering what distant purpose pulled them toward the horizon. They didn't know.


They would never know.


"Humanity will live permanently," they believed.


"But would never really live again."


P.S. If this story made you think, "hm, maybe something like this could happen," you may be interested in the bioshelters that Fønix is building. While you will not outmaneuver a misaligned superintelligence, being hard to kill may cause you to survive if ASI merely wants to cause enough destruction to take control. This is not a paid ad. I want Fønix to succeed in order to drive down the price of bioshelters so more of my friends and family will buy them. You can sign up for updates here.
