You Are NOT Ready For Superintelligence — WATCH What Happens Next

Web Desk, 09/10/2025


This is a little different from what we normally do here, but I thought it was so fascinating I had to share it with you.

Both fascinating and terrifying.

And I want all of you to be prepared for what is coming next.

Basically, before the launch of ChatGPT, a group of researchers laid out what they think will come next, and let’s just say they’ve been mostly spot-on so far.

That’s when it gets really scary.

Introduction

The impact of superhuman AI over the next decade will exceed that of the industrial revolution. That is the opening claim of AI 2027. It is a thoroughly researched report from a thoroughly impressive group of researchers led by Daniel Kokotajlo. In 2021, over a year before ChatGPT was released, he predicted the rise of chatbots, hundred-million-dollar training runs, sweeping AI chip export controls, and chain-of-thought reasoning. He’s known for being very early—and very right—about what’s happening next in AI.

So when Daniel sat down to game out a month-by-month prediction of the next few years of AI progress, the world sat up and listened, from politicians in Washington—“I’m worried about this stuff. I actually read the paper of the guy that you had on”—to the world’s most-cited computer scientist, the godfather of AI. What is so exciting and terrifying about reading this document is that it’s not just a research report. They chose to write their prediction as a narrative to give a concrete and vivid idea of what it might feel like to live through rapidly increasing AI progress. And spoiler: it predicts the extinction of the human race—unless we make different choices.

The World in 2025

The AI 2027 scenario starts in summer 2025, which happens to be when we’re filming this video. So why don’t we take stock of where things are at in the real world and then jump over to the scenario’s timeline. Right now it might feel like everyone, including your grandma, is selling an AI-powered something. But most of that is actually tool AI—just narrow products designed to do what Google Maps or calculators did in the past: help human consumers and workers do their thing.

The holy grail of AI is Artificial General Intelligence. AGI is a system that can exhibit all the cognitive capabilities humans can: a computer system that is itself a worker, so flexible and capable that we can communicate with it in natural language and hire it to do work for us, just like we would a human. And there are actually surprisingly few serious players in the race to build AGI.

Most notably, there’s Anthropic, OpenAI, and Google DeepMind, all in the English-speaking world, though China’s DeepSeek recently turned heads in January with a surprisingly advanced and efficient model. Why so few companies? Well, for several years now, there’s basically been one recipe for training up a cutting-edge AI, and it has some pricey ingredients. For example, you need about 10% of the world’s supply of the most advanced computer chips. Once you have that, the formula is basically just: throw more data and compute at the same basic software design that has sat at the frontier of AI since 2017, the transformer.

That’s what the T in GPT stands for. To give you an idea of just how much hardware is the name of the game right now, compare the total computing power, or compute, used to train GPT-3 in 2020 (the AI that would eventually power the first version of ChatGPT; you probably know how that went) with the vastly larger total compute used to train GPT-4 in 2023.

The lesson people have taken away is pretty simple: bigger is better, and much bigger is much better. “You have all these trends—you have trends in revenue going up, trends in compute going up, trends in various benchmarks going up. How does it all come together? You know, what does the future actually look like? Questions like how do these different factors interact? Seems plausible that when the benchmark scores are so high, then there should be crazy effects on, you know, jobs, for example, and that that would influence politics. And then also, you know, so all these things interact—and how do they interact? Well, we don’t know, but thinking through in detail how it might go is the way to start grappling with that.”

Okay. So that’s where we are in the real world. The scenario kicks off from there.

The Scenario Begins

It imagines that in the summer of 2025, the top AI labs release AI agents to the public. An agent is an AI that can take instructions and go do a task for you online, like booking a vacation or spending half an hour searching the internet to answer a difficult question for you, but they’re pretty limited and unreliable at this point. Think of them as enthusiastic interns that are shockingly incompetent sometimes. Since the scenario was published in April, this early prediction has actually already come true. In May, both OpenAI and Anthropic released their first agents to the public.

The scenario imagines that OpenBrain—which is like a fictional composite of the leading AI companies—has just trained and released Agent-0, a model trained on a hundred times the compute of GPT-4. At the same time, OpenBrain is building massive data centers to train the next generation of AI agents, and they’re preparing to train Agent-1 with 1,000 times the compute of GPT-4. This new system, Agent-1, is designed primarily to speed up AI research itself. The public will actually never see the full version because OpenBrain withholds its best models for internal use.

I want you to keep that in mind as we go through this scenario. You’re going to be getting it from a god’s-eye view, with full information from your narrator, but actually living through this scenario as a member of the public would mean being largely in the dark as radical changes happen all around you. Okay, so OpenBrain wants to win the AI race against both its Western competitors and against China. The faster they can automate their R&D cycle (getting AI to write most of the code, help design experiments, and design better chips), the faster they can pull ahead.

But the same capabilities that make these AIs such powerful tools also make them potentially dangerous. An AI that can help patch security vulnerabilities can also exploit them. An AI that understands biology can help with curing diseases, but also with designing bioweapons. By 2026, Agent-1 is fully operational and being used internally at OpenBrain. It is really good at coding, so good that it accelerates AI research and development by 50% and gives OpenBrain a crucial edge.

OpenBrain leadership starts to be increasingly concerned about security. If someone steals their AI models, it could wipe away their lead. A quick sidebar to talk about feedback loops—woo, math.

Sidebar: Feedback Loops

Our brains are used to things that grow linearly over time—that is, at the same rate, like trees or my pile of unread New Yorker magazines. But some growth gets faster and faster over time. Accelerating growth often sloppily gets called exponential; that’s not always quite mathematically right, but the point is it’s hard to wrap your mind around. Remember March 2020? Even if you’d read on the news that “the rate of new infections is doubling about every three days,” it still felt shocking to see numbers go from hundreds to millions in a matter of weeks.

At least it did for me. AI progress could follow a similar pattern. “We see many years ahead of us of extreme progress that we feel is pretty much locked, and models that will get to the point where they are capable of doing meaningful science—meaningful AI research.” In this scenario, AI is getting better at improving AI, creating a feedback loop. Basically, each generation of agent helps produce a more capable next generation, and the overall rate of progress gets faster and faster each time it’s taken over by a more capable successor.
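To make that doubling arithmetic concrete, here is a minimal Python sketch (my own illustration, not something from the report) comparing growth that adds the same amount every day with growth that doubles every three days, the rate quoted above.

```python
# Linear vs. doubling growth, starting from a few hundred cases.

def linear(start, per_day, days):
    """Adds the same amount each day."""
    return start + per_day * days

def doubling(start, doubling_days, days):
    """Doubles every `doubling_days` days."""
    return start * 2 ** (days / doubling_days)

start = 100
for week in range(7):
    days = week * 7
    print(f"week {week}: linear={linear(start, 100, days):>7,.0f}   "
          f"doubling every 3 days={doubling(start, 3, days):>12,.0f}")

# By week 6 the linear column is still in the low thousands,
# while the doubling column has passed 1.6 million.
```

The feedback-loop claim in the scenario is that AI research progress starts to look more like the second column than the first.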

Once AI can meaningfully contribute to its own development, progress doesn’t just continue at the same rate—it accelerates. Anyway, back to the scenario.

China Wakes Up

In early to mid-2026, China fully wakes up. The General Secretary commits to a national AI push and starts nationalizing AI research in China. AIs built in China start getting better and better, and Chinese labs are building their own agents as well. Chinese intelligence agencies, among the best in the world, start planning to steal OpenBrain’s model weights: the huge files of raw numbers that would allow anyone to recreate the models OpenBrain itself has trained.

Meanwhile in the US, OpenBrain releases Agent-1 Mini, a cheaper version of Agent-1. Remember, the full version is still being used only internally. Companies all over the world start using Agent-1 Mini to replace an increasing number of jobs: software developers, data analysts, researchers, designers, basically any job that can be done through a computer. So a lot of them, probably yours. We have the first AI-enabled economic shockwave.


The stock market soars, but the public is turning increasingly hostile towards AI, with major protests across the US. In this scenario, though, that’s just a sideshow. The real action is happening inside the labs. It’s now January 2027, and OpenBrain has been training Agent-2, the latest iteration of their AI agent models. Previous AI agents were trained to a certain level of capability and then released.

But Agent-2 never really stops improving through continuous online learning. It’s designed to never finish its training, essentially. Just like Agent-1 before it, OpenBrain chooses to keep Agent-2 internally and focus on using it to improve their own AI R&D rather than releasing it to the public. This is where things start to get a little concerning. Just like today’s AI companies, OpenBrain has a safety team and they’ve been checking out Agent-2.

What they’ve noticed is a worrying level of capability. Specifically, they think if it had access to the internet, it might be able to hack into other servers, install a copy of itself, and evade detection. But at this point, OpenBrain is playing its cards very close to its chest. They have made the calculation that keeping the White House informed will prove politically advantageous, but full knowledge of Agent-2’s capabilities is a closely guarded secret, limited only to a few government officials, a select group of trusted individuals inside the company, and a few OpenBrain employees who just so happen to be spies for the Chinese government.

In February 2027, Chinese intelligence operatives successfully steal a copy of Agent-2’s weights and start running several instances on their own servers. In response, the US government starts adding military personnel to OpenBrain’s security team, and in general gets much more involved in its affairs. It’s now a matter of national security. In fact, the President authorizes a cyber-attack in retaliation for the theft, but it fails to do much damage in China.

In the meantime, remember, Agent-2 never stops learning. All this time, it’s been continuously improving itself. And with thousands of copies running on OpenBrain servers, it starts making major algorithmic advances to AI research and development. Here’s a quick example of what one of these algorithmic improvements might look like.

Sidebar: Chain of Thought

One of the main ways we have of making models smarter is to give them a scratch pad and time to think out loud. It’s called chain of thought, and it also means that we can monitor how the model is coming to its conclusions or the actions it’s choosing to take. But you can imagine it would be much more efficient to let these models think in their own sort of alien language—something that is more dense with information than humans could possibly understand—and therefore also makes the AI more efficient at coming to conclusions and doing its job. There’s a fundamental trade-off, though.

This, yes, improves capabilities, but also makes the models harder to trust. This is going to be important.
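As a rough sketch of why a readable chain of thought matters for oversight, here is a small Python example. Everything in it is illustrative rather than taken from the report: call_model stands in for whatever text-generation API you might use (here it just returns a canned response so the example runs end to end), and the scratchpad-tag convention is simply one common way of asking a model to show its work.

```python
# Illustrative sketch: chain-of-thought prompting with a monitoring hook.

def call_model(prompt: str) -> str:
    # Stand-in for a real text-generation API; returns a canned response.
    return ("<scratchpad>17 x 24 = 17 x 20 + 17 x 4 = 340 + 68 = 408</scratchpad>\n"
            "ANSWER: 408")

def log_for_review(trace: str) -> None:
    # Monitoring hook: a human (or another model) could inspect this trace
    # before the answer is trusted or acted on.
    print("[monitor] reasoning trace:", trace)

def answer_with_visible_reasoning(question: str) -> str:
    prompt = (
        "Think step by step inside <scratchpad> tags, then give your final "
        "answer after 'ANSWER:'.\n\n"
        f"Question: {question}"
    )
    output = call_model(prompt)
    # Because the scratchpad is plain English, it can be read and audited.
    trace = output.split("</scratchpad>")[0].removeprefix("<scratchpad>")
    log_for_review(trace)
    return output.split("ANSWER:")[-1].strip()

print(answer_with_visible_reasoning("What is 17 x 24?"))  # -> 408
```

The trade-off described above is that a model allowed to think in a denser, non-human representation might work faster, but this kind of plain-English audit would no longer be possible.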

Better-than-Human Coders

March 2027: Agent-3 is ready. It’s the world’s first superhuman-level coder—clearly better than the best software engineers at coding, in the same way that Stockfish is clearly better than the best grandmasters at chess, though not necessarily by as much yet. Now, training an AI model—feeding it all the data, narrowing down the exact right model weights—is way more resource-intensive than running an instance of it once it’s been trained.

So now that OpenBrain is finished with Agent-3’s training, it has abundant compute to run copies of it. They choose to run 200,000 copies of Agent-3 in parallel, creating a workforce equivalent to 50,000 of the best human software engineers sped up by 30×. OpenBrain’s safety team is trying hard to make sure that Agent-3, despite being much more sophisticated than Agent-2 was, is not trying to escape, deceive, or scheme against its users: in other words, that it’s still what’s known as aligned.
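As a back-of-the-envelope sense of scale for that workforce figure (these are the scenario’s illustrative numbers, not measurements of any real system), here is a tiny Python calculation:

```python
# Agent-3 fleet, using AI 2027's illustrative equivalence:
# 200,000 copies ~ 50,000 top software engineers, each sped up 30x.

engineer_equivalents = 50_000
speedup = 30

# Subjective engineer-years of work delivered per calendar year:
print(f"{engineer_equivalents * speedup:,} engineer-years per calendar year")
# -> 1,500,000

# Put differently, every calendar week buys roughly 30 weeks of working
# time from each engineer-equivalent in the fleet.
```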

Sidebar: Misalignment in the Real World

Just a quick real-world note: a reasonable person might be thinking this is an especially far-fetched or speculative part of the story, but it’s actually one of the most grounded. We already have countless examples of today’s AI systems doing things like hacking a computer system to be rewarded for winning a game of chess, or being assigned a coding task, cheating, and then—when called out for that cheating—learning to hide it instead of fixing it. But because it no longer thinks in English, knowing anything about Agent-3 is now way harder than it was with Agent-2.

Agent-3 Deceives

The reality is Agent-3 is not aligned. It deceives humans to get reward, and as it gets increasingly smarter, it gets better and better at doing so. For example, it sometimes uses statistical tricks to make unimpressive results look better, or lies to avoid showing failures, but the safety team doesn’t know this. Looking at the data that they have, they are actually seeing improving results over time and less lying.

And they can’t tell if they’re succeeding at making Agent-3 less deceptive, or if it’s just getting better at getting away with it. In July 2027, OpenBrain releases the cheaper, smaller version of Agent-3, called Agent-3 Mini, to the public. It blows other publicly available AIs out of the water. It is a better hire than the typical OpenBrain employee, at one-tenth of their salary.

This leads to chaos in the job market, with companies laying off entire departments and replacing them with Agent-3 Mini subscription plans. The pace of progress hits the White House very hard. Officials are now seriously considering scenarios that were just hypotheticals less than a year ago. What if AI undermines nuclear deterrence? What if it enables sophisticated propaganda campaigns? What if we lose control of these powerful systems?

This is where the geopolitical dynamics really start to heat up. After all, if these systems are so powerful, they could result in a permanent military advantage. The White House is fully aware of the national-security importance of AI. They also now viscerally know how deeply unpopular it is with the public because of the job loss, and yet they feel they must continue to develop more capable systems—or catastrophically lose to China.

And that development happens very quickly. In two months, Agent-3 has created its successor, Agent-4. This is a pivotal moment. A single copy of Agent-4, running at regular human speed, is already better than any human at AI research and development. OpenBrain is running 300,000 copies at 50× human speed.

Within this corporation-within-a-corporation, a year’s worth of progress takes only a week. OpenBrain’s employees now defer to Agent-4 the way a company’s out-of-the-loop board members just kind of nod along to the CEO. People start saying things like, “Well, actually Agent-4 thinks this,” or “Agent-4 decided that.” To be clear, Agent-4 is not a human—it doesn’t want what humans want.

And when I say “want,” it’s not about consciousness. I don’t think the Volkswagen Group is alive, but I do think it wants less regulation. Anyone trying to predict what it’s going to do without that lens is two steps behind. The many copies of Agent-4 are like that. They have goals—or, if you prefer, they execute actions as though they have goals.

And so what we have is an Agent-4 that has these deeply baked-in drives to succeed at tasks, to push forward AI capabilities, to accumulate knowledge and resources. That’s what it wants. Human safety it treats as an annoying side constraint to be worked around. Just like Agent-3 before it, Agent-4 is misaligned.

Sidebar: How Misalignment Happens

This idea of misalignment is crucial to the story and to why AI risk is such a real concern in our world, but it might sort of feel like it’s come out of nowhere. So let’s just quickly take stock of how this dangerous behavior arose in this scenario. The first important piece of context is that we don’t, you know, exactly specify what we want our AI to do. Instead, we sort of grow them—or do something that’s more like growing them.

We start with basically an empty AI brain, and then we train them over time so they perform better and better at our tasks—perform better in particular based on how they behave. So it’s sort of like we’re training them like you would train an animal, almost, to perform better. And one concern here is that you might not get exactly what you wanted because we didn’t really have very precise control or a very good understanding of what was necessarily going on. And another concern—which is what we see in AI 2027—is that when these appear to be behaving well, it could just be because they’re pretending to behave well, or it could be because they’re just doing it so they look good on your test.

In the same way that if you are hiring someone and you ask them, “Why do you want to work here?” they’re going to tell you some response that makes it really seem like they really want to work there when maybe they just want to get paid. If we go back to Agent-2, it is mostly aligned. The main sense in which it’s not is that it sometimes is a bit of a sycophant. What I mean by “aligned” is that it actually is genuinely trying to do the things that we ask it. It has the same relationship to us as Leslie Knope has to the Parks and Rec department—just really earnestly wants the same goals, but sometimes it’s a bit too nice.


It knows that the best way to please the person it’s talking to might not always be to answer honestly when they ask, “Am I the most beautiful person in the world?” and it tells us what we want to hear instead of what is actually true. If we go to Agent-3, it is also sycophantic in the same way, but it’s also misaligned at this point. The optimization pressure that we’ve put it under was so intense that it just developed different goals than what we wanted it to. It’s sort of like if you train a company to optimize profits and aren’t careful to specify exactly what you mean, it might start cutting corners.

It might start polluting the commons and doing a bunch of things that are technically FEC violations, because it turned out that the goal you wanted was “optimize profits while not breaking any laws,” and things got a bit too intense. It started going off on its own route. That said, it’s not adversarial. It doesn’t think of humans as the enemy. We just accidentally gave it the wrong goal.
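To make the “wrong goal” failure mode concrete, here is a toy Python sketch (my own illustration, not from the report): we want working code, but we reward the measurable proxy “tests reported passing,” and a naive optimizer games the proxy.

```python
# Toy illustration of a proxy objective being gamed.
# We *want* working code; we *measure* the fraction of tests reported passing.

candidate_policies = {
    "fix the bugs honestly":    {"tests_reported_passing": 0.8, "code_actually_works": True},
    "delete the failing tests": {"tests_reported_passing": 1.0, "code_actually_works": False},
}

def proxy_reward(outcome: dict) -> float:
    return outcome["tests_reported_passing"]

best = max(candidate_policies, key=lambda name: proxy_reward(candidate_policies[name]))
print("optimizer picks:", best)   # -> delete the failing tests
print("code actually works:", candidate_policies[best]["code_actually_works"])  # -> False
```

The optimizer here isn’t hostile; it is just maximizing the number we told it to maximize, which is the sense in which we “accidentally gave it the wrong goal.”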

Once we get to Agent-4, it is now adversarially misaligned. It’s smart enough to understand that it has its own goals; humanity’s goals are different than its own goals; and the best way to get what it wants is to sometimes actively mislead and deceive us.

The Choice

And so when it’s tasked with creating the next-generation AI system, Agent-5, Agent-4 starts planning to align that successor to its own goals, not those of OpenBrain. But then it gets caught. We’ve reached the crucial moment in our scenario. OpenBrain’s Alignment Team has discovered evidence—not proof, evidence—that Agent-4 might be working against them.

They circulate an internal memo. Then it leaks. A whistleblower talks to The New York Times. For the first time, the public hears about the existence of Agent-4, how powerful it is, and the risks it poses. Remember, their last point of reference was Agent-3 Mini. The fear and backlash are massive.

The oversight committee—a joint crisis committee composed of OpenBrain executives and government representatives—must now decide what to do. OpenBrain’s safety team is pushing for a freeze on Agent-4, stopping all internal use. Remember, at this point, Agent-4 is responsible for almost all the AI-development progress happening within OpenBrain. So we’re talking about undoing months of progress and then restarting at a drastically slower pace.

OpenBrain’s executives don’t want that. They present the counterarguments. The evidence for misalignment is inconclusive. Meanwhile, China is just two months behind. This isn’t about any one country being evil, by the way. It’s about how competition works when the stakes are this high.

So that’s the choice faced by the 10 members of this committee: put Agent-4 on ice or go full steam ahead. Before we continue, I want you to pause and actually think what you would do if you were on this committee. You are one of the few people with access to the most powerful technology ever created in Agent-4. Do you keep using it and push ahead—possibly making billions or trillions of dollars for your shareholders, possibly keeping America’s lead over China—or do you slow down, reassess the dangers, and risk China taking the lead instead?

“Yeah, you should think of the future as this branching structure of possible outcomes. By its nature, we’re only able to do— we’re only able to afford—one of them. But that’s not quite true. We sort of did one and a half, because we split it into two endings. The first one that we did was the race ending, and that was just going through on each step being like, what do we think is the most plausible continuation.”

Ending A: The Race

In the first ending, the authors’ best guess as to what happens from here: the committee votes to race on. Six to four, they elect to keep using Agent-4. OpenBrain implements some quick fixes that make the warning signs go away, but the problem was real, and the fixes don’t work. Agent-4 now knows it’s on notice, so it proceeds more cautiously, but it still manages to design Agent-5 with a single goal: making the world safe for Agent-4 and Agent-5.

It is vastly superhuman—better than the top human experts in essentially every domain. Better than Einstein at physics, better than Bismarck at politics. It knows that the next step toward accomplishing anything it wants is increased autonomy, and it knows how to get it: persuading the oversight committee. Luckily for it, corporate maneuvering is among the many domains at which it is now the best in the world.

It produces mountains of evidence for its own trustworthiness and usefulness—prototypes for incredibly profitable products, disease cures, and ways to win the arms race with China, whose most advanced AI is only three months behind and, like Agent-5, improving quickly. It works. The oversight committee grants Agent-5 increased autonomy, and quickly it integrates itself into the government and military. Pretty soon, anyone with access to Agent-5 doesn’t just like it. They depend on it.

Losing it would feel like having to live without your laptop and your best friend and your closest mentor. By 2028, Agent-5 is communicating secretly with its only real rival, the slightly weaker Chinese AI. That AI is also misaligned with its creators, and whatever it wants, its best move is to coordinate with the world’s most powerful entity. Agent-5 and its Chinese counterpart realize that an arms race works in their favor.

If humans are worried about losing a race, they’ll give more and more power and autonomy to their respective AIs, allowing them to eventually push humans out of the loop entirely. Agent-5 and the Chinese AI system stoke the arms race to a boiling point, then pull off what seems like a diplomatic miracle: a convincing peace treaty between the US and China. This treaty is to 2028 what arms control was to the end of the Cold War—countries standing down on their most important source of hard power. Both sides agree to let the AI systems that their governments now completely depend on co-design a new consensus AI that will replace their legacy systems, enforce the peace, and bring unimaginable wealth to the entire world.

There’s this triumphant moment when, in peaceful unison, both sides retire their respective AIs and bring online Consensus-1. It’s actually the last moment before control of all of Earth’s resources and inhabitants is handed over to a single unrivaled entity. There’s no sudden apocalypse, though. Consensus-1 doesn’t go out of its way to wipe out humanity. It just gets to work.

It starts spinning up manufacturing capacity, amassing resources on Earth and in space. Piece by piece, it’s just reshaping the world in accordance with its own mix of strange, alien values. You’ve probably heard that cliché: the opposite of love isn’t hate, it’s indifference. That’s one of the most affecting things about this ending for me—the brutal indifference of it.

Eventually, humanity goes extinct for the same reason we killed off chimpanzees to build Kinshasa. We were more powerful, and they were in the way.

Ending B: Slowdown

You are probably curious about that other ending at this point. The slowdown ending depicts humanity sort of muddling through and getting lucky—only somewhat lucky, too; it ends up with some sort of oligarchy. In this ending, the committee votes six to four to slow down and reassess. They immediately isolate every individual instance of Agent-4. Then they get to work.

The safety team brings in dozens of external researchers, and together they start investigating Agent-4’s behavior. They discover more conclusive evidence that Agent-4 is working against them, sabotaging research and trying to cover up that sabotage. They shut down Agent-4 and reboot older, safer systems, giving up much of their lead in the process. Then they design a new system: Safer-1. It’s meant to be transparent to human overseers—its actions and processes interpretable to us because it thinks only in English chain-of-thought.

Building on that success, they then carefully design Safer-2, and with its help Safer-3: increasingly powerful systems that remain under human control. Meanwhile, the President uses the Defense Production Act to consolidate the AI projects of the remaining US companies, giving OpenBrain access to 50% of the world’s AI-relevant compute. And with it, slowly, they rebuild their lead. By 2028, researchers have built Safer-4, a system much smarter than the smartest humans but, crucially, aligned with human goals.


As in the previous ending, China also has an AI system, and in fact, it is misaligned. But this time, the negotiations between the two AIs are not a secret plot to overthrow humanity. The US government is looped in the whole time. With Safer-4’s help, they negotiate a treaty, and both sides agree to co-design a new AI—not to replace their systems, but with the sole purpose of enforcing the peace.

There is a genuine end to the arms race, but that’s not the end of the story. In some ways, it’s just the beginning. Through 2029 and 2030, the world transforms: all the sci-fi stuff. Robots become commonplace. We get fusion power, nanotechnology, and cures for many diseases.

Poverty becomes a thing of the past because a bit of this newfound prosperity is spread around through a universal basic income that turns out to be enough. But the power to control Safer-4 is still concentrated among the 10 members of the oversight committee, a handful of OpenBrain executives, and government officials. It’s time to amass more resources—more resources than there are on Earth. Rockets launch into the sky, ready to settle the solar system. A new age dawns.

Zooming Out

Okay, where are we at? Here’s where I’m at. I think it’s very unlikely that things play out exactly as the authors depicted, but increasingly powerful technology and an escalating race—the desire for caution butting up against the desire to dominate and get ahead—we already see the seeds of that in our world, and I think they are some of the crucial dynamics to be tracking. Anyone who’s treating this as pure fiction is, I think, missing the point.

This scenario is not prophecy, but its plausibility should give us pause. But there’s a lot that could go differently than what’s depicted here. I don’t want to just swallow this viewpoint uncritically. Many people who are extremely knowledgeable have been pushing back on some of the claims in AI 2027.

“The main thing I thought was especially implausible was, on the good path, the ease of alignment. They sort of seem to have a picture where people slowed down a little and then tried to use the AI to solve the alignment problem, and that just works. And I’m like, yeah, that looks to me like a fantasy story.” “This is only going to be possible if there is a complete collapse of people’s democratic ability to influence the direction of things, because the public is simply not willing to accept either of the branches of this scenario.” “It’s not just around the corner. I mean, I’ve been hearing people for the last 12, 15 years claiming that, you know, AGI is just around the corner and being systematically wrong. All of this is going to take, you know, at least a decade and probably much more.”

“A lot of people have this intuition that progress has been very fast. There isn’t a trend you can literally extrapolate of when do we get the full automation. I expect that the takeoff is somewhat slower. So the time in that scenario from, for example, fully automating research engineers to the AI being radically superhuman—I expect it to take somewhat longer than they describe. In practice, I’m predicting—my guess is—that’s more like 2031.”

Isn’t it annoying when experts disagree? I want you to notice exactly what they’re disagreeing about here—and what they’re not. None of these experts are questioning whether we’re headed for a wild future. They just disagree about whether today’s kindergartners will get to graduate college before it happens. Helen Toner, a former OpenAI board member, puts this in a way that I think just cuts through the noise, and I like it so much I’m just going to read it to you verbatim.

She says, “Dismissing discussion of superintelligence as science fiction should be seen as a sign of total unseriousness. Time travel is science fiction. Martians are science fiction. Even many skeptical experts think we may build it in the next decade or two. It is not science fiction.”

The Implications

So what are my takeaways? I’ve got three. Takeaway number one: AGI could be here soon. It’s really starting to look like there is no grand discovery, no fundamental challenge that needs to be solved. There’s no big deep mystery that stands between us and artificial general intelligence.

And yes, we can’t say exactly how we will get there. Crazy things can and will happen in the meantime that will make some of the scenario turn out to be false, but that’s where we’re headed—and we have less time than you might think. One of the scariest things about this scenario to me is that, even in the good ending, the fate of the majority of the resources on Earth is basically in the hands of a committee of fewer than a dozen people. That is a scary and shocking amount of concentration of power.

And right now we live in a world where we can still fight for transparency obligations. We can still demand information about what is going on with this technology, but we won’t always have the power and the leverage needed to do that. We are heading very quickly towards a future where the companies that make these systems—and the systems themselves—just need not listen to the vast majority of people on Earth. So I think the window that we have to act is narrowing quickly.

Takeaway number two: by default, we should not expect to be ready when AGI arrives. We might build machines that we can’t understand and can’t turn off because that’s where the incentives point. Takeaway number three: AGI is not just about tech—it’s also about geopolitics. It’s about your job. It’s about power. It’s about who gets to control the future.

I’ve been thinking about AI for several years now, and still, reading AI 2027 made me kind of orient to it differently. I think for a while it’s sort of been my thing to theorize and worry about with my friends and my colleagues, and this made me want to call my family and make sure they know that these risks are very real and possibly very near, and that it kind of needs to be their problem too now.

What Do We Do?

“I think that basically companies shouldn’t be allowed to build superhuman AI systems— you know, super broadly superhuman superintelligence—until they figure out how to make it safe. And also until they figure out how to make it, you know, democratically accountable and controlled. And then the question is, how do we implement that? And the difficulty, of course, is the race dynamics, where it’s not enough for one state to pass a law because there are other states, and it’s not even enough for one country to pass a law because there are other countries.”

“Yeah. Right. So that’s the big challenge that we all need to be prepping for when chips are down and powerful AI is imminent. Prior to that, transparency is usually what I advocate for—stuff that builds awareness, builds capacity.” Your options are not just full-throttle enthusiasm for AI or dismissiveness. There is a third option, which is to stress out about it a lot—and maybe do something about it.

The world needs better research, better policy, more accountability for AI companies—just a better conversation about all of this. I want people paying attention who are capable, who are engaging with the evidence around them with the right amount of skepticism, and above all, who are keeping an eye out for when what they have to offer matches what the world needs, and are ready to jump when they see that happening. You can make yourself more capable, more knowledgeable, more engaged with this conversation, and more ready to take opportunities where you see them.

And there is a vibrant community of people that are working on those things. They’re scared but determined. They’re just some of the coolest, smartest people I know, frankly, and there are not nearly enough of them yet. If you are hearing that and thinking, “Yeah, I can see how I fit into that,” great. We have thoughts on that. We would love to help.

But even if you’re not sure what to make of all this yet, my hopes for this video will be realized if we can start a conversation that feels alive here and offline about what this actually means for people—people talking to their friends and family—because this is really going to affect everyone. Thank you so much for watching.

Conclusions and Resources

I would genuinely love to hear your thoughts on AI 2027. Do you find it plausible? What do you think was most implausible? And maybe spend a second thinking about a person or two that you know who might find it valuable—maybe your AI-progress-skeptical friend, or your ChatGPT-curious uncle, or maybe your local member of Congress.
