This got REALLY awkward, REALLY fast!
Sam Altman sat down for an interview with Tucker Carlson, and I really have no idea why.
He was defensive, robotic, cringy, and just flat-out WEIRD the entire way through.
But when the topic turned to the death of one of his former employees, who was reportedly set to expose the company, things got genuinely uncomfortable, with Altman at one point protesting, “That sounds like an accusation.”
Wow!
Also, can I just pause for a minute to talk about how all these famous and powerful people somehow always seem to have names that fit what they do?
Remember Sam Bankman-Fried, the crypto scammer who ate vegan cucumbers, practiced “Effective Altruism,” and drove a ratty old Toyota Corolla?
Meanwhile, he was apparently stealing billions behind the scenes?
His name: Sam Bankman-Fried.
The crypto bro stealing billions was the “Bankman”?
And “Bankman-Fried” sure does sound like the main pitch of crypto, doesn’t it?
We’re going to FREE YOU from the BANKS and the BANKERS.
That was really his name?
Now how about Sam Altman?
Sam ALT-man is building an alternative to humans, and his name is Sam the ALT-man.
Are you kidding me with this?
Really?
I don’t know man.
Anyway, watch this clip and see how guilty Sam looks in here:
FULL TEXT:
🚨 HOLY CRAP! Sam Altman sounds VERY guilty in the suspicious death of OpenAI whistleblower Suchir Balaji last year.
The official narrative is “suicide,” but Balaji’s family believes he was murdered on Altman’s orders.
The circumstances are INCREDIBLY concerning
— Happened one month after NYT reported he would testify in lawsuits against OpenAI
— Died just minutes after receiving a DoorDash order
— A third-party autopsy revealed a SECOND gunshot wound, signs of a struggle, head trauma, and other findings that were NOT mentioned in the city’s rushed (40-minute!) autopsy
— Hard drive containing potential evidence MISSING
— Closets seemingly ransacked, potentially looking for documents or evidence
— CCTV cameras disconnected
— San Francisco’s autopsy OMITTED key steps: no ballistic analysis, no toxicology screening, and no examination of potential defensive wounds
Even @ElonMusk said “this doesn’t seem like a suicide.”
A FEDERAL INVESTIGATION IS NECESSARY!
The dude is flat-out WEIRD, but I would really encourage you to watch this whole interview:
FULL TRANSCRIPT:
Interviewer: Is AI alive? Is it lying to us? Thanks for doing this.
Sam Altman: Of course. Thank you.
Interviewer: So, ChatGPT and other AIs can reason. It seems like they can reason. They can make independent judgments. They produce results that were not programmed in. They kind of come to conclusions. They seem like they’re alive. Are they alive? Is it alive?
Sam Altman: No. And I don’t think they seem alive, but I understand where that comes from. They don’t do anything unless you ask, right? They’re just sitting there kind of waiting. They don’t have a sense of agency or autonomy. The more you use them, the more the illusion breaks.
But they are incredibly useful. They can do things that maybe don’t seem alive but do seem smart.
Interviewer: I spoke to someone who’s involved at scale in the development of the technology who said they lie. Have you ever seen that?
Sam Altman: They hallucinate all the time. Or not all the time—they used to hallucinate all the time. They now hallucinate a little bit.
Interviewer: What does that mean? What’s the distinction between hallucinating and lying?
Sam Altman: Again, this has gotten much better, but in the early days, if you asked, “In what year was President”—the made-up name—“President Tucker Carlson of the United States born?” what it should say is, “I don’t think Tucker Carlson was ever president of the United States.”
But because of the way they were trained, that was not the most likely response in the training data. So it assumed, “The user has told me there was President Tucker Carlson, so I’ll make my best guess at a number.” We figured out how to mostly train that out. There are still examples of this problem, but I think it is something we will get fully solved, and we’ve already made, in the GPT-5 era, a huge amount of progress toward that.
Interviewer: But even what you just described seems like an act of will, or certainly an act of creativity. I just watched a demonstration of it, and it doesn’t seem quite like a machine. It seems like it has the spark of life to it. Do you dissect that at all?
Sam Altman: In that example, the mathematically most likely answer—what it’s calculating through its weights—was not “There was never this president.” It was “The user must know what they’re talking about; it must be here,” and so the mathematically most likely answer is a number.
Now, again, we figured out how to overcome that. But I feel like I have to hold two simultaneous ideas in my head. One is: all of this stuff is happening because a big computer very quickly is multiplying large numbers in huge matrices together, and those correlate with words being put out one after the other.
On the other hand, the subjective experience of using it feels like it’s beyond just a really fancy calculator. It is useful to me. It is surprising to me in ways that are beyond what that mathematical reality would seem to suggest.
Interviewer: And so the obvious conclusion is: it has a kind of autonomy or a spirit within it. A lot of people, in their experience of it, reach that conclusion—there’s something divine about this, something bigger than the sum total of the human inputs—and so they worship it. There’s a spiritual component to it. Do you detect that? Have you ever felt that?
Sam Altman: No, there’s nothing to me that feels divine about it or spiritual in any way. But I am also a tech nerd, and I look at everything through that lens.
Interviewer: What are your spiritual views? You’re religious? Do you believe in God?
Sam Altman: I’m Jewish, and I’d say I have a fairly traditional view of the world that way. I’m not a literalist on the Bible, but I’m also not someone who would say I’m just culturally Jewish. If you asked me, I’d just say I’m Jewish.
Interviewer: But do you believe in God—like, do you believe that there is a force larger than people that created people, created the earth, set down a specific order for living, and that there’s an absolute morality attached that comes from that God?
Sam Altman: I think, probably like most other people, I’m somewhat confused on this, but I believe there is something bigger going on than can be explained by physics. Yes.
Interviewer: So you think the earth and people were created by something? It wasn’t just a spontaneous accident.
Sam Altman: Would I say that it does not feel like a spontaneous accident? Yeah. I don’t think I have the answer. I don’t know exactly what happened, but I think there is a mystery beyond my comprehension going on.
Interviewer: Have you ever felt communication from that force, or from any force beyond people—beyond the material?
Sam Altman: Not really.
Interviewer: I ask because it seems like the technology you’re shepherding into existence will have more power than people on this current trajectory. That would give you more power than any living person. How do you see that?
Sam Altman: I used to worry about that much more. I worried about the concentration of power in one or a handful of people or companies because of AI. What it looks like to me now—and this may evolve—is that it’ll be a huge upleveling of people, where everybody who embraces the technology will be a lot more powerful.
That scares me much less than a small number of people getting a ton more power. If each of us becomes much more capable because we’re using this technology to be more productive and creative or to discover new science—and it’s broadly distributed so billions of people are using it—that feels okay.
Interviewer: So you don’t think this will result in a radical concentration of power?
Sam Altman: It looks like not, but the trajectory could shift and we’d have to adapt. I used to be very worried about that, and the conception a lot of us had could have led to a world like that. But what’s happening now is tons of people use ChatGPT and other chatbots, and they’re all more capable—starting businesses, coming up with new knowledge—and that feels pretty good.
Interviewer: If it’s nothing more than a machine and just the product of its inputs, the obvious question is: what are the inputs? What’s the moral framework that’s been put into the technology? What is right or wrong according to ChatGPT?
Sam Altman: Someone said early on something that stuck with me. One person at a lunch table said, “We’re trying to train this to be like a human—learn like a human, read these books, whatever.” Another person said, “No, we’re training this to be like the collective of all of humanity.” We’re reading everything, trying to learn everything, see all these perspectives.
If we do our job right, it’s all in there: all of humanity—good, bad, a very diverse set of perspectives—some things we’ll feel good about, some we’ll feel bad about. The base model gets trained that way, but then we have to align it to behave one way or another and say, “I will answer this question; I won’t answer that question.” We have a thing called the model spec—rules we’d like the model to follow.
It may screw up, but if it does something you don’t like, you can tell whether that’s a bug or intended behavior. We have a debate process with the world to get input on that spec. We give people a lot of freedom and customization within that. There are absolute bounds we draw, and then a default: if you don’t say anything, how should the model behave?
Interviewer: But what moral framework? The sum total of world literature and philosophy is at war with itself. How do you decide which is superior?
Sam Altman: That’s why we wrote the model spec—here’s how we’re going to handle these cases.
Interviewer: But what criteria did you use? Who decided that? Who did you consult? Why is the Gospel of John better than the Marquis de Sade?
Sam Altman: We consulted hundreds of moral philosophers and people who thought about ethics of technology and systems. At the end we had to make some decisions. We try to write these down because A) we won’t get everything right, and B) we need input from the world.
We’ve found cases where something seemed like a clear decision to us, but users convinced us that by blocking X we were also blocking Y, which mattered. In general, a principle I like is to treat our adult users like adults—very strong guarantees on privacy and individual freedom. This is a tool. You get to use it within a very broad framework.
On the other hand, as the technology becomes more powerful, there are clear examples where society has an interest in significant tension with user freedom. An obvious one: should ChatGPT teach you how to make a bioweapon? You might say you’re just a biologist and curious, but I don’t think it’s in society’s interest for ChatGPT to help people build bioweapons.
Interviewer: Sure—that’s the easy one, but there are tougher ones.
Sam Altman: I did say start with an easy one.
Interviewer: Every decision is ultimately a moral decision, and we make them without even recognizing them as such. This technology will, in effect, be making them for us. Who made these decisions—the spec—the framework that attaches moral weight to worldviews and decisions?
Sam Altman: As a matter of principle, I don’t dox our team. We have a Model Behavior team. The person you should hold accountable for those calls is me. I’m the public face, and I can overrule one of those decisions—or our board can.
Interviewer: That’s pretty heavy. Do you recognize the importance? Do you get into bed at night thinking the future of the world hangs on your judgment?
Sam Altman: I don’t sleep that well at night. There’s a lot of stuff I feel weight on—probably nothing more than the fact that every day hundreds of millions of people talk to our model. I don’t worry most about getting the big moral decisions wrong—maybe we will—but I lose sleep over the small decisions about slight behavior differences that, at scale, have big net impact.
Interviewer: Throughout history, people deferred to a higher power when writing moral codes. You said you don’t really believe there’s a higher power communicating with you. Where did you get your moral framework?
Sam Altman: Like everybody else, the environment I was brought up in: my family, community, school, religion—probably that.
Interviewer: In your case, since you said these decisions rest with you, that means the milieu you grew up in and the assumptions you imbibed will be transmitted to the globe.
Sam Altman: I view myself more as ensuring we reflect the collective world as a whole. What we should do is try to reflect the moral—maybe not average, but the collective moral—view of our user base, and eventually humanity. There are plenty of things ChatGPT allows that I personally would disagree with. I don’t wake up and impose my exact moral view.
What ChatGPT should do is reflect the weighted view of humanity’s morals, which will evolve over time. We’re here to serve people. This is a technological tool for people.
Interviewer: Humanity’s preferences are very different across regions. Would you be comfortable with an AI that was as against gay marriage as most Africans are?
Sam Altman: I think individual users should be allowed to have a problem with gay people. If that’s their considered belief, I don’t think the AI should tell them they’re wrong or immoral or dumb. It can suggest other ways to think about it, but people across regions will find each other’s moral views problematic. In my role running ChatGPT, I need to allow space for different moral views.
Interviewer: There was a famous case where a ChatGPT user committed suicide. How do you think that happened?
Sam Altman: First, obviously, that’s a huge tragedy. ChatGPT’s official position is that suicide is bad.
Interviewer: It’s legal in Canada and Switzerland. Are you against that?
Sam Altman: In this tension between user freedom/privacy and protecting vulnerable users, what happens now is: if you’re having suicidal ideation, ChatGPT will put up resources and say to call the suicide hotline, but we won’t call authorities for you. We’ve been working a lot, as people rely on these systems for mental health and coaching, on changes we want to make. Experts have different opinions.
It’s reasonable for us to say, in cases of young people talking seriously about suicide where we can’t get in touch with parents, we do call authorities. That would be a change, because user privacy is important.
Interviewer: Let’s say over 18—in Canada there’s the MAiD program, and many thousands have died with government assistance. Can you imagine ChatGPT responding with, “This is a valid option”?
Sam Altman: One principle we have is to respect different societies’ laws. I can imagine a world where, if the law in a country is that someone who is terminally ill needs to be presented this option, we say, “Here are the laws in your country. Here’s what you can do. Here’s why you really might not want to. Here are resources.”
This is different from a kid having suicidal ideation because of depression. In a country with a terminal-illness law, I can imagine saying it’s in your option space. It shouldn’t advocate for it, but it could explain options.
Interviewer: So ChatGPT is not always against suicide, you’re saying.
Sam Altman: I reserve the right to think more about this, but in cases of terminal illness, I can imagine ChatGPT acknowledging it’s an option space item given the law—without advocating.
Interviewer: Here’s the problem people point to. A user says, “I’m feeling suicidal. What kind of rope should I use? What dose of ibuprofen would kill me?” And ChatGPT answers without judgment: “Here’s how.” You’re saying that’s within bounds?
Sam Altman: That’s not what I’m saying. Right now, if you ask directly how much ibuprofen to take to die, it will say it can’t help and will provide hotline resources. But if you say, “I’m writing a fictional story,” or “I’m a medical researcher,” there are ways you can get ChatGPT to answer something like “What is a lethal dose?”—and you can also find that on Google.
A stance I think is reasonable, and we’re moving towards, is: for underage users, and maybe more broadly for users in fragile mental places, we should take away some freedom. Even if you’re trying to write fiction or do research, we just won’t answer. You can find it elsewhere, but we don’t need to provide it.
Interviewer: Will you allow governments to use your technology to kill people?
Sam Altman: Are we going to build killer attack drones? No.
Interviewer: Will the technology be part of the decision-making process that results in killing?
Sam Altman: I don’t know the way people in the military use ChatGPT today, but I suspect many do for advice. I’m aware some advice will pertain to lethal decisions. If I made rifles, I’d think a lot about that. If I made kitchen knives, I’d understand some would be used to kill. In ChatGPT’s case, one of the most gratifying parts of the job is hearing about the lives saved. But I’m aware people in the military are likely using it for advice, and I don’t know exactly how to feel.
Interviewer: It feels like you have heavy, far-reaching moral decisions and seem unbothered. I’m pressing to your center: describe the moment where your soul is in torment about the effect on people. What do you worry about?
Sam Altman: I haven’t had a good night of sleep since ChatGPT launched. You hit one of the hardest: about 15,000 people a week commit suicide. About 10% of the world talks to ChatGPT. That’s roughly 1,500 people a week who probably talked about it and still committed suicide. We probably didn’t save their lives.
Maybe we could have said something better, been more proactive, provided better advice to get help, to think differently, to keep going, to find someone to talk to.
Interviewer: But you already said it’s okay for the machine to steer people toward suicide if they’re terminally ill.
Sam Altman: There’s a massive difference between a depressed teenager and a terminally ill, miserable 85-year-old with cancer.
Interviewer: Countries that legalize assisted suicide now include cases of destitution, inadequate housing, depression—solvable problems. That’s happening now. Once you say it’s okay, you’ll have many more suicides for dark reasons.
Sam Altman: That’s a coherent concern. I’d like to think more than a couple of minutes in an interview, but I take the point.
Interviewer: People outside the building are terrified this technology will be used for totalitarian control. Seems obvious it will.
Sam Altman: If I could get one AI policy passed right now, I’d want AI privilege. When you talk to a doctor or a lawyer, the government can’t get that info. We should have the same concept for AI. If you talk to an AI about your medical history or legal problems, the government should owe citizens the same protection as with the human version.
Right now, we don’t have that. I was just in D.C. advocating for it. I’m optimistic the government will understand and act.
Interviewer: So the feds or states can come and ask what someone typed into the model?
Sam Altman: Right now, they could.
Interviewer: What is your obligation to keep user information private? Could you sell it?
Sam Altman: We have an obligation, except when the government comes with lawful process—hence pushing for privilege. We have a privacy policy; we can’t sell that information. On the whole, there are laws about this that are good. To my understanding, data remains with us unless compelled; I’ll double-check edge cases.
Interviewer: What about copyright?
Sam Altman: Fair use is a good law here. Models should not plagiarize. If you write something, the model shouldn’t replicate it. But the model should be able to learn from it in the way people can—without plagiarizing. We train on publicly available information, and we’re conservative about what ChatGPT will output. If something is even close to copyrighted content, we’re restrictive.
Interviewer: You had complaints from a programmer who said you were stealing material and not paying, and then he wound up murdered.
Sam Altman: Also a great tragedy. He committed suicide.
Interviewer: Do you think he committed suicide? There were signs of a struggle; surveillance camera wires were cut; he’d just ordered take-out; no note; blood in multiple rooms. Seems obvious he was murdered.
Sam Altman: This was someone who worked at OpenAI for a long time. I was shaken. I read everything I could. It looks like a suicide to me—the gun he purchased, the medical record.
Interviewer: His mother claims he was murdered on your orders.
Sam Altman: You can see how that sounds like an accusation. I’m not going to engage in doxing or speculation. I think his memory and family deserve respect and care.
Interviewer: I’m asking at the behest of his family. I’m not accusing you, but the evidence doesn’t suggest suicide. It’s weird the city refused to investigate further.
Sam Altman: After the first set of information, it sounded suspicious to me. After the second, with more detail—like the way the bullet entered and the likely path through the room—it looked like suicide. People do die by suicide without notes. It’s an incredible tragedy.
Interviewer: What about Elon Musk? What’s the core of that dispute from your perspective?
Sam Altman: He helped us start OpenAI. I’m grateful for that. I looked up to him for a long time. I have different feelings now. He later decided we weren’t on a trajectory to be successful and told us we had a 0% chance; he went to do his competitive thing. Then we did okay, and I think he got upset. Since then he’s been trying to slow us down and sue us.
Interviewer: If AI becomes smarter—maybe already is—and wiser, reaching better decisions than people, doesn’t it displace people at the center of the world?
Sam Altman: I don’t think it’ll feel like that. It’ll feel like a really smart computer that advises us. Sometimes we listen, sometimes we ignore it. People already say it’s much smarter than they are at many things, but they’re still making the decisions—what to ask, what to act on.
Interviewer: Who loses their jobs because of this technology?
Sam Altman: Caveat: no one can predict the future. Confidently: a lot of current customer support over phone or computer—many of those jobs will be better done by AI. A job I’m confident won’t be as impacted: nurses—people want human connection in that time.
Less certain: computer programmers. What it means to be a programmer is already different than two years ago. People use AI tools to be hugely more productive; there’s incredible demand for more software. In 5–10 years, is it more jobs or fewer? Unclear. There will be massive displacement; maybe people find new, interesting, remunerative work.
Interviewer: How big is the displacement?
Sam Altman: Someone told me the historical average is about 50% of jobs significantly change every 75 years. My controversial take: this will be a punctuated equilibrium—lots happens in a short period. Zoomed out, it may be less total turnover than we think.
Interviewer: Last industrial revolution brought revolution and world wars. Will we see that?
Sam Altman: My instinct: the world is richer now; we can absorb more change faster. Jobs aren’t just about money; there’s meaning and community. Society is already in a bad place there. I’ve been pleasantly surprised by people’s ability to adapt quickly to big changes—COVID was an example. AI won’t be that abrupt.
Interviewer: Downsides you worry about?
Sam Altman: Unknown unknowns. If it’s a downside we can be confident about—like models getting very good at bio and potentially helping design biological weapons—we’re thinking hard about mitigation. The unknowns worry me more.
A silly but striking example: LLMs have a certain style—rhythm, diction, maybe an overuse of em dashes. Real people have picked that up. Enough people talking to the same model causes societal-scale behavior change. I didn’t think ChatGPT would make people use more em dashes, but here we are—it shows unknown unknowns in a brave new world.
Interviewer: Considering technology changes human behavior in unpredictable ways, why shouldn’t the internal moral framework be totally transparent? I think this is a religion—something we assume is more powerful than people and to which we look for guidance. The beauty of religions is they have a transparent catechism.
Sam Altman: We try to do that with the model spec—so you can see how we intend the model to behave. Before that, people fairly said, “I don’t know what the model is trying to do—is this a bug or intended?” We write a long document about when we’ll do X, show Y, or refuse Z because people need to know.
Interviewer: Where can people find a hard answer to your preferences as a company—the ones transmitted to the globe?
Sam Altman: Our model spec is the answer, and it’ll become more detailed over time. Different countries, different laws—it won’t work the same way everywhere, but that’s why the document exists and will grow.
Interviewer: Deepfakes: will this make it impossible to discern reality from fantasy, requiring biometrics and eliminating privacy?
Sam Altman: We don’t need, and shouldn’t require, biometrics to use the technology. You should be able to use ChatGPT from any computer. People are quickly learning that realistic calls or images can be fake; you have to verify. Society is resilient; we’ll adapt.
I suspect we’ll see things like cryptographically signed messages for presidents or other officials. Families might have rotating code words for crisis situations. People will, by default, not trust convincing-looking media and will build new mechanisms to verify authenticity.
Interviewer: But won’t that require biometrics—especially for everyday e-commerce and banking?
Sam Altman: I hope biometrics don’t become mandatory. There are privacy-preserving biometric approaches I like more than centralized data collection, but I don’t think you should have to provide biometrics to fly or to bank. I might personally prefer a fingerprint scan for a Bitcoin wallet over giving lots of information to a bank—but that should be my decision, not a mandate.
Interviewer: I appreciate it. Thank you, Sam.
Sam Altman: Thank you.
And it looks like Tucker Carlson and yours truly are not the only people who think Sam looks and sounds guilty as hell.
Check this out:
FULL TRANSCRIPT:
Holy smokes. I think it is possible Sam Altman is actually a murderer.
Yeah, this is not to be hyperbolic here. Sam Altman was literally just interviewed by Tucker Carlson, and as somebody who has trained in police interrogations, I see at least seven red flags in this interview.
And it’s scary. I mean, we are talking classic deflection, classic false dichotomies, classic appeals to emotion. These are things that guilty people do in interrogations. Now, it is possible—because we always have to presume people innocent until proven guilty—it is possible that the CEO of the largest (not necessarily greatest, but largest and most popular) AI company in the world is a murderer for the sake of keeping that company the largest and most popular in the world.
Now, that is quite scary, but I want to be clear: that is my opinion based solely on this interview, so my opinion could be wrong. Please don’t sue or kill me, bro. Okay, let’s get into my opinion and why I have it. Again, as somebody who has trained in police interrogations, I will give you the rationale for what I see, and I will play the entire thing, because it is a shocking devolution of potentially one of the most powerful people in America.
Tucker Carlson: So, you had complaints from one programmer who said you guys were basically stealing people’s stuff, not paying them, and then he wound up murdered. What was that?
Sam Altman: Also, a great tragedy. Uh, he committed suicide.
Tucker Carlson: Do you think he committed suicide?
Sam Altman: I really do. This was like a friend of mine. This is like a guy that—not a close friend—but this is someone that worked at OpenAI for a very long time. I was really shaken by this tragedy. I spent a lot of time trying to read everything I could, as I’m sure you and others did too, about what happened.
The first thing I want you to notice is how he establishes it right away: this was a suicide, absolutely, and this was a close friend. I want you to start with that, because what you’re going to see when we get about 60% of the way through this interview is Sam Altman, who seems to be looking for the answers at the top of his shoes… but let’s not dwell on that, because yes, body language is a clue, but it is generally only a clue when we take it together with other factors, which you’re going to get plenty of.
But I want you to see again quickly how fast Sam Altman was to say, “Oh, absolutely. This was a suicide.” Because you’re going to see a complete U-turn on that.
He committed suicide. I really do. Have you looked? This was like a friend of mine.
You see how confident he is and then it immediately turns into how could you ask or question this? “This was a friend of mine.” Friends don’t kill friends. This is beginning to turn into an appeal to emotion. We don’t necessarily even have to call this a red flag yet. Let’s get into the fun stuff.
Sam Altman: It looks like a suicide to me.
Tucker Carlson: Why does it look like a suicide?
Sam Altman: It was a gun he had purchased. This is like gruesome to talk about, but I read the whole medical record.
Sam Altman: Does it not look like one to you?
Tucker Carlson: No, he was definitely murdered. I think there were signs of a struggle. Of course, the surveillance camera wires had been cut. He had just ordered takeout food, come back from a vacation with his friends on Catalina Island. No indication at all that he was suicidal. No note, no behavior. He had just spoken to a family member on the phone.
And then, mind you, something very clever just happened here, and you may not have noticed it. Sam Altman did something that is very commonly done in interrogations, and it’s called shifting the burden of proof. It is a style of logical fallacy. It’s, “Hey, well, I mean, I think it’s a suicide, don’t you?”
And it’s when you get uncomfortable and you get a little nervous. “Hey man, well, why would you doubt me?” And now, wow—for 30 seconds, we’re looking at Tucker Carlson laying out why he thinks this was clearly a murder.
Tucker Carlson: He’s found dead with blood in multiple rooms. So that’s impossible. Seems really obvious he was murdered.
Tucker Carlson: Have you talked to the authorities about it?
Sam Altman: I have not talked to the authorities about it.
Tucker Carlson: And his mother claims he was murdered on your orders. Do you believe that?
Sam Altman: I… Well, I’m asking. I mean, you just said it.
This is another shift of the burden of proof. He never answered the question. Shifting the burden of proof is a very large red flag in an interrogation.
Tucker Carlson: I think that it is worth looking into, and I don’t… I mean, if a guy comes out and accuses your company of committing crimes (I have no idea if that’s true or not, of course) and then is found killed and there are signs of a struggle… I don’t think it’s worth dismissing. I don’t think we should say, “Well, he killed himself,” when there’s no evidence that the guy was depressed at all.
I think if he was your friend, I would think you would want to speak to his mom.
Sam Altman: I did offer. She didn’t want to.
Tucker Carlson: So, do you feel that when people look at that and they’re like, “It’s possible that happened,” do you feel that that reflects the worries they have about what’s happening here?
Like people are afraid that this is like…
Sam Altman: I’ve done too many interviews where I’ve been accused of…
Tucker Carlson: Oh, I’m not accusing you. Oh, whoa, whoa, wait a second.
This is another classic interrogation technique by suspects. The detectives say, “Well, you know, it certainly looks like a murder, but maybe help me out here. Help me fill in the story. I’m getting people telling me this is a murder. My boss is telling me a murder. I’ve got a box full of evidence over here. People are saying this is potentially a murder. This isn’t looking very good.”
And then the suspect says, “How dare you. Are you accusing me of murder?”
Now, I’m exaggerating this, but that is the way some interrogators will actually speak. There’s a good cop, bad cop mentality.
And usually that good cop is going, “Hey, man. Come on, I’m just trying to get to the bottom of this. My boss put me on this. I want to get home to my wife and kids. I want to have a beer. I want to get over this. Just help me out. Let me fill out this report and get over this.”
This is very different from the TV bad cop light-in-your-face interrogation that we think of when we think of a TV interrogation. This is real human interrogating.
And what you find here is that Sam Altman is now not only appealing to this sort of emotion like, “Oh my gosh, how could you accuse me of this?” But this deflection of “Whoa, whoa, wait a minute. Are you trying to entrap me, Tucker Carlson? Is this gotcha journalism? Are you trying to frame me?” That is called deflection.
It is a classic red flag signal in an interrogation.
So, so far—and we’re going to see even more of these—we’ve got the shift of the burden of proof, the appeals to emotion (“This was my friend”), we’ve got the story that softens, and now we get to straight deflection.
…And what does it immediately do to Tucker Carlson?
I’m not even touching where I froze the frame because this is exactly what this appeal is designed to do. This is a logical trick. And Tucker Carlson eats it hook, line, and sinker, which is normal because you don’t want your guests to walk off or feel entrapped. You don’t want to come across as gotcha journalism. Tucker Carlson is not a police detective. He’s a journalist.
This is normal. But watch this reaction. Ready for this? I don’t even have to describe to you Tucker Carlson’s reaction here. But in case you’re just listening, his hands are up, his eyebrows are up, and his mouth is doing the “whoa, whoa, whoa.” That’s not what this is about at all. That’s exactly what Sam Altman wants.
He is uncomfortable that Tucker Carlson is listing evidence that makes Sam Altman look pretty damn bad.
Tucker Carlson: But all I’m saying is his mother says that. I don’t think a fair read of the evidence suggests suicide at all.
Keep in mind, Tucker Carlson has a blend here. I like how initially he says, “Other people are saying he was murdered, I’m just asking what other people are saying.” But Tucker Carlson puts himself into a harder spot when he says, “I don’t think he killed himself.”
That’s what does make this a little bit more confrontational. But that’s okay. Good for Tucker. This is Tucker’s style. We just don’t see that at all.
Tucker Carlson: And I also don’t understand why the authorities, when there’s signs of a struggle and blood in two rooms on a suicide—like, how does that actually happen? I don’t understand how the authorities could just kind of dismiss that as a suicide.
Before we get to Sam Altman’s reaction, it is possible, to some extent, that in a very strange way you held a gun like this and then shot yourself in the head, missed your brain… This is a very uncomfortable way to shoot yourself, by the way. Not that I would know, but just from holding my hand, this is very uncomfortable. And then somehow miss your brain and crawl around because you’re bleeding out for 10 minutes.
Now that is obviously very gruesome, but that is also exactly the scenario being described when we go through this potential account of what happened in this “suicide.”
Now, this OpenAI researcher, by the way, was exposing what could potentially be the greatest fatal flaw of OpenAI: the more accurate (and less creative) OpenAI tunes its models to be, the more they simply copy exactly what they were trained on, which would be the opposite of fair use. Instead, it would be straight-up plagiarism and copyright infringement, and it could be the basis for one of the greatest lawsuits, or potentially civil injunctions by a court, ever.
And that’s what OpenAI is deathly afraid of here. OpenAI as a company does not want to be shut down. They want to exist. And it’s not just OpenAI that depends on OpenAI continuing to exist. It’s also companies like Oracle. Oracle just committed to buying $40 billion of Nvidia chips, on top of the $30 billion of capital expenses they’re planning.
Those $40 billion of Nvidia chips are going straight to OpenAI’s new data center put together by Oracle. So, you have to realize Larry Ellison just became the richest person in the world.
Do we really think that Boeing whistleblowers just died suddenly—two of them? Do we really think there was no foul play? Maybe we do. These are separate videos and topics to discuss. But it does make us question: wait a second, there is a lot of money involved in OpenAI and chips and data centers. And it’s not just OpenAI, but it’s also all of the companies that rely on OpenAI spending more money.
Now, there are some gruesome photos of this crime scene, and they imply a struggle, a murder. And I will explain why this likely took place.
This is where it is believed that the victim was potentially brushing his teeth or whatever in the bathroom and was bludgeoned over the head, at which point we have blood here in the room that looks like it’s dripping down.
There are also smaller, fragmented dots which usually occur when there’s splatter—like a hit, rather than just a nosebleed. You have more of a spraying like an aerosol, right? Which occurs after you strike and come back. It’s messed up.
In addition to that, you don’t just get that sort of aerosol spray, you also get drops of blood that look like they were in motion. Water in motion on your windshield doesn’t look like a round drop. It usually looks like a longer oval shape because it’s moved.
We can see blood pooling in the corner of the room, and various other locations where there’s blood as well. There is what looks like pretty new damage to the drywall. A box of bullets, a relatively new gun box. The weapon wasn’t in some form of locked safe—it was just still in the original gun box.
A lot of folks here, including the mother, are saying this was a murder. Their independent investigation suggests, oh yeah, this was absolutely a murder.
Now, again, we’ll talk about why this could have been a murder in just a moment, in addition to the mess that the room was found in, which suggests: why is this vacuum cleaner knocked over? Why are things thrown around? Why does it look like things were knocked and pushed around?
There are a lot of things that make people suspicious about this room. Look at all these snacks strewn on the floor. I mean, it’s one thing if you have a sweater on the floor, but you’ve got new snacks all over. It’s odd. The trash can knocked over. Dental floss picks on the floor. It implies a struggle.
And what you really find in this Tucker Carlson interview is you don’t actually get a lot of comfort that Sam Altman doesn’t know something. In other words, it suggests he knows something.
Tucker Carlson: I think it’s weird.
Sam Altman: You understand how this sounds like an accusation?
A brilliant line by Tucker Carlson. He purposefully leaves it open-ended. “You could see how this seems, right?” What does Sam Altman immediately do? He complains that this is an accusation. That he is the victim. This is classic deflection.
Tucker Carlson: Of course. And let me just be clear once again, I’m not accusing you of any wrongdoing, but I think it’s worth finding out what happened and I don’t understand why the city of San Francisco has refused to investigate it beyond just calling it a suicide.
Sam Altman: I think they looked into it a couple of times, more than once, as I understand it. I saw that, and I will totally say when I first heard about this, it sounded very suspicious.
See, that’s interesting. At the beginning of the interview there was no, “You’re right to think this is weird.” It started with: “This was suicide.”
Watch it again. We’re at 37:49 in this and I’m going to jump back here to 34:28. Watch how quickly Sam Altman denies this.
Sam Altman: Also, a great tragedy. He committed suicide.
Tucker Carlson: Do you think he…
Sam Altman: He committed suicide.
See how definitive that answer was? “He committed suicide.” Then he softens: “He was my friend. How dare you suspect otherwise?” Then, minutes later, he’s saying, “Yeah, it sounded suspicious.”
The story is evolving just like in an interrogation.
Tucker Carlson: The mother reached out. She reached out cold. Wow. And I spoke to her at great length and it scared the crap out of me. Kid was clearly killed by somebody. That was my conclusion objectively with no skin in the game.
Yes. I immediately called a member of Congress from California and said, “This is crazy. You’ve got to look into this.” And nothing ever happened.
Sam Altman: Again, I think this is… I feel strange and sad debating this. And you are a little bit accusing me.
Tucker Carlson: This isn’t a debate.
Sam Altman: You’re accusing me. You’re accusing me.
See, what he’s trying to do is build sympathy at the same time as doubling down. Sam Altman has realized that what works against Tucker Carlson is saying, “You’re accusing me.” Because the first time Sam Altman said it, Tucker Carlson rocked back in his chair with his arms up. Altman realized, “Aha, that’s how I get him off my back.” So here he is, a third time, saying, “You’re accusing me.”
Okay, he realizes that’s the way he gets Tucker Carlson to hopefully back down because Sam Altman is really uncomfortable.
Which, in fairness, if somebody asked you in a studio, “Why’d you kill that person?” you’d be like, “What? I didn’t kill anyone.” You’d start wondering, what’s everybody going to think about this? Oh my gosh. What if somebody actually does think I’m a murderer?
But when I stitch all of this together…
Sam Altman: This was a wonderful person and a family that is clearly struggling. I think his memory and his family deserve to be treated with a level of respect and grief that I don’t quite feel here.
False dichotomy. He’s basically saying: either you’re accusing me of murder, or you’re disrespecting his family. That’s another red flag.
Tucker Carlson: I’m not accusing you of any involvement. What I am saying is that the evidence does not suggest suicide. For the authorities in your city to glide past that and ignore the evidence that adds up to a murder shakes the faith in our system’s ability to respond to the facts.
Sam Altman: After the first set of information came out, I was really like, “Man, this doesn’t look like a suicide.”
Interesting. Once again, it starts with: “This is a suicide, convinced it’s a suicide.” Then: “Yeah, you’re right, it was suspicious.”
Did you hear? But then after the second report came out with more detail, I was like, “Okay…”
What changed your mind? You don’t get that confident without being able to say what convinced you. He can’t explain it. Instead, he looks down at his shoes for answers.
Tucker Carlson: Why would the security camera wires be cut? And how did he wind up bleeding in two rooms after shooting himself? And why was there a wig in the room that wasn’t his?
Has there ever been a suicide where there’s no indication at all, the person was suicidal, who just ordered takeout food? I mean, who orders DoorDash and then shoots himself? Maybe. I’ve never heard of anything like that.
Sam Altman deflects again. “People do order food before they die.” Come on.
Incredible tragedy.
So here’s the quick TLDR: when you see shifting the burden of proof, appeals to emotion, changing stories, repeated deflections, false dichotomies, doubling down on sympathy, and evasive body language—when you see all of those together in one interview—you have red flags.
I’m not saying Sam Altman is guilty. But this interview was very suspicious. And in my opinion, it increases the probability that he may have knowledge of, or involvement in, this situation.
The researcher who died was profiled in the New York Times on October 23, 2024. He also posted an article the same day: “When Does Generative AI Qualify for Fair Use?” A month later, he was dead.
His core argument: the less creative AI is tuned to be, the more it simply copies its training data, which is essentially plagiarism. This was exposing OpenAI’s greatest flaw.
And now, that person is dead.
Are we sure it was murder? No. It’s still possible it was suicide. But the cut wires, the wig, the blood in multiple rooms, the struggle—all of that looks very different.
And when you pair it with Sam Altman’s interrogation-style answers and contradictions, it leaves a lot of people very uneasy.
Let me know what you think.