Did American astrophysicist Neil deGrasse Tyson really 'admit' the Earth is flat?

It’s flat.
The words sound jarring coming from Neil deGrasse Tyson, a man who has spent decades dismantling pseudoscience and explaining the cosmos with unwavering clarity. In a viral clip from his StarTalk YouTube channel, he says, “Lately, I have been doing calculations as well as looking back at old NASA footage and raw data from satellites hovering above Earth. And I just can’t escape the conclusion that the Earth might actually be flat.”

Except it isn’t him.

Moments later, the real Tyson appears on screen, holding up a phone playing the same video. “That’s not me,” he says evenly. “It was never me. Those aren’t my words.” The clip is a deepfake, an AI-generated fabrication indistinguishable from the real thing.

It’s fitting, in a way. Tyson’s voice and likeness have become a staple of the internet’s science-adjacent culture and its hyper-stimulated content formats, stitched into split-screen videos, layered over Roblox gameplay loops, and engineered to keep the doomscrolling masses from ever scrolling away. His credibility, once a safeguard against misinformation, now makes him its most convincing vessel, an unwilling participant in an era where truth itself can be forged, remixed, and repackaged.

Neil gets deepfaked

The surprising declaration, and the AI-generated clip that sparked it, appeared during a recent episode of StarTalk, Neil deGrasse Tyson’s YouTube show. The video, titled “It’s Getting Harder To Know What’s Real,” features Alexandru Cosoi, Chief Security Strategist at Bitdefender, who leads the company’s cyber-intelligence team in darknet investigations, post-breach forensics, and international cybercrime prevention.

Together, they discuss how artificial intelligence can now clone a person’s face, voice, and cadence with startling accuracy, and the growing challenge of distinguishing parody from manipulation in the digital age.

“I didn’t think much about deepfakes, until I got deepfaked,” Neil deGrasse Tyson admits.

At first, he didn’t see the harm. “The early stuff is fine if it’s parody,” he says. “One of my favorite examples is when I was ‘babyified’ in a real conversation I had with Theo Von on his podcast. You’re not thinking to yourself, ‘Did Neil actually become a baby to do this?’ Because it’s parody. It’s one of the most cherished means of expression we have in the United States.”

But that line, between parody and deception, is fast disappearing. “When you do this and the viewer doesn’t know it’s parody, then you’re crossing a line,” he says.

He’s seen his likeness repurposed for fabricated science scripts written by others, the deepfake Tyson earnestly delivering false explanations in his voice. “Some of them try to spread more science through my persona,” he says. “But often, the science is wrong.”

Even his friends have been fooled. A convincing video of Tyson narrating a grand theory about a Type III civilization, set to the Interstellar soundtrack, led actor Terry Crews to message him in admiration, only to learn it wasn’t real.

“I’m flattered that people want to put me into content in ways that attract audiences,” Tyson says. “But if it’s fooling people, and they’re not thinking, ‘Oh, this is parody’ or ‘This is just for fun,’ then it violates the integrity we’ve worked so hard to build. Something’s got to be done about that. And something will.”

The stakes of political deepfakes

“Of course, a science video or a celebrity deepfake may not have the same global consequences as a political one that affects peace or stability,” Cosoi notes.

He recalls the early months of the Russia–Ukraine war, when a hacked Ukrainian TV station broadcast a fabricated video of President Zelenskyy announcing a surrender to Russia, followed by another showing Vladimir Putin declaring, “We’re finally getting to peace.”

“They weren’t technically very good; Zelenskyy’s head looked slightly larger than normal. But people with limited internet access or few media options might still believe it,” Cosoi explains. Zelenskyy later had to appear on video himself to confirm the clip was fake.

Similar tactics have surfaced during election campaigns. Deepfakes depicting politicians taking bribes or discussing wars have been released just before polling days, when candidates are legally barred from responding, tilting public sentiment at the last moment.

Scams to watch out for

Cosoi says the same technology now powers a darker trade: scams that mimic loved ones, bosses, or entire virtual meetings.

“Scamming isn’t new,” he says. “But with AI in the hands of bad actors, it’s been taken to another level.”

He outlines the main types:

  • Romance or investment scams, where fraudsters build trust over chat before persuading victims to invest.
  • Business email compromise scams, such as a Hong Kong case where a worker was tricked into transferring $25 million during a deepfaked video call with fake ‘executives’.
  • Family or ‘relative in distress’ scams, using cloned voices to mimic children or parents pleading for money.

Asked by Tyson how people can protect themselves, Cosoi admits the defences are limited. “I stopped answering unknown calls,” he says. “In the past year, almost every one has been a scammer.”

Still, there are new tools on the horizon. AI “honeypots” such as Scamo now engage scammers to waste their time and collect data, helping improve detection systems. Researchers are also developing technology that can analyse videos, images, and audio, not only to assess how fake something is but to highlight which parts were altered.
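To make that last idea concrete, here is a minimal sketch of how the output of such a tool might be used: given per-frame “fake” probabilities from some detector, consecutive suspicious frames are grouped into flagged spans. Everything here is hypothetical, the `flag_altered_spans` helper, the threshold, and the example scores; it illustrates the general pattern, not any specific product Cosoi describes.

```python
from typing import List, Tuple

def flag_altered_spans(
    frame_scores: List[float],   # per-frame "fake" probabilities from some detector
    threshold: float = 0.8,      # scores at or above this are treated as suspicious
) -> List[Tuple[int, int]]:
    """Group consecutive suspicious frames into (start, end) index spans."""
    spans: List[Tuple[int, int]] = []
    start = None
    for i, score in enumerate(frame_scores):
        if score >= threshold and start is None:
            start = i                      # a suspicious run begins
        elif score < threshold and start is not None:
            spans.append((start, i - 1))   # the run just ended
            start = None
    if start is not None:                  # run extends to the final frame
        spans.append((start, len(frame_scores) - 1))
    return spans

# Hypothetical detector output for a nine-frame clip:
scores = [0.1, 0.2, 0.9, 0.95, 0.85, 0.3, 0.1, 0.92, 0.9]
print(flag_altered_spans(scores))  # [(2, 4), (7, 8)]
```

A real system would work on pixels and audio rather than ready-made scores, but this “score, threshold, localise” loop is the general shape of per-segment flagging.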

“It’s a race,” Cosoi concludes, “between how fast we can build detection, and how fast deepfakes can evolve.”

Are we losing against deepfakes?

Deepfake technology, once a parlour trick for internet pranksters, has become one of the most disruptive forces shaping how truth circulates online. Built on deep learning, it uses artificial intelligence to generate uncanny audio, video, and imagery, making people appear to say or do things that never happened.
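For readers curious about the mechanics, the classic face-swap recipe behind early deepfakes pairs one shared encoder with a separate decoder per identity: both decoders learn to reconstruct faces from the same latent space, so decoding person B’s frame with person A’s decoder paints B’s expression onto A’s face. The PyTorch sketch below is a drastically simplified toy version of that idea; the layer sizes and 64x64 input are arbitrary assumptions, and real systems are far larger and trained on thousands of frames.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 RGB face into a shared latent feature map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the shared latent space; one decoder per identity."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a = Decoder()  # in training, sees only person A's faces
decoder_b = Decoder()  # in training, sees only person B's faces

# The swap: encode a frame of person B, decode it with person A's decoder.
face_b = torch.rand(1, 3, 64, 64)      # stand-in for a real video frame
swapped = decoder_a(encoder(face_b))   # B's expression rendered as A's face
print(swapped.shape)                   # torch.Size([1, 3, 64, 64])
```

The swap only works because both decoders are forced through the same encoder; that shared bottleneck is what lets one person’s performance drive another person’s face.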

Consumer apps have only accelerated the trend. Platforms like Sora have democratised deepfake creation, putting the technology in the hands of millions, and fooling countless Facebook mums in the process. What began as a playful novelty for the tech-curious has evolved into a production line of deception, churning out synthetic faces and false realities that even good old common sense can’t detect.

Tyson, meanwhile, has become one of its most recognisable victims, the archetype of a deepfaked intellectual. “Will there come a time when deepfake AI becomes so good that no tool can detect it, rendering these defences useless?” he asked.

Maybe. One day, a deepfake might be more appealing to a person than the truth, even if detection tools say it’s fake. People might say, “No, no, this has to be true.”

For Tyson, that’s already the reality. He’s watched digital versions of himself hawk everything from sneakers to soft drinks, and deliver pseudoscientific sermons he never wrote. “Let me be clear,” he said. “I have never, and will never, do that. If you see me endorsing something, it’s not me. It’s a deepfake. Pure and simple.”

Tyson, as ever, resists instruction. “I don’t tell you what to do,” he said. “Except for one thing I do tell you every single day, and you know what that is? Just look up.”