
Meet Sora: AI-created videos test public trust

Michael Dwyer/AP/File
The OpenAI logo is displayed on a cellphone with an image on a computer monitor generated by ChatGPT, Dec. 8, 2023. OpenAI is now diving into the world of artificial intelligence-generated video with its new text-to-video generator tool, Sora.

In a world where artificial intelligence can conjure up fake photos and videos, it’s getting hard to know what to believe.

Will photos of crime-scene evidence or videos of authoritarian crackdowns, such as China’s Tiananmen Square or police brutality, pack the same punch they once did? Will trust in the media, already low, erode even more?

Such questions became more urgent earlier this month when OpenAI, the company behind ChatGPT, announced Sora. This AI system allows anyone to generate short videos. There’s no camera needed. Just type in a few descriptive words or phrases and, voilà, they turn into realistic-looking, but entirely computer-generated, videos.

Why We Wrote This

OpenAI’s Sora, a text-to-video tool still in the testing phase, has set off alarm bells, threatening to widen society’s social trust deficit. How can people know what to believe when they “can’t believe their eyes”?

The announcement of Sora, which is still in the testing phase, has set off alarm bells in some circles of digital media.

“This is the thing that used to be able to transcend divisions, because the photograph would certify that this is what happened,” says Fred Ritchin, former picture editor of The New York Times Magazine and author of “The Synthetic Eye: Photography Transformed in the Age of AI,” a book due out this fall.

“The guy getting attacked by a German shepherd in the Civil Rights Movement was getting attacked. You could argue, were the police correct or not correct to do what they did? But you had a starting point. We don’t have that anymore,” he says.

Technologists are hard at work trying to mitigate the problem. Prodded by the Biden administration, several big tech companies have agreed to embed technologies that help people tell the difference between AI-generated photos and the real thing. The legal system has already grappled with fake videos of high-profile celebrities. But the social trust deficit, in which large segments of the public disbelieve their governments, courts, scientists, and news organizations, could widen.

“We need to find a way to regain trust, and this is the big one,” says Hany Farid, a professor at the University of California, Berkeley, and a pioneer in digital forensics and image analysis. “We’re not having a debate anymore about the role of taxes, the role of religion, the role of international affairs. We’re arguing about whether two plus two is four. ... I don’t even know how to have that conversation.”

While the public has spent decades struggling with digitally manipulated photos, Sora鈥檚 video-creation abilities represent a new challenge.

“The change is not in the ability to manipulate images,” says Kathleen Hall Jamieson, a communication professor and director of the Annenberg Public Policy Center at the University of Pennsylvania. “The change is the ability to manipulate images in ways that make things seem more real than the real artifact itself.”

The technology isn’t there yet, but it is intriguing. In sample videos shared by OpenAI, puppies playing in the snow look real enough, three gray wolf pups morph into a half-dozen as they frolic, and an AI-generated “grandmother” blows on birthday candles that don’t go out.

While the samples were shared online, OpenAI has not yet released Sora publicly, except to a small group of outside testers.

Reuters/File
A green wireframe model covers an actor's face during the creation of a synthetic facial reanimation AI video, known as a deepfake, in London Feb. 12, 2019.

A boon to creative minds

The technology could prove a boon to artists, film directors, and ad agencies, offering new outlets for creativity and speeding up the process of producing human-generated video.

The challenge lies with those who might use the technology unscrupulously. The immediate problem may prove to be the sheer number of videos produced with the help of generative AI tools like Sora.

“It increases the scale and sophistication of the fake video problem, and that will cause both a lot of misplaced trust in false information and eventually a lot of distrust of media generally,” Mark Lemley, law professor and director of the Stanford Program in Law, Science and Technology, writes in an email. “It will also produce a number of cases, but I think the current legal system is well equipped to handle them.”

Such concerns are not limited to the United States.

“It’s definitely a world problem,” says Omar Al-Ghazzi, professor of media and communications at the London School of Economics. But it’s wrong to think that the technology will affect everyone in the same way, he adds. “A lot of critical technological research shows this, that it is those marginalized, disempowered, disenfranchised communities who will actually be most affected negatively,” particularly because authoritarian regimes are keen to use such technologies to manipulate public opinion.

In Western democracies, too, a key question is, who will control the technology?

Governments can’t properly regulate it anytime soon because they don’t have the expertise, says Professor Hall Jamieson of the Annenberg Public Policy Center.

Combating disinformation

The European Union has enacted the Digital Markets and Digital Services acts to combat disinformation. Among other things, these acts set out rules for digital platforms and protections for online users. The U.S. is taking a more hands-off approach.

In July, the Biden administration announced that OpenAI and other large tech companies had voluntarily agreed to use watermarking and other technologies to ensure people could detect when AI had enhanced or produced an image. Many digital ethicists worry that self-regulation won’t work.

“That can all be a step in the right direction,” says Brent Mittelstadt, professor and director of research at the Oxford Internet Institute at the University of Oxford in the United Kingdom. But “as an alternative to hard regulation? Absolutely not. It does not work.”

Consumers also have to become savvier about distinguishing real from fake videos. And they will, if the Adobe Photoshop experience is any guide, says Sarah Newman, director of art and education at Berkman Klein Center’s metaLAB at Harvard, which explores digital art and humanities.

Three decades ago, when Photoshop began popularizing the idea of still photo manipulation, many people would have been confused by a photo of Donald Trump kissing Russian President Vladimir Putin, she says. Today, they would dismiss it as an obvious fake. The same savvy will come in time for fake videos, Ms. Newman predicts.

Photojournalists will also have to adapt, says Brian Palmer, a longtime freelance photographer based in Richmond, Virginia. “We journalists have to give people a reason to believe and understand that we are using this technology as a useful tool and not as a weapon.”

For more than 30 years, he says, he’s been trying to represent people honestly. “I thought that spoke for itself. It doesn’t anymore.” So, a couple of months ago, he put up on his website a personal code of ethics, which starts, “I do not and will not use generative artificial intelligence in my photography and journalism.”
