Sorting real from fake in a world where AI can create new content

Jevin West, co-founder of the University of Washington's Center for an Informed Public, poses for a picture during MisinfoDay, an event West created to help high school students identify and avoid misinformation, Tuesday, March 14, 2023, in Seattle. Educators around the country are pushing for greater digital media literacy education.
Manuel Valdes
The voice you hear on the radio. The picture attached to this story. The text you're reading this very moment.

We're living in a world in which artificial intelligence could create all of these things out of thin air, no humans necessary. (In this case, it didn't. Everything at KNKX, for now, is 100% human-made.)

This technology, known as "generative AI," can write everything from poetry to scientific papers to computer code. It can generate photo-realistic human faces, make videos, and compose music that sounds eerily like the work of Drake and The Weeknd. You may have already played with ChatGPT or ordered DALL·E to make you a picture.

So how do we know what's real anymore?

That question concerns Jevin West, who researches misinformation at the University of Washington and co-founded the school's Center for an Informed Public. He also co-teaches a class on "calling" what, for the purposes of this public radio interview, he politely calls "B.S."

"My biggest concern," West said of generative AI models, "is that they're basically B.S.-ers at scale. And that will make it more and more challenging to tell what's true or not online, which is already challenging."

West spoke with KNKX special projects reporter Will James, who reports on ideology, democracy, and information.

Listen to their conversation above, or read selected quotes below.

Key Takeaways

On his biggest concerns:

I think the things that I'm worried about are the increase in scamming — that it's going to become easier and easier for scammers to do the awful work that they do.

And then, on the political side and the propaganda side, you can flood public commentary around policies that are being discussed at the federal level and the state level, and that will make it harder and harder for fact checkers to do their work, and journalists to do their work, to figure out who's real and who's not. And there are a lot of discussions right now about how to overcome that.

I think we will hopefully be able to overcome that with things like watermarking of imagery and text and trying to find ways to guarantee that whoever's producing the content is actually the person they say they are.

On how AI can generate overwhelming amounts of fake content, including social media posts:

We'll have to find ways to penalize this kind of behavior, to try to find ways to detect it. But it's coming. The nightmare scenario, of course, is an authoritarian government using this technology to shift public opinion more easily than they already are with current technology.

It's not that unfathomable to think that that could happen with this, if you can flood everything that everyone's seeing and you build a big narrative that fits whatever narrative you want to push out. This is moving so fast and it's concentrated in a very small number of hands. And even if they have good intentions, there are things that they may not account for.

On parallels with the advent of social media:

Social media is something we can look to as an example of a technology that we didn't know what the implications were.

We certainly saw some good and then we saw some bad. But it's really transformed society and, in many ways, I think not all for the better. And I think if we had the ability to go back in time, pump the brakes a little bit there, and think about it a little bit more, maybe there would be a little bit less harm when it comes to the addictions that people have and the ways in which disinformation, misinformation have passed so easily on these kinds of platforms. And so we have a similar challenge ahead of us now.

And I don't want us to go run into a cave and hide and never use technology again. It can do wonderful things. But I do think that we need some panels. We need people in government to know how this technology works. I don't think this is a situation where we should just let tech companies take care of it for us.

On having the right level of skepticism online:

We don't want everyone to be so cynical that they don't believe anything that's out there. And that's scary. We want people to have just the right amount of skepticism. Although, what is that amount? That's the challenging part.

We already have a very big challenge ahead of us trying to figure out what's true and what's not online, because humans create a lot of garbage, too. So that is something that's not new to us. It's just that you can do it faster now with this technology. You can scale it, you can do it cheaply.

And so I think one of the things that we really encourage people to do, regardless of whether this chat bot era existed or not, which it does now, is just to use some of the same kinds of tools. If it sounds too good to be true, if it sort of elicits some emotional reaction, if it is confirming, sometimes you got to be a little bit more cautious.

We always say, 'Think more, share less.' And so spend a little more time, before you share something, doing a little bit of work of investigating.

Will James reports on ideology, covering how information spreads, how beliefs form, and how those beliefs interact with the real world.