How AI chatbots are changing how we write and who we trust

OpenAI logo displayed on a phone screen and ChatGPT website displayed on a laptop screen are seen in this illustration photo taken in Krakow, Poland on December 5, 2022. (Photo by Jakub Porzycki/NurPhoto via Getty Images)

ChatGPT is one of the most sophisticated AI chatbots ever released. With just a few prompts, it can write almost anything.

And in some cases, it can write better than a human.

“I got a draft from a student, and there was a paragraph of eight sentences, and it was a mess,” high school teacher Daniel Herman says. “And I took that paragraph, and I put it into ChatGPT, and ChatGPT made it shine. It kept this student’s words. It just made them more clear.”

But it’s also been used as a tool for hacking:

“In different darknet and hacking communities we monitor, there are cyber criminals already talking about it, using it, sharing their knowledge about it,” Sergey Shykevich, threat intelligence group manager, says.

Today, On Point: What ChatGPT is, and how it works. Will it change how much you can trust what you read, or hear? For example… who actually wrote these words?

Guests

Sarah Myers West, managing director of the AI Now Institute, which studies the social implications of artificial intelligence. (@sarahbmyers)

Gary N. Smith, professor of economics at Pomona College. Author of several books, including The AI Delusion. (@StandardDevs)

Also Featured

Daniel Herman, English teacher at Maybeck High School in Berkeley, California.

Beatrice Nolan, junior reporter at Business Insider. (@beafreyanolan)

Sergey Shykevich, threat intelligence group manager at Check Point.

Interview Highlights

What is ChatGPT?

Sarah Myers West: “In order to understand what ChatGPT is, we kind of have to take a step back and look at what artificial intelligence is, broadly speaking. It’s a term that gets thrown around a lot. We’ve seen it crop up in movies, but there’s a lot going on there, and let’s try and unpack that first. So AI as a field has been around for almost 80 years now, and it’s meant really different things over the course of those 80 years. But right now, what AI really refers to is largely a set of data-centric technologies that need two things in order to work well.

“One is massive, massive amounts of data, and then a lot of computational power in order to process that data. What it doesn’t mean is anything that’s, you know, a really meaningful replication of human intelligence. So a tool like ChatGPT is also referred to as a large language model. It’s, you know, looking at large amounts of human-generated text. In the case of chatbots, this is mostly text that’s been scraped from the Internet, websites like Wikipedia and Reddit. And then there have been humans involved in training it, in looking at what text is going to feel real to us, what replicates the nature of conversational speech. And so there are probably hundreds or thousands of hours of human-involved training to teach ChatGPT to reflect back to us things that we’re going to think look real.”

On how ChatGPT works

Sarah Myers West: “It’s not going to know anything that a human hasn’t written out there on the Internet somewhere. But what it’s doing is generating text based on observed patterns in huge, huge amounts of text pulled from the Internet. And so it’s designed to talk to us.

“You know, I remember a version of this that I played around with when I was a teenager, called SmarterChild on AOL Instant Messenger, where you would type in text and it would give you back sort of auto-generated text. This is an instantiation in a very long history of similar technologies. It’s really great at doing so in a way that feels very real to us. And one of the reasons for that is just there’s way more data being used to train it.”
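To make the idea of “generating text based on observed patterns” concrete, here is a deliberately toy, hypothetical sketch in Python. It is not how ChatGPT itself works: a real large language model does something loosely analogous with a neural network and vastly more data, but the sketch shows the basic notion of learning word statistics from text and sampling new text from them.

    # Toy illustration: learn which word tends to follow which in a tiny
    # "training" text, then generate new text by sampling likely next words.
    # Hypothetical example only; models like ChatGPT use neural networks
    # trained on enormous corpora, not simple word-pair counts.
    import random
    from collections import defaultdict

    training_text = "the cat sat on the mat and the cat ate the fish"

    # Count which words follow which in the training text.
    follows = defaultdict(list)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word].append(next_word)

    # Generate new text by repeatedly sampling a plausible next word.
    word = "the"
    output = [word]
    for _ in range(8):
        if word not in follows:
            break
        word = random.choice(follows[word])
        output.append(word)

    print(" ".join(output))  # e.g. "the cat ate the mat and the cat"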

What is OpenAI?

Sarah Myers West: “It’s a company that was founded by Sam Altman. … And the company is really focused on developing general purpose AI, or artificial general intelligence is another way of phrasing that. It’s sort of AI systems that could be used for any number of different purposes. They’ve released DALL·E, which is more image based. But largely, they all kind of work in a similar way, which is huge amounts of data and an API that lets you interact with that data in a variety of ways.”
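For readers curious what “an API that lets you interact with that data” looks like in practice, the sketch below shows roughly how a developer might call such a service from Python. It assumes OpenAI’s official openai client library and an API key in the OPENAI_API_KEY environment variable; the model name and prompt are purely illustrative.

    # Hypothetical sketch of calling a hosted language model through an API.
    # Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{
            "role": "user",
            "content": "Write an ode to dirty socks in the style of Shakespeare.",
        }],
    )
    print(response.choices[0].message.content)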

On costs and benefits of ChatGPT

Gary N. Smith: “The benefits … one thing is entertainment. I mean, you write an ode to dirty socks in the style of Shakespeare, and you get an entertaining answer. But for what other purpose would you want to generate unreliable prose? What benefit could there be from that? And on the cost side, I’ve got three. You and Dr. West talked about the electricity and the water, and those costs. But there’s also a huge opportunity cost. You remember the quip from a Facebook guy: ‘The best minds of my generation are trying to figure out how to get you to click on an ad.’

“And here, I don’t know if it’s the best minds, but it’s certainly very, very smart people working really, really hard to generate unreliable B.S. They could be doing something far more useful than this, and instead they’re not. And partly because there’s a profit motive, there’s going to be a bubble in this stuff, not just Microsoft investing. There are going to be dozens, hundreds of startups that, you know, fake it till they make it and claim they can do all sorts of amazing stuff with large language models that in fact they can’t do. Yeah, because they’ve got this fundamental problem that they can’t tell truth from falsity.”

On the EU AI Act

Sarah Myers West: “The EU AI Act is a set of regulations currently under discussion, and one of the big, big topics there is whether or not these general-purpose systems would be included under the Act. Currently, that would mean needing to have some baseline guardrails around cybersecurity, around risk management, around transparency and accuracy. And so I think watching that space is one key place to keep tracking this conversation as it unfolds.”

This article was originally published on WBUR.org.

Copyright 2023 NPR. To see more, visit https://www.npr.org.