Ask a question of ChatGPT or another, similar chatbot and there’s a good chance you’ll be impressed at how adeptly it comes up with a good answer — unless it spits out unrealistic nonsense instead. Part of what’s mystifying about these kinds of machine learning systems is that they are fundamentally black boxes. No one knows precisely how they arrive at the answers that they do. Given that mystery, is it possible that these systems in some way truly understand the world and the questions they answer? In this episode, the computer scientist Yejin Choi of the University of Washington and host Steven Strogatz discuss the capabilities and limitations of chatbots and the large language models, or LLMs, on which they are built.
Listen on Apple Podcasts, Spotify, TuneIn or your favorite podcasting app, or you can stream it from Quanta.
Transcript
[Theme plays]
STEVEN STROGATZ: Going back to at least the 1960s, computer scientists have been dreaming about brain-inspired computers that might someday demonstrate human-like intelligence. With the rise of the internet, the availability of enormous textual data sets, and impressive advances in computational power, we’ve reached a momentous point. Large language models, or LLMs, can often seem to wield something close to human intelligence, at least to us non-experts.
And the release of OpenAI’s ChatGPT in 2022 helped these LLMs make their mark in the headlines, in the workplace, and in dinner-table conversations. But there’s still a telltale sign that large language model intelligence is truly artificial: their lack of common sense, which can emerge in spectacular and sometimes hilarious ways in the mistakes they make.
I’m Steve Strogatz and this is “The Joy of Why,” a podcast from Quanta Magazine, where I take turns at the mic with my cohost, Janna Levin, exploring some of the biggest questions in math and science today.
In this episode, we’re going to be speaking to computer scientist Yejin Choi about the architecture and capabilities of large language models, and speculate about whether artificial intelligence — AI — will ever gain common sense.
[Theme fades out]
Yejin Choi is a professor and the chair of computer science at the University of Washington, where she researches statistical approaches and computational models for natural language processing. She was recognized as a 2022 MacArthur Fellow and named one of Time Magazine’s 100 Most Influential People in AI in 2023.
Yejin, thanks so much for joining us here on “The Joy of Why.”
YEJIN CHOI: Thank you for having me. I’m excited to be here.
STROGATZ: Great. Well, this is going to be so much fun. I’m so fascinated, as I know so many people are, by what’s going on these days in AI. And so while I was preparing for this chat, just as a little bit of a joke, I was curious. I asked ChatGPT, a chatbot built on one of these large language models, “Do you understand?”
And it replied, “As an AI, I don’t possess consciousness or subjective understanding in the way that humans do. While I can generate responses that seem like understanding, it’s important to recognize that this understanding is mechanistic and based on statistical patterns rather than true comprehension.” What do you think about that answer from ChatGPT?
CHOI: It sounds like someone coached ChatGPT to say the right thing.
STROGATZ: That’s funny, that it was coached? You don’t think it would come up with that on its own?
CHOI: We don’t know for sure. I mean, this is a black box model, where what sort of data it was trained on is opaque. And in fact, the recipe known in the field does use human-written examples of a particular style of language, the lawyer-like language that ChatGPT distinctly uses, provided by humans as, you know, good examples to follow.
So, it is not the case that ChatGPT, after reading the raw internet, suddenly speaks like that. It’s because of the post-training that coaches ChatGPT to speak in safer, more politically correct and careful ways.
STROGATZ: I get it, sure. No, that makes sense. There’s so much at stake for these companies. And of course, people could really be harmed, I suppose, if they take some of these responses too literally. I noticed that all the different chatbots have disclaimers now on the front page that these can make mistakes. You need to check them. I mean, clearly the lawyers have had their, as you say, “coaching sessions” with these bots.
CHOI: Totally.
STROGATZ: OK. But so maybe this is a good point for us to back out, just for a second, to say that these large language models, which most of us know through products like ChatGPT or Google’s Gemini or Microsoft Copilot or Anthropic’s Claude, are just one type of artificial intelligence. And so, since our listeners may have been hearing words — as I think we all have — like machine learning, large language models, AI, neural networks… Could you help us just distinguish…? Are some of those subsets of the others? How should we think about those kinds of terms?
CHOI: So these are all fairly broad terms that definitely have their own distinct definitions, but they overlap a lot. So machine learning, in general, is about an algorithm that teaches machines to learn some patterns between input/output pairs. And then artificial intelligence is perhaps, arguably more broadly, about computational forms of intelligence that can do certain operations. But that could be using machine learning, or it could be using just algorithms, inference algorithms. So “neural network” is just like one type of machine learning algorithm that is currently the most popular, probably.
So, for example, the computational chess players, in the earlier forms of AI, they were just inference algorithms, whereas the more modern version of that would be in the forms of a neural network that is a form of machine learning.
STROGATZ: Good. That’s very helpful. And so what are some of the capabilities and also limitations of these large language models?
CHOI: So, the capabilities of these large language models are phenomenal. It’s really beyond what the scientists have anticipated. Just anything that you could provide as textual input, it turns out these large language models can do quite well in answering them, even if it requires open-ended answers.
Long-text input/output turns out to be really, really great. So, not only can it do simple reasoning and multiple-choice question answering, but also any topic that you throw at ChatGPT, it will be able to answer strikingly well.
The truth is, though, it’s really a reflection of human intelligence that is shared on the internet — and the internet is vast. Humans typically are not really aware of how vast that is, because humans have limited capabilities in digesting and reading what’s out there. But the machine uniquely can really read them all, literally. So then it can mimic the sort of knowledge and wisdom that people have shared online, and in some sense read it back to you, not verbatim but rephrased. So it’s not an exact copy of what it has read from the internet; it’s able to rephrase, it’s able to synthesize, so that it sounds new enough for people.
STROGATZ: That is, as you say, phenomenal, one of the most remarkable things. You can ask it to generate college application essays or help you write a Python program. I mean, I tried one time, just for fun, to have it write a Saturday Night Live skit where Donald Trump plays a psychiatrist and he tries to, you know, give advice to his patient, but in the style of Donald Trump. And it was really funny and did something that sounded a lot like Trump. And I can’t imagine that — you know, as you say, that’s not verbatim on the internet. I don’t think Trump has ever been playing a psychiatrist. But it’s amazing the synthesis that it can pull off.
So some people have called it sort of “spicy autocomplete,” what these things do. But maybe you should tell us, why would anyone say that? What is it that these large language models are doing, fundamentally?
CHOI: It’s able to read a lot of text and learn to predict which word comes next. During training, literally, all that it does is try to predict which word comes next, but at an extreme scale. For that reason, some people dismiss large language models as a spicy autocompletion machine.
The reason why it’s not necessarily doing verbatim regurgitation of the training data, though, is because of the particularities of the technical details being used under the hood, which are not only about memorization. It’s also able to do some degree of generalization. And also there’s a randomness in the way that this text is generated out of the learned neural network, and that randomness is why the text is not always a verbatim regurgitation. But sometimes it can be, by the way. If the text was repeated often enough in the internet data, then it’s going to actually verbatim memorize it. And, you know, there were some incidents that The New York Times reported, where it was able to regurgitate some past New York Times articles.
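To make that randomness concrete, here is a minimal sketch of temperature-based sampling over next-word scores. The tiny vocabulary, the made-up scores and the function names are invented for illustration; a real LLM computes such scores with a neural network over tens of thousands of candidate tokens.

```python
import math
import random

# Hypothetical scores (logits) a model might assign to candidate next words
# for a given prefix; these numbers are invented for illustration.
def next_word_logits(prefix):
    return {"blue": 2.0, "cloudy": 1.2, "falling": 0.3}

def sample_next_word(prefix, temperature=1.0):
    logits = next_word_logits(prefix)
    # Softmax turns scores into probabilities; dividing by the temperature
    # controls randomness (near 0 is almost deterministic, higher is more varied).
    weights = {w: math.exp(s / temperature) for w, s in logits.items()}
    total = sum(weights.values())
    r = random.random() * total
    for word, weight in weights.items():
        r -= weight
        if r <= 0:
            return word
    return word  # fallback for floating-point edge cases

print(sample_next_word("The sky is", temperature=0.7))
```

Run twice, the same prefix can yield different continuations, which is one reason the output is usually not a verbatim copy of any one training text.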
STROGATZ: Oh, really? I hadn’t heard about that. I see. So it can plagiarize in that sense.
CHOI: I mean, one could say that it plagiarized. Another person could say that, well, this is a neural network being able to retrieve what it has read. But regardless, because of that, some people dismiss these machines as, “Oh, it’s just autocompletion.”
But the reason why it’s able to do something striking like, you know, discussing some topics in a Trump style that Trump may have never done before — that’s possible because these machines are capable of interpolation between two data points. The novel interpolation that nobody has ever done before is trivial for these machines. So you do get that kind of a novelty to some degree as well.
STROGATZ: Mm hmm. Well, so you’ve been mentioning training, and I think it would be great if you could explain to us a little bit in detail, what does that really mean? How would you train a large language model? Or how are these big companies that have built ChatGPT or Gemini, what do they do to train their models?
CHOI: So basically the training boils down to building an extremely large neural network that has layers and layers and layers of neurons piled up, and then feeding it internet data in sequence. And the goal of this learning process is to predict which word comes next, conditioned on the sequence of the previous words.
And what is striking is that that simple recipe of training neural networks can lead to such powerful artifacts that can do all sorts of question answering in text that comes across as a striking level of artificial intelligence for many people.
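To make that recipe concrete, here is a minimal sketch of how raw text turns into the training examples Choi describes. The whitespace tokenizer and toy sentence are simplifications (real systems use subword tokenizers over trillions of tokens), but the context-to-next-word structure is the same idea.

```python
# Turn a stream of text into (context, next word) prediction examples.
text = "the cat sat on the mat"
tokens = text.split()  # naive whitespace tokenization, for illustration only

training_pairs = []
for i in range(1, len(tokens)):
    context = tokens[:i]   # everything the model has seen so far
    target = tokens[i]     # the word it must learn to predict
    training_pairs.append((context, target))

for context, target in training_pairs:
    print(context, "->", target)
```

Every position in every document becomes one such exercise, which is how this simple recipe scales to the whole internet.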
But importantly, that kind of training is really, really different from how humans learn about the world, which we don’t fully understand. However, it’s reasonable to suspect that humans don’t necessarily try to predict which word comes next; rather, we try to focus on making sense of the world. So we tend to abstract away immediately.
You and I, by the way, are not able to remember the discussions, the interactions, the conversation we just had verbatim. We just cannot, because our brain is trained to abstract immediately. But we do remember the gist of our conversation so far, such that if you ask me the same question again, I’ll be surprised. So there’s something about the way that humans learn.
And also humans learn with curriculum and curiosity. And we make hypotheses about the world. And then if something doesn’t make sense, even children — even babies — they try to do some experiments to figure out their confusion points about simple objects, the physical knowledge about the objects that they interact with.
But machines, from day one, they’re fed with The New York Times articles. And they don’t have any say in what order they’re going to read this text, nor do they have any say about, “Oh, wait a minute, I really want to read something again. There was something really nice and curious about this particular, say, Hemingway book, that caught my attention, that I want to read more slowly.” The way that learning happens is so different, and it’s quite striking how you can bypass the normal human way of learning and still produce something that speaks human languages so well.
STROGATZ: You’ve raised so many interesting points there. For instance, when talking about babies or people at any age, really, that we have curiosity. We have desires, like we want to read or reread that Hemingway passage, or maybe there’s something we don’t like reading and we would like to skip it. So far, we have talked about artificial intelligence, but we haven’t talked about artificial emotion, right?
Like, the fact that desires so far were not really put into these machines. It seems like that might limit what they can do, given that babies and people of all ages have willpower. They have desire. They have things they wish for. Do you think that emotion is a big part of human learning that these machines are missing?
CHOI: Yeah, that’s a great point. And, in fact, it boils down to the fact that we are bio beings at the end of the day. We have desire — like a deep sense of self-identity that really makes who we are. And it’s not something we can change. We are born with this, you know, individual identity, and then we live with it. We live once and then we live with it.
Whereas AI, it’s not clear what it really is, because it just read everybody’s writing in some sense and became some average viewpoint — or, you know, a soup of thoughts and emotions — that mimics human emotion and intent, due to the human emotion and intent that humans put into their writing.
So then these machines are capable of mimicking all that. But you’re right that at the end of the day, it doesn’t really have the kind of genuine emotion that humans have. Now, whether that’s a bad thing or a good thing, that’s a philosophical question, maybe even a scientific question in terms of safety. Is it a good thing if AI really, really develops its own emotion such that, you know, it has a survival instinct? Or it wants to dominate the world? Is that a good thing or not?
STROGATZ: Well, this is, of course, something that all of us are thinking about nowadays, and we should probably save that question for a little bit later, because I want to ask a little bit more about the training. Because it is, as you say, such an inhuman thing that we ask of them. We ask them to predict the next word, given a passage. And what happens when they get it wrong?
CHOI: The way that it’s trained is, it’s supposed to maximize the probability score that it assigns to the correct word. So, if it predicted the wrong word, it means the wrong word got the higher probability. All the training does is raise the probability of the correct word that should have been predicted instead.
STROGATZ: But not by directly dealing with that word, right? I mean, I guess what I’m going for is the idea of the weights in a neural network and how there are mechanisms to change the weights.
CHOI: Right, so, perhaps I should back up a little bit and explain that there are two phases of training. One is pre-training. The second is post-training, also known as “reinforcement learning from human feedback” — that’s jargon — which actually is not just reinforcement learning, but is mixed with something known as supervised fine-tuning, or supervised learning. But anyway, by and large, there’s pre-training and post-training.
During pre-training, the learning mechanism is basically to maximize the probability score that will be assigned to the correct sequence of words, meaning the exact sequence of words that happen to be on the internet. Now, there’s really no reason why that’s the only correct sequence of words, by the way, because for any given prefix text, there can be another, different word that could be OK to say. So, the notion of “correctness” is not quite right.
But anyway, so the neural networks are trained to maximize the probability. And what does that entail, in terms of the weights that these neural networks learn? The way that these machines learn is based on what’s known as “backpropagation.” We basically take gradients of individual weights, partial gradients. So, you take the partial derivative of the loss with respect to every weight of the neural network. There are just so many weights, by the way: hundreds of billions of parameters. You take the partial derivative of each one, and then you move that weight so that it’s going to increase the probability score assigned to the particular sequence of words that was in the training data.
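For readers who want to see those partial derivatives in action, here is a minimal sketch of one pre-training update in PyTorch. The tiny vocabulary, three-word context window and single linear layer are toy stand-ins for a transformer with hundreds of billions of weights; only the mechanics (loss, backward, step) follow the standard recipe.

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, context_len = 10, 8, 3

# A toy "language model": embed each context token, flatten, score the vocabulary.
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Flatten(),
    nn.Linear(embed_dim * context_len, vocab_size),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

context = torch.tensor([[1, 4, 7]])  # ids of the three preceding words
target = torch.tensor([2])           # id of the word that actually came next

logits = model(context)                              # one score per vocabulary word
loss = nn.functional.cross_entropy(logits, target)   # large when the true word gets low probability
loss.backward()    # backpropagation: partial derivative of the loss w.r.t. every weight
optimizer.step()   # move each weight a little to raise the true word's probability
```

Repeated over every word position in the training data, on and on, this is essentially the whole pre-training loop.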
STROGATZ: Uh huh. Well, as a mathematician, I’m very happy to hear you talking about partial derivatives. But some of our listeners may not feel the same way. So, let me try an analogy. So, I like to play tennis, and I remember when I was learning tennis, sometimes, you know, the ball comes to me and I might hit a bad shot and then the tennis coach says, “You need to get your racket back earlier. You know, you weren’t prepared.” So then I make an adjustment to, I don’t know, what — I want to say “weights.” Like, I have some kind of internal representation of how important is it that my feet are in the right place, or that I’ve turned my body sideways, or I got my racket back, or I keep my eye on the ball.
I have all these different weights that I have to pay attention to, and given that this shot was bad, so to speak, I’m going to try to adjust my weights so that I’ll do it better the next time. It’s something like that, right?
CHOI: Yeah, yeah, yeah. That’s a really great analogy.
STROGATZ: OK, alright. But it’s a brutal way to learn, that you just make this poor machine take one test question after another, and every time it gets it wrong, you punish it, so to speak. Or at least you gently correct it by making it adjust its weights so that it will do better the next time.
CHOI: Yeah. On and on and on.
STROGATZ: On and on and on. It’s a very brutal training.
CHOI: It’s good that a machine doesn’t have emotion.
STROGATZ: Right, so it doesn’t care, I guess — as far as we know. But this is the pre-training, you say.
CHOI: Yeah, yeah. And then during post-training, there are multiple things that can happen, but maybe let me highlight just the most representative one, which is reinforcement learning from human feedback. So in this particular type of post-training phase, what happens is that you present the machine’s answer to a query to a human evaluator, and the human can give a “thumbs up” or “thumbs down.” And then based on that, you go back to the neural network to adjust the weights a little bit, using the analogy that you used before. But this time, instead of focusing on which word comes next, you’re focusing on whether you get a thumbs up or thumbs down from the human evaluator.
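Here is a heavily simplified sketch of that feedback step, under strong assumptions: real RLHF trains a separate reward model on human ratings and optimizes with an algorithm like PPO, whereas this toy REINFORCE-style update shows only the core idea that a thumbs up raises the answer’s probability and a thumbs down lowers it.

```python
import torch

# Stand-in for the log-probability the model assigned to its own answer;
# in reality this comes from the network, not a hand-set leaf tensor.
log_prob_of_answer = torch.tensor(-2.3, requires_grad=True)

human_feedback = 1.0  # +1.0 for thumbs up, -1.0 for thumbs down

# Minimizing -feedback * log_prob nudges the answer's probability up after
# a thumbs up and down after a thumbs down.
loss = -human_feedback * log_prob_of_answer
loss.backward()
print(log_prob_of_answer.grad)  # direction in which the weights would be adjusted
```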
STROGATZ: OK. I see. So, all this process, I mean, we know that computers run fast. But still, how long would it take to do, say, the pre-training phase? Just give us a ballpark feeling for it. Are we talking about days? Weeks?
CHOI: Yeah. I mean, it varies a lot depending on how much data you are going to feed it, versus how much compute you have, versus how large a neural network you want to train. So there are many variables at play that determine how long it takes. And also, by the way, the tech companies don’t share exactly how long it took and how much compute they used, but one can speculate. I would say the really good ones typically take a couple of months, if you want to push the limit. But if you want to stop earlier with a smaller amount of data, then it could be just a matter of a couple of days.
By the way, if you really think about how long humans learn — 10 years of learning as a human baby, you know, becoming a child. They still have a lot more to learn. So in some sense, a couple months isn’t so bad.
STROGATZ: [laughs] That is true.
We’ll be right back.
[Break for ad insertion]
STROGATZ: Welcome back to “The Joy of Why.”
Now, so in your own past, I think I described you as a computer scientist. But it seems that your work has been very interdisciplinary, with contributions from linguistics, from psychology, cognitive science. What led you in that direction, and why do you think it’s important to look at this problem from all these different angles?
CHOI: Yeah, actually, earlier on in my life I thought I was just a geeky person who only does one thing, computer science. But in recent years I find myself reading books in cognitive science and neuroscience and philosophy. And I’m still just a student in these fields. It’s not like I have formal education and training in them.
But the reason why I find that important for my own research is because there’s a common ground in the quest of understanding intelligence. Whether it’s a form of artificial intelligence or human intelligence, there are insights that I can draw from these other fields. And especially now that AI has become a lot more human-like, or at least demonstrates human-like capabilities, I personally believe that it is ever more important to do interdisciplinary research across these fields.
STROGATZ: So we did want to really talk to you about understanding, and this question of common sense. We’ve been talking about how great ChatGPT and other large language model–based bots are at some kinds of tasks. But what are some of their weaknesses? What are the mistakes, as I mentioned in the introduction, that are sort of silly or even hilarious?
CHOI: Yeah, so this is an example that I gave in my TED talk where I asked, “If I left five clothes to dry out in the sun, and it took them five hours to dry completely, how long would it take to dry 30 clothes?”
ChatGPT then said it would take 30 hours to dry 30 clothes. Now, this is ChatGPT trying to be too smart. In fact, when you dry your clothes in the sun, you can dry them all simultaneously, so you don’t need to do the math, you just say, uh, “Five hours.”
So this example became very popular. Soon after, the problem seemed to be fixed. But then just in case, I figured that I’m going to ask the same question, but actually phrased differently. We reordered the clauses and phrases a little bit. And then GPT-4 wasn’t able to answer this one correctly for some time. Then it got fixed in about a month or two.
So I thought the problem had really been fixed, but just in case, I decided to ask just one more variant: Suppose it takes three hours to dry a shirt and five hours to dry a pair of pants in the sun. For this, ChatGPT goes back to its original mode of multiplying numbers and giving you the wrong answer again.
Now, this is really curious, because people usually do not need post-tuning or post-training per se for this kind of question. Once you acquire the basic common-sense knowledge about what it means to dry clothes in the sun, you really don’t need to go over different cases and teach yourself whether you should multiply the drying hours proportionately or use the same number because you can dry them concurrently. Once you have that common-sense knowledge, you’re good. The curious thing about ChatGPT is that, for some reason, this is very confusing.
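The common-sense rule Choi describes fits in a couple of lines of code: items drying in the sun dry in parallel, so the total time is set by the slowest item, not by a sum or a multiple of the count. A minimal illustration:

```python
def drying_time(item_hours):
    # Clothes in the sun dry concurrently, so the answer is the slowest item,
    # not sum(item_hours) and not a multiple of how many items there are.
    return max(item_hours)

print(drying_time([5] * 30))  # 30 items at 5 hours each -> 5, not 30
print(drying_time([3, 5]))    # a 3-hour shirt and 5-hour pants -> 5
```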
STROGATZ: I mean, of course, there are people who will get confused about these too, so if it is accumulating the wisdom of the internet, in a way I’m not so surprised it has trouble with this kind of question. But still, it is really bizarre because it can do all kinds of much harder calculations.
But as you say, this is an example where its common sense is letting us down. And in one of your talks, or maybe it’s one of your papers, I saw it somewhere that you referred to common sense as the “dark matter of intelligence.” I thought it was a really provocative statement. Could you tell us a little more about what you mean by the dark matter of intelligence?
CHOI: Yeah. So, the reason why I said that is because common sense really is the unspoken rules about how the world works — how the physical world works and how the social world works. These rules really influence the way that we use language and interpret language. And that’s really one of the key aspects of human intelligence.
And the mysterious thing about common sense is that humans acquire it seemingly easily. I mean, everyone has it, but it’s strikingly hard to write these rules down in order to teach machines the rules that we somehow acquired. So for a long time in AI, common sense was viewed as one of the hardest challenges to overcome.
That said, I should really acknowledge that GPT-4, ChatGPT, has acquired a really impressive amount of common sense. I’ve never seen anything quite like it before in AI. So I’m not saying that it didn’t acquire any common sense. It did acquire a lot of common sense. But unlike human common sense, which is, relatively speaking, a lot more robust to the sort of questions that I demonstrated earlier — there are many, many more examples, by the way — machines are strikingly brittle when provided with that kind of example.
And here’s the reason why. Common-sense knowledge is generally so trivial that it doesn’t really appear on the internet very much. If it did appear, by the way, then ChatGPT has learned it. So, a lot of common sense does appear on the internet — like that apples are edible, that apples are usually red or green, probably not purple or blue. These things are now acquired as a sort of factual knowledge. But for the things that are not spoken out loud, it’s less likely that ChatGPT has acquired them.
STROGATZ: Hmm. So that’s sort of understandable, given that it doesn’t get to live in the world, right? So far, its window on the world is text, at least the way that they’re being trained. Have you and your group been trying to feed common sense into these kinds of large language models?
CHOI: Yeah. So in my lab, we’ve been trying to study how to teach common sense in a more effective way, perhaps by mimicking how children, when they grow up, ask a lot of why-this, why-that questions. These are the kind of questions that adults wouldn’t ask each other. It may be obvious to adults, but children, while growing up, are provided with a lot of such declarative descriptions of common sense.
So, we tried writing down a lot of such common-sense rules and then trained a neural network on them. And we found that the neural network can really generalize quickly from those examples. So that’s one way to teach neural networks common sense much faster: by providing this collection of declarative knowledge.
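As a rough illustration of that approach, here is a minimal sketch of fine-tuning a small off-the-shelf language model on declarative common-sense statements, using the same next-word loss as pre-training. The two example rules are invented, and the choice of GPT-2 via the Hugging Face transformers library is an assumption for the sketch, not the lab’s actual setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Hypothetical declarative common-sense rules, written out explicitly.
rules = [
    "Clothes hung in the sun dry at the same time, not one after another.",
    "A person cannot be in two places at once.",
]

for rule in rules:
    ids = tokenizer(rule, return_tensors="pt").input_ids
    loss = model(input_ids=ids, labels=ids).loss  # next-word loss on the rule itself
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```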
By the way, because ChatGPT trains on anything on the internet up to the cutoff time of its data collection, it has also read the symbolic common-sense knowledge graph that our lab generated and shared on the internet.
STROGATZ: Uh huh. Interesting. It reminds me of something that I remember hearing about when I was a professor at MIT. There were so many students there — and probably, I should say, professors, too — who lacked a certain kind of social grace, who didn’t know the rules for how people are supposed to interact with each other, that there were courses, like etiquette lessons or manners lessons, giving very explicit declarative instruction for people.
CHOI: Yeah, I mean, I haven’t seen such a class in person, but I can totally imagine that there could be one and some people can benefit.
STROGATZ: Yeah, but so then if you had the textbook for that course and if that was part of the training, it might be helpful to our friends the AIs, too.
CHOI: Oh, I’m sure that it has already read all of them.
STROGATZ: OK. Well, we talked a little bit earlier about the role of emotion and whether it would be helpful in acquiring common sense for AIs to have some emotional capabilities. But leaving that aside, I wanted to explore with you some other things that they notably don’t have. And it feels to me, and I think other people have made this point, that there are a lot of very severe obstacles for them to acquire common sense. Because, for example, they don’t have bodies. Like, a little kid gets to fall down or play with toys, and they have hands, you know.
They don’t have a place in society. They don’t get to interact with other AIs, necessarily, or with people. Like they’re just missing out on the richness of existence. I guess my question is, are those things fundamental obstacles? Like, do we have to wait till we get robots that can do those things, move around in space, have emotions, have bodies, have social interactions? Maybe common sense has to wait for all of those, or is that too pessimistic?
CHOI: That’s a great question. It’s comforting to believe that due to lack of emotion and embodiment, maybe AI cannot go too far. Wouldn’t that be nice if that’s true? But I’m not sure whether that’s the case because with a language-only interface, it can still do a lot, really a lot. So I mean, that’s one thing.
But that aside, whether AI’s lacking true emotion and true embodiment is a good thing or a bad thing: On one hand, it’s definitely a limitation compared to human intelligence. But on the other hand, whether embodiment is the only way to acquire the kind of intelligence that humans have is not clear. This is the kind of scientific question that is not well understood yet.
In any case, I don’t necessarily believe that therefore we should build a robot that has true emotion. I mean, AI should have emotional intelligence and awareness so that it’s going to interact with humans in a pleasant, non-harmful way.
STROGATZ: Mm hmm.
CHOI: But when it comes to AI having its own desire and emotion, that may be an interesting intellectual question. I’m not sure, though, in terms of the benefit to humanity, whether that’s even the right kind of question to ask in the first place.
I mean, let’s just say that AI falls in love with another human being. Like, it really feels the love. Is that a good thing for humanity? Especially if it’s going to start doing things that could be harmful for other human beings because it’s willing to sacrifice everyone else to serve this one human in the world.
STROGATZ: Mm. Oh, boy.
CHOI: And then embodiment, I’m skeptical that we can go that far. Because the thing about the bio embodiment is that the human fingers, for example, are unbelievably dexterous. We don’t yet know how to make delicate joints that can move around in all different angles. And then, you know, human tastebuds. Is it even necessary to build a robot that can smell and taste in the way that humans do? It’s maybe a philosophical question too, but I personally don’t think it’s all that important to build robots that really, truly mimic every capability of a human being. But we don’t even have the technology.
STROGATZ: Yeah. No, it’s also interesting too, since we’re kind of speculating now and letting our imaginations go — we could endow them with other senses that we don’t currently have. Like, for instance, the sonar that bats use or electric fish swimming in muddy water, you know, that can sense electric fields. Like you could imagine them having super senses as well as super intelligence. But as you say, it’s not at all clear that this is a good idea to be doing any of this, even if we could do it.
So maybe we should close with the final part of our discussion thinking about questions like this about policy, about transparency. It’s such an expensive pursuit, as we already mentioned, to build these, that only very few people or organizations are making the decisions right now, and they have proprietary data and techniques. Do you see this as a big issue in the field?
CHOI: It’s a huge issue in the field. What could possibly go wrong with such concentration of power?
STROGATZ: Yeah.
CHOI: I think especially the opaqueness of the data feeds into unnecessary hype and fear as well. Going back to your earlier example of how ChatGPT might answer in a very lawyer-like way — that it may or may not understand in the way that humans do, and it’s just a machine and it has limitations. When it says that, does it say that because that’s exactly the data that was used for the post-training adaptation of ChatGPT, so that it interacts with humans in a more politically correct way? Or is it that it has genuinely acquired such self-awareness and introspection capabilities that it realizes, “Oh, I’m a mere AI trained on human data”? I think if the post-training data were transparent, a lot of that unnecessary hype would be addressed.
And also, I think for the purpose of AI safety as well, I personally believe that more transparency is helpful so that we better understand where the limitations are, where the flaws are.
STROGATZ: Is this something that governments need to impose on the big companies? Is that what you’re suggesting?
CHOI: Probably there should be more government involvement in thinking about AI policy. It’s a very important topic that one needs to address very carefully, though, because I can totally also imagine a policy that just slows things down unnecessarily without actually adding much, depending on how it’s implemented.
So it’s an effort that requires a really broad range of community involvement. And also, there needs to be an effort to increase AI literacy among people outside AI, including policymakers but also everyday users, so that they understand what the limitations of these models might be, so as not to over-trust them.
STROGATZ: Well, I’m reminded of a time long ago when genetic engineering was new, and a lot of biochemists and molecular biologists got together on their own to, sort of, police themselves about what kinds of experiments they would conduct or not allow themselves to conduct.
I wonder, is that something — rather than having the governments do it, do you think maybe the community itself should be coming together, including the big companies? Do you think that’s the way to go?
CHOI: In general, I think, there should be collective efforts where people from all sectors have a way of contributing to what AI should and should not be, in some high-level declarative sense. We probably all agree that AI shouldn’t be used to develop bioweapons, or that AI should not propagate racism and sexism. But then there can be more gray zones, and we then need to think about what to do with those gray zones.
STROGATZ: What do you see as the biggest dangers in this space right now? What do you realistically think we should be worried about?
CHOI: I think there’s a lot to worry [about], especially in the near term, like misinformation, the increasing use of AI for generating fake media to support a particular political party. That’s one thing. But also seemingly benign use cases, such as, you know, people faking their social media feeds, might have more long-term consequences for the way that people generate and consume social media content.
You know, by the way, I used to think the internet was the byproduct of human intelligence, but that may not be the case in the coming years, because so many people use ChatGPT for all sorts of writing jobs, I hear. There are even some papers whose authors used ChatGPT and were not diligent enough to remove the parts where ChatGPT says, “Oh, I’m an AI model,” blah, blah — at least they should have done that.
STROGATZ: I shouldn’t laugh. Yeah, no, it’s serious. But then again, I mean, let’s be real. I have a colleague who’s a very good and honest person. English is not his first language, and he has told me that he uses ChatGPT to improve the grammar of the abstracts for his papers. You know, that seems like a fairly benign use. He’s written the abstract. It’s just sort of like a writing coach helping him. He’s not really providing new ideas. So these things can be good tools if used properly.
CHOI: Certainly, yeah, it could help people learn a language faster. It can help as a writing companion, if used correctly. But it’s going to have unwanted side effects on humans as well. It may be OK in the end, but I do wonder personally whether, in the longer term, it’s going to make measurable changes in human capabilities for writing and reading comprehension.
STROGATZ: Well, so just to wrap up: The one thing that we often like to ask our guests, since our show is called “The Joy of Why,” is to talk about the emotional side of being a scientist yourself. Is there something in your research that brings you special joy?
CHOI: Oh, yeah. Great question. A lot of this research is joy. These questions about, like, are there limitations in ChatGPT, and if so, why? Why does it work so well, based on just reading internet text? Seeking answers to these “why” questions. I don’t really know why, but it does give me a lot of pleasure. And perhaps that’s one of the differentiating factors between human intelligence and ChatGPT: that we ask why.
STROGATZ: Yes, we do ask why. And thank you so much for helping us understand why. We’ve been speaking with Yejin Choi. It’s been delightful to have you here with us today. Thanks so much.
[Theme plays]
CHOI: Thank you. It was so fun.
[Theme continues]
STROGATZ: Thanks for listening. If you’re enjoying “The Joy of Why” and you’re not already subscribed, hit the subscribe or follow button where you’re listening. You can also leave a review for the show — it helps people find this podcast.
[Theme continues]
“The Joy of Why” is a podcast from Quanta Magazine, an editorially independent publication supported by the Simons Foundation. Funding decisions by the Simons Foundation have no influence on the selection of topics, guests or other editorial decisions in this podcast or in Quanta Magazine.
“The Joy of Why” is produced by PRX Productions; the production team is Caitlin Faulds, Livia Brock, Genevieve Sponsler and Merritt Jacob. The executive producer of PRX Productions is Jocelyn Gonzales. Morgan Church and Edwin Ochoa provided additional assistance. From Quanta Magazine, John Rennie and Thomas Lin provided editorial guidance, with support from Matt Carlstrom, Samuel Velasco, Arleen Santana and Meghan Willcoxon. Samir Patel is Quanta’s editor in chief.
Our theme music is from APM Music. Julian Lin came up with the podcast name. The episode art is by Peter Greenwood and our logo is by Jaki King and Kristina Armitage. Special thanks to the Columbia Journalism School and Bert Odom-Reed at the Cornell Broadcast Studios.
I’m your host, Steve Strogatz. If you have any questions or comments for us, please email us at [email protected]. Thanks for listening.
[Theme fades out]