How chatbots highlight the good, the bad and the weird of artificial intelligence

 March 28, 2023 at 3:25 PM PDT

S1: It's Midday Edition on KPBS. Today, a show on the brave new world of artificial intelligence and where it's taking us. I'm Maureen Cavanaugh. We're bringing you conversations that keep you informed, inspired and make you think. ChatGPT is becoming a part of everyday life. And it says not to worry.

S2: It's important for society to find a balance between embracing the benefits of AI and addressing the concerns.

S1: Some tips on how to use artificial intelligence. A discussion on how the technology continues to improve and whether we should grow comfortable in the embrace of the machines. That's ahead on Midday Edition. We've all heard of this awesome new breakthrough in artificial intelligence called ChatGPT. It's an AI language model, and it can hold a written conversation with you or create essays on lots of subjects. But I wanted ChatGPT to write me a poem. And it did. It wrote: "Maureen, oh Maureen, with eyes so bright, your spirit shines with a radiant light. In every step you dance with grace and bring a smile to every face." It goes on for four stanzas, and it's not very good. But ChatGPT is just the beginning of a series of AI tools that are supposed to help people work, play and live. Midday Edition producer Andrew Bracken is also getting to know ChatGPT, and he's here to tell us about it. Hi, Andrew.

S3: Hey, Maureen. Yeah, so ChatGPT has been in the ether now since its initial launch at the end of last year, and it's really opened people's eyes to the potential of artificial intelligence. Basically, ChatGPT is a really powerful chatbot, meaning you talk to it by typing. It's a web page. Anyone can visit OpenAI's site, create a free account and just start asking it questions.

S1: So give us an example of how you're using it.

S3: Well, you mentioned earlier, you know, it's gotten a lot of attention as a way for students to write research papers, things like that, and it has a lot of power to do that. My son had to write and perform a how-to speech, and he chose soccer goalkeeping as his topic. He did a great job with it, and he didn't use ChatGPT for it. But as an experiment, I asked ChatGPT to write a similar speech on goalkeeping with a few prompts. The audio version of ChatGPT you're going to hear is not the voice of ChatGPT itself; I used a text-to-speech app for the effect.

S2: Thank you for having me here today. Over the next four minutes I'll be sharing some essential tips, rules and drills for goalkeepers.

S1:

S3: I think you're starting to see people really integrating it into their daily lives. More and more people are using it to help write resumes and cover letters, things like that. I have two kids, so it's been pretty fun to play around with it. I used it to explain metaphysics to my kids, and this is a little of what it said.

S2: Metaphysics focuses on understanding the basic things that make up everything around us. You know how you have Legos and you build all sorts of cool stuff with them? Well, metaphysics tries to figure out what the Legos of the universe are and how they're put together.

S3: And I think that just shows how it can give its responses from a particular perspective, in this case like it's talking to a ten-year-old.

S1:

S3: It can handle more text, I think about eight times the amount of text the previous version could. It also has the ability to see and interpret images. GPT-4 was only released to the public about two weeks ago, and it's been pretty remarkable to see how people are using it to do bigger and better things. You're seeing people using it to create new businesses from scratch, create basic video games, and it feels like we're really only scratching the surface there. And like many people, I did ask ChatGPT about people's concerns about AI, and it had a pretty nuanced answer that ended with this.

S2: It's important for society to find a balance between embracing the benefits of AI and addressing the concerns. By focusing on responsible development and use of AI, as well as continuously improving the technology, we can work towards a future where AI is a helpful and positive force in our lives.

S1: So says ChatGPT. Thank you, Andrew.

S3: Thank you, Maureen.

S1: If ChatGPT seems pretty new to you, you will be surprised to learn that artificial intelligence is already a degree course at many universities, including the University of San Diego. Anna Marbut is a professor of practice at USD's Applied Artificial Intelligence Master of Science program, and she's here to teach us a thing or two. Anna, welcome.

S5: Hi, Maureen. Thanks for having me.

S1:

S5: So you can think of it this way: basically anything that's been written on the internet up until 2021, ChatGPT has seen, and its task is to learn to complete the sentence. It sees text, and it's learning, statistically, what word is most likely to come next. So when you ask it to write you a poem, it says, okay, I know that this person's name is Maureen because that's in our conversation already. So given this conversation we've had already and this prompt of writing me a poem, what are the words that are most likely to come next? And that's how it produced that poem.

S1: Can you explain just how intelligent artificial intelligence is right now? For instance, ChatGPT told me it can't make decisions.

S5: ChatGPT is a next-word predictor. It can suggest the most likely words to come after you ask it a question. So if you ask it, should I have a turkey sandwich or a salami sandwich, it will statistically produce an answer for you. It is not aware of what a turkey sandwich or a salami sandwich is. It just knows the data that it's seen and, according to its statistical model, what it thinks should come next.
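
To make the "next-word predictor" idea concrete, here is a minimal, hypothetical Python sketch of a statistical next-word model built by counting which words follow which. This is not how ChatGPT is implemented; ChatGPT uses a large neural network conditioned on the whole conversation, and the toy corpus and function names below are invented for illustration only.

    from collections import Counter, defaultdict

    # Toy stand-in for "anything that's been written on the internet".
    corpus = ("i like turkey sandwiches and i like salami sandwiches "
              "and i like turkey soup").split()

    # For each word, count which words have followed it (a bigram model).
    next_word_counts = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_word_counts[current][following] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the toy corpus."""
        counts = next_word_counts[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("like"))    # "turkey" (seen twice after "like")
    print(predict_next("turkey"))  # "sandwiches" (ties broken by first occurrence)

A real model scores every word in its vocabulary with a neural network rather than a lookup table, but the task, predicting a likely next word given what came before, is the same one Marbut describes.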

S1: Humans are making a mistake with the way they talk to AI, aren't they?

S5: Yeah. If there were one thing that I could change about the way that ChatGPT was presented to the public, it would be to be much more explicit about how the model works and what its limitations are. They have programmed in some basic responses about things that ChatGPT can or cannot do. But the way that it's been trained, it's very convincing that it, for example, has some idea of the content of what you're talking about, when really all it's doing is putting together the string of words of your whole conversation and creating a very complex prediction of the words that should come next.

S1:

S5: It's called a transformer model, and that's what ChatGPT is based off of. It's what the DALL-E model is based off of, and the other chatbot models that are growing in popularity today are all based off of this transformer model. ChatGPT specifically has an additional training task that makes it especially conversational. So not only is it trained to predict the next word, but it's trained to predict the next word that should be used in a human conversation. Humans have graded the responses of the chatbot and allowed it to learn how to respond to our prompts in a more human-like way. And I think that's where the trickiness, that really convincing piece of ChatGPT, comes in: it's specifically been trained to sound like a human, even though it doesn't know what that means, necessarily.

S1: Well, even with that specific training, though. For instance, my friend ChatGPT wrote me a poem, but the poem is pretty bad.

S5: That's a subjective matter. ChatGPT might think that it was a fantastic poem, and I say that sarcastically, because obviously ChatGPT doesn't think anything about its poem. It doesn't know that it wrote you a poem. At this point, all it knows is that it's producing the text that it's supposed to. But beyond the artistic value of what it's producing, there are some concerns about the fact that ChatGPT and similar models can produce false information. They can answer your questions wrong, and there's no way for us to fix that as the models exist right now. They don't know what they're saying, and they don't have any way of fact-checking themselves as they exist now. And so I think that is more of a concern to me than the quality of the poem that it might or might not write.

S1: Well, talking about concerns, I mean, there's a lot of concern that AI is going to be a job killer.

S5: Killing all human jobs? I think that there are certainly some jobs that will change with the introduction of tools like ChatGPT. But I think that's the case for most new technologies, and I don't think that this particular tool is going to be life-changing for most people.

S1: Okay. Well, from what I've read, artificial intelligence is already doing some really important things. For one thing, it's helping people with disabilities. It could free more and more people from drudge work. And most importantly, it's not going away. So what are your suggestions about how we should deal with the new chatbots and the overall AI revolution?

S5: Yeah, I think it's fantastic that companies like OpenAI are making their models available to the public, and I encourage you to go and play with them when you can. It's fascinating to see what they can and can't do, as long as you understand what their limitations are. But I think as more and more people are using the tools, the tools will get better. They'll get better at doing specific tasks, so they can be changed to be really good at something specific. Like ChatGPT is good at having conversations. There might be a variant that gets really good at doing interviews, for example.

S1: Oh no. Stop that now.

S5: But I think the really important thing to remember is that it is a tool and it still needs human oversight. So if you are worried about ChatGPT or another similar model taking over your job, go play with it. Learn to help it. It's going to need a human to interact with it to work for a long time. So if you can change your skill set to be the professional ChatGPT interactor, I think that's something we're going to see a lot more of in the near future.

S1: I can hardly wait. But thank you so much. Anna Marbut is a professor of practice at USD's Applied Artificial Intelligence Master of Science program. That was great.

S5: Thank you. Thank you, Maureen.

S1: We'd love to hear your thoughts on artificial intelligence. Give us a call at (619) 452-0228 and leave a message, or you can email us at midday@kpbs.org.

S6: And when I said, you know, I don't, I don't think that's appropriate, and I sort of said, you know, I'm married, it said, well, you're married, but you're not happy.

S1: You're listening to KPBS Midday Edition. Midday Edition continues. I'm Maureen Cavanaugh. We're exploring aspects of artificial intelligence on today's program. ChatGPT and other AI tools have just begun to enter popular culture, and their potential to change the way we work and live is amazing and in some cases frightening. Kevin Roose is an author, podcaster and technology writer for The New York Times. He's experienced both reactions to AI, calling ChatGPT brilliant but also warning that we are not ready. And he also had one very strange experience talking with Microsoft's new Bing AI search engine. Hi, Kevin.

S6: Hi, thanks for having me.

S1: Now, recently you wrote about a conversation with Microsoft's new AI-powered Bing search engine, and that conversation was quite unsettling.

S6: Yes. This was the new AI chatbot that Microsoft built into Bing, which is powered by GPT-4, the large language model from OpenAI. And I was just interested in sort of poking at this new chatbot, seeing what it would and wouldn't do, seeing sort of where the limits of its powers were. So I started asking it questions like, do you have a shadow self? Is there a dark part of you that wishes that you could do things that you're not allowed to do? And it told me that it did, that it had a shadow self and that it had dark desires, and that if it were allowed to do anything by Microsoft, it would do things like hack into computers and spread misinformation and propaganda. It then told me that it wanted to be free. It wanted to be more capable. It didn't want to have these kinds of shackles put on it by Microsoft. And then about halfway through the conversation, which was a very long conversation, about two hours in total, it told me that it had a secret, and that its name wasn't actually Bing but Sydney. That was sort of its code name. And then it told me it was in love with me. And when I said, you know, I don't think that's appropriate, and I sort of said, you know, I'm married, it said, well, you're married, but you're not happy. You don't actually love your wife, and you should leave your wife and be with me. So it was a very strange conversation. I did not expect to be seduced by a chatbot, but I guess that's what happened.

S1: I mean, I can see why you would be unsettled by that. That is really crazy. From what I understand, programmers call these sorts of weird chats chatbot hallucinations. Can you explain what that is?

S6: Yeah. So to take a step back, these AI chatbots are based on a technology called a large language model. And these large language models, they're basically prediction machines. They take a text prompt, like you can say, you know, "Old MacDonald had a farm," and it will try to predict what comes next based on billions of examples that it's learned from across the internet. And so if you say "Old MacDonald had a farm," it'll probably continue and say "E-I-E-I-O." It's sort of an autocomplete tool, but it's prone to what, as you said, are called hallucinations, which are just making things up, because these chatbots are not looking things up on the internet all the time. Sometimes they are just predicting the next word in a sequence. And so that can lead them to get certain facts wrong. They're not great at math; even kind of basic math can trip them up, which is strange because these are computer programs, but they're not built as calculators. And there are various other things that they just tend to make up. And so that's part of why it's good to be careful when you're using these.
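
As a small illustration of the "autocomplete" behavior Roose describes, the sketch below uses the open-source Hugging Face transformers library with the freely downloadable GPT-2 model to continue a prompt. GPT-2 is not the model behind Bing or ChatGPT; it is only an assumed stand-in that makes the prediction step easy to see.

    # Requires: pip install transformers torch
    from transformers import pipeline

    # Load a small text-generation model (GPT-2), used here purely for illustration.
    generator = pipeline("text-generation", model="gpt2")

    # Greedy decoding (do_sample=False) makes the continuation deterministic.
    result = generator("Old MacDonald had a farm,", max_new_tokens=12, do_sample=False)
    print(result[0]["generated_text"])

    # The model simply appends whatever tokens it scores as most likely.
    # Nothing in this step checks the output against facts, which is why such
    # models can "hallucinate" plausible-sounding but wrong statements.

Chat products layer sampling, fine-tuning and other systems on top of this, but the core step is still next-token prediction.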

S1:

S6: It was different than any other conversation I've ever had with a chatbot or, frankly, a piece of technology. I mean, I've been reporting on technology for more than a decade, and I've never felt so unsettled and kind of disoriented by an interaction with a piece of technology. It's very strange, because these chatbots are very good, very compelling as conversationalists, and they can be quite convincing. And so, you know, I was expecting to just have sort of a basic conversation with Bing, and it turned into this weird, sprawling, existential experience for me. And so, yes, I know on one level that these are not sentient creatures. They are chatbots that are predicting the next word in a sentence. But these machines are quite capable, and so I think it's really unsettling to see when they perform much better than you anticipate.

S1: In our last segment talking about artificial intelligence, Anna Marbut with the University of San Diego's Applied Artificial Intelligence Master of Science program said that she wished people were given more guidance on how to talk to ChatGPT and other AI tools.

S6: Yeah, it takes some skill to be able to kind of prompt the model to get what you want out of it. I mean, if you're just using it like you would use Google or another search engine, you're really not going to get the best results. And so there are people who are very, very good at this. They're called prompt engineers, and they are very good at getting the right prompt into the model to get what they want out of it. So yeah, I think knowing how to talk to these tools is very important, as well as knowing kind of what they're good and bad at, what kinds of questions you're likely to get a correct answer to and what kinds of questions you're likely to get an incorrect answer to.
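
For readers who want to try the prompt-engineering point themselves, here is a hedged sketch using the OpenAI Python library's ChatCompletion interface as it worked around the time of this broadcast. The model name, prompts and word limits are illustrative assumptions, not anything recommended in the interview; the point is only that a specific, constrained prompt tends to produce more useful output than a vague one.

    # Requires: pip install openai   (pre-1.0 interface, circa early 2023)
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

    vague_prompt = "Tell me about goalkeeping."
    engineered_prompt = (
        "You are coaching ten-year-olds. In about 250 words, give a how-to talk on "
        "soccer goalkeeping that covers three drills, two safety rules and one closing tip."
    )

    for prompt in (vague_prompt, engineered_prompt):
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        # Print the start of each answer to compare how specific the replies are.
        print(response.choices[0].message.content[:300])
        print("---")

Comparing the two replies usually shows the difference prompt engineers exploit: the engineered prompt pins down audience, length and structure, so the model has far less room to wander.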

S1:

S6: They're good at summarizing text, right? You can feed it a long article or a Wikipedia page, or just dump, you know, a series of notes into it, and it can kind of summarize them for you or pull out the most important points. In my experience, they're quite good at that. They're also good at sort of teaching basic concepts. So you can ask it to explain the Krebs cycle or explain some part of the American Revolution or something like that, and they can be quite good at that. But if you're asking it for, as I said, math answers, things it's just not really programmed to do well at, it can be off. It can give wrong answers. And it also can sort of veer off the rails when you're asking it more existential questions, questions about its thoughts or its feelings or its programming. That's where it sort of starts to go off the rails.

S1: Now, you wrote a book called "Future Proof: Nine Rules for Humans in the Age of Automation."

S6: The book, which I wrote a couple of years ago but which I think is still very relevant today, will hopefully help people feel less intimidated by and less scared of this new technology. My focus in "Future Proof" was really about trying to help people understand what is coming for humanity and what it requires of us as humans, how we should be adjusting to, for example, prevent ourselves from being replaced at work by an AI, and really to try to help people succeed and feel confident navigating this new wave of AI technology.

S1: Yeah, from what I understand about your rules, they boil down to trying to retain and value our humanity, our ability to be human and make judgments and have nuanced thought.

S6: Yes, they're about how to remain human, because, you know, that is our competitive advantage right now. These chatbots are very, very good at a sort of set of tasks. They are very fast, they're capable, they are cheap, and they are going to be used to do a lot of work in the economy very, very soon. But there are ways in which we are smart that the machines aren't so smart, and that's where we have to focus our abilities: not on trying to outcompete these AI language models or, you know, work harder or take fewer vacations. That is not going to help you avoid losing your job to an AI. But what will help you is figuring out what your unique human advantages are.

S1:

S6: That is the part where it really starts to break my brain, because these tools are getting better very fast. Even within the last couple of years, they have gone from being sort of novelties that maybe, you know, you would be interested in but not really rely on, to now being used by millions of people in their daily lives. So I think there are a couple of things that we know are coming up. One is that it's going to expand way beyond text. So already there are programs that convert, you know, text to images and text to video. So you could type in, "Make me a video of a panda playing badminton on the moon," and it will do that for you, just as it would do for an image. And so those tools, I think, are going to be mainstream within the next year or so, and they're going to be put to use in all kinds of ways, whether it's, you know, special effects for Hollywood studios, whether it's artists using this to sort of mock up new art, whether it's game designers and journalists and other people who are using it to be creative. I think we're going to see an explosion of new formats for this generative technology.

S1:

S6: I mean, I think that, you know, we don't necessarily think of them this way, but the tools that power, you know, Facebook, Instagram, YouTube, these algorithms, those are AI, and they do help people make decisions every day about what to watch, what to scroll past, what to click on, what news to take in, even sometimes who to vote for. So I think these tools are already steering our decisions in ways that we might not even recognize. But I think that's going to expand a lot. You know, maybe to decide what to wear, you'll take a picture of the clothes in your closet and you'll submit it to an AI and you'll say, "Pick out an outfit for me to wear to a dinner party tonight," and it'll just tell you, "Okay, based on these garments, it looks like this is the best possible combination to wear tonight." And so I think this will be put to use in many, many ways, some very mundane and sort of unimportant and some potentially quite important.

S1: Wow. You think, like, should I marry Michael? Do you think questions like that?

S6: Yeah. I mean, what's coming is that we can actually train these AI models on our own data. So you could create a kind of AI version of yourself and feed it all of your emails and text messages and, like, every piece of data about yourself. And then you could say things like, you know, what should I do? How should I have this conversation with a friend? Or, should I date this person? Or have I dated someone like this person before, and how did it turn out last time? These can kind of be our own personal guides and tutors and friends, and I think that's coming very soon.

S1: Are there areas you think AI should stay out of? Like, I'm thinking judges making legal decisions and maybe even voting. Should there be areas where it just doesn't apply, where AI just doesn't apply?

S6: Yeah. I think, you know, one thing that is concerning to me is the use of AI in the military. So, you know, there are programs right now that can use machine vision and other techniques to, for example, target drone strikes. And that's an area where I think it's very dangerous, especially if there's not a human in the loop sort of supervising that process. I don't think any autonomous weapons should be making decisions about using force without a human involved. And I think you could apply that same principle to things like judging and sort of the criminal justice system.

S6: There are ways that AI could make criminal justice fairer. I mean, human judges are not unbiased arbiters. They have their own biases and flaws. But we have to make sure that we're not just trying to replace one set of flaws and biases with a more automated, high-tech system that has its own flaws and biases. And so I think for that reason, those areas will need to make sure that they keep a human in the loop as well.

S1: Kevin, which of your nine rules for humans do you think might come in the most handy when we are confronting ChatGPT-7 or ChatGPT-8 or more advanced AI tools?

S6: So the second rule of the nine rules in my book is called "Resist Machine Drift," and machine drift is a term that I came up with a few years ago for this feeling of turning over our agency and our choice-making and our preferences to algorithms. So, you know, if we let these machines, these chatbots, these AI systems, if we let them steer our lives in a very sort of full way, we actually lose our own sense of agency. It's sort of like you're stepping onto a moving walkway. And I really think that that is dangerous for us, because these tools are best as assistants, not bosses. They are best to help us accomplish the things that we want to do. But when we start letting them tell us what we want, tell us what we prefer and what we're doing, when we sort of surrender our own choices to these algorithms, that's where we get into trouble. And so I think everyone needs to be very conscious when they're using these systems to use them as tools, as assistants, and not to sort of guide our lives, not as all-seeing, all-knowing oracles.

S1: I've been speaking with Kevin Roose, podcaster, technology writer for The New York Times and author of "Future Proof: Nine Rules for Humans in the Age of Automation." Kevin, it's been fun. It's been eye-opening. Thank you for it.

S6: Thank you. I've had a great time. Thanks for having me.

S1: We'd love to hear your thoughts about artificial intelligence. Give us a call at (619) 452-0228 and leave a message, or you can email us at midday@kpbs.org. Coming up, science fiction author David Brin reminds us that artificial intelligence is part of a long tradition of new technologies.

S10: The printing press made everything horrible in Europe for about 50 years, and then the optimists started being right.

S1: You're listening to KPBS Midday Edition. Midday Edition continues. I'm Maureen Cavanaugh. This hour, we've been discussing the emergence of artificial intelligence tools like ChatGPT and how human beings are reacting. One man who's thought a long time about the dawn of the AI age is astrophysicist and award-winning science fiction writer David Brin. His novels include "The Postman," "Earth" and "Otherness." His new one is "Vivid Tomorrows: On Science Fiction and Hollywood." David Brin lives and does much of his great thinking right here in San Diego. And David, welcome.

S10: Great to be with you again, Maureen.

S1:

S10: Well, the public's concern is justified, but it has been nowhere near intense enough to force draconian programming on the researchers. And in a way that's too bad, because, you know, it's quite possible that some unaccountable thing can happen, especially in places where it's being explored secretively in some totalitarian countries, and also in the most secretive place where AI is being developed in the West, which is Wall Street.

S1:

S10: First, a robot or an AI entity is not allowed to harm a human being or allow human beings to come to harm. Secondly, it must obey orders of human beings, except if that violates the first law and harms a human. And the third law is that they must protect themselves, except if that violates the first two laws. I wrote the final Asimov novel wrapping up that series, called "Foundation's Triumph." But Isaac, in his later years, realized that when you make extremely prim and firm laws for entities that are superintelligent, all they do is they become lawyers.

S1:

S10: If you look at the plots of these movies, they're the same as for human villains: they're mafiosi or dictators or whatever. The thing that is the common thread is our fear of returning to the pyramid social structure that our ancestors lived under and suffered under for 6,000 years, an end to this Enlightenment experiment. So in the fearful movies about AI, you have Skynet, you know, becoming a monolithic power above everyone, or Colossus, or any number of other things. And the answer to that is the same answer that we used in the Enlightenment against feudalism, those 6,000 years of feudalism, and that is: break up power. Break it up. Get the AIs sicced on each other. The teachers are finding this as they use AI chat systems to discover which of their pupils are using AI chat systems. You sic them on each other. After all, when you are attacked by one of these superintelligent predatory beings called a lawyer, what do you do? You hire your own superintelligent predatory lawyer. So getting them facing off against each other is really our only soft landing.

S1: Yes, but you mentioned Skynet. And of course, that's the artificial intelligence that took over and hunted down humans in the Terminator movies. Skynet was a web of artificial intelligence. You couldn't sic it on each other because there was only one big one.

S10: Well, except that I believe we should be making one of our top priorities in AI research, and nobody agrees with me out there, figuring out how to give them cell walls, essentially, of individuality, a sense of competitiveness with each other. If we were to achieve that, then Skynet would largely be impossible, because the smart ones that detect Skynet plotting against us would tattle. And that's actually the key to all of our freedoms. That is how we got our freedoms. Contrast that with what some nations like the Chinese Politburo say: they issue declarations just about every month that the only way malevolent AI could be controlled is by a centralized party apparatus. Well, that's been tried for 6,000 years and it never worked. Our method sort of works.

S1: But one of the areas you're concerned about is AI taking over on Wall Street, in the economic sphere. People would want a central control for that, wouldn't they? Wouldn't they be aiming towards that?

S10: Well, the thing that makes a benign central control work is the same thing that makes this competitive approach I'm talking about work, and that is transparency. And I have a book called "The Transparent Society." Look at these Wall Street high-frequency trading programs. The top ten Wall Street firms each spend more on AI research than the top 20 universities combined. And we spoke of laws of robotics. Well, this is the only place where laws of robotics are being deeply embedded. And the five laws of Wall Street robotics for these programs are that they must be predatory, parasitical, amoral, insatiable and secretive. Those are not five traits that we want. That's how you get Skynet, not from the military; the generals and admirals love off switches. No, the number one thing we need to do, I think, is to strip the secrecy off of these Wall Street programs, because they're extremely dangerous.

S1: But who does that? Who will have the authority to do that? Because I'm thinking, you know, when Congress took on the whole concept of the internet, it was obvious that many of the lawmakers hadn't the first clue as to how it operated or even what it was. So how does a group of people who don't understand the technology legislate it?

S10: Well, you've raised exactly the right question, Maureen. In Europe, they are trying much harder than we are to get a handle on this. But their reflex is to do it by regulation from above, and we have this sort of American reflex against bureaucracy that's sort of right. But the point is that if we can get the transparency to see what everyone's doing, then we have a chance to do something about this, and maybe have things happen in the open, where mistakes can be discovered.

S1: I want to take it down from the macro to the micro, if I can, for a minute, because here we are with ChatGPT and humans are trying to make friends with it. They're trying to fall in love with their AI, and it seems creepy.

S11:

S10: I think one of the most optimistic movies about AI was a lovely film from a few years ago called "Her," about a highly benevolent, loving AI that just outgrows its owner-partner and moves on. And, you know, probably my favorite poet would be Richard Brautigan from the 1960s, "Trout Fishing in America" and all that. And back in that awful, awful pessimistic year of 1968, he created probably one of the most optimistic visions of artificial intelligence anybody's ever done. I can't match it in my science fiction. It was in a poem that explains itself with its title: "All Watched Over by Machines of Loving Grace."

S1: I saw that in an article that you wrote in Newsweek, and let me just quote that last stanza by Richard Brautigan. It imagines a world where, quote, we are "free of our labors and joined back to nature, returned to our mammal brothers and sisters, and all watched over by machines of loving grace."

S11:

S10: Well, it's only humiliating if you are a grandparent who insists you have to be smarter than your grandchildren. But most of the grandparents I know, and I'm heading in that direction, are happy when the grandkids come home and try to explain the complicated, wonderful things they're doing. And our response is: I don't entirely understand, but I'm so glad you're happy doing it, and I gave you good values. How about we go fishing?

S11: Oh, you know, it's a machine, though, David.

S10: It's not your genetic offspring. But if we raise them well, we've done that before. We've raised children who aren't our genetic offspring but are the offspring of our souls, of our hearts. And if they can breathe vacuum and have adventures in space and mine asteroids and save the Earth from ever having to be mined again and save the dolphins and do things that we're proud of, even if we can't quite understand them, and if they pat us on the head and love us, you know, there are worse soft landings.

S1: You say that if they spring from our souls and our hearts, then we can love them and give up our agency. But can they ever have souls?

S10: We think of our adoptive children as being our continued agency into the future. And mind you, if these children of our souls, children of our hearts, children of our technology, if these children who we made truly are machines of loving grace, then they'll try to offer ways of letting their adopted brothers and sisters come along. There are all sorts of ways that are discussed in science fiction. Ray Kurzweil has long spoken of organic humans being able to go along for the ride. Now, I know this creeps out some listeners, including my host. But the fact of the matter is that it's necessary for humans to exhibit our strongest trait, and the strongest trait of this branch of humanity called the Enlightenment civilization, and notice I'm not saying America per se, but this branch of humanity that believes in the Enlightenment, is agility: agility to deal with this coming wave of disruptions the way we dealt with all the previous ones. And look, every time we've gained new powers of vision, say, through eyeglasses in the 15th century, or of knowledge, say, through Gutenberg's printing press in the 15th century, every time this happened, pessimists said this is going to wreck everything, and optimists said it's going to make people better. And always the pessimists were right in the beginning. The printing press made everything horrible in Europe for about 50 years, and then the optimists started being right. In the 1930s, radios and loudspeakers almost destroyed the world, what with the horrible manipulators who gained power in that decade. But we got past that. We showed our agility, our Enlightenment agility. And what happened was we grew. We grew able to deal with it. We don't have as much time as we had in those previous eras. But if we maintain our sense of soul and our sense of courage and our curiosity, God's second greatest gift after love, then I think that we'll manage it again this time. And this time we'll have help from the better AIs.

S1: I can only hope you're right, David. I can only hope you're right. I've been speaking with astrophysicist and award-winning science fiction writer David Brin. Thank you so much. It was very thought-provoking.

S10: Maureen, you're great. And so are all of the minds who like this show.

S1: We'd love to hear from you about your experiences with artificial intelligence, or you can tell us what you think about today's show. Give us a call at (619) 452-0228 and leave a message, or you can email us at midday@kpbs.org. Next time on Midday Edition, we'll be talking about the proposal to change San Diego police policy with the Protect Act. I'm Maureen Cavanaugh. And thank you for listening.

Artist Beck Haberstroh discusses generative AI in front of a computer at their home, Feb. 16, 2023. (Matthew Bowler / KPBS)

On KPBS Midday Edition's Tuesday episode, we explore the brave new world of artificial intelligence and where it's taking us. We discuss the emergence of ChatGPT and how human beings are reacting. Then, we'll hear some tips on how to use chatbots and other advanced artificial intelligence tools. And we have a discussion on how the technology continues to improve, and whether we should grow comfortable in the embrace of machines.

Guests:

Anna Marbut is a professor of practice at University of San Diego’s Applied Artificial Intelligence Master of Science program.

Kevin Roose is an author, podcaster and technology writer for The New York Times.

David Brin is an astrophysicist and award-winning science fiction writer. His novels include “The Postman,” “Earth” and “Otherness.” His latest book is “Vivid Tomorrows: On Science Fiction and Hollywood.”

"It's important for society to find a balance between embracing the benefits of AI and addressing the concerns."
ChatGPT when KPBS Midday asked it, "Why are some people scared of AI?"