Play Live Radio
Next Up:
Available On Air Stations
Watch Live

One year in, California leads advances in AI

 November 7, 2023 at 11:10 AM PST

S1: It's time for Midday Edition on KPBS. Today we're talking about the rapid advancements in artificial intelligence and the ethical concerns. I'm Jade Hindman. Here's to conversations that keep you informed , inspired , and make you think. We'll explore how AI is impacting our lives and what's possible in the future.

S2: And in that sense , generative AI has truly captivated people's attention in terms of what is the art of the possible now.

S1: Plus , we'll talk about the ways it could change how we work and whether its use is for the good or the bad.

S3: There are a number of different industries that are radically changing in the face of various types of AI , particularly generative AI.

S1: That's ahead on Midday Edition. It's been nearly one year since ChatGPT was released and opened our eyes to a new world of technology. The world of generative AI made big leaps this year , much of it coming from companies in California , some in San Diego. ChatGPT and tools like it allow for human-like conversations , opening up new possibilities for life and work. Meanwhile , tech companies continue to expand the limits of AI , but along with the rise of this new technology also come concerns about AI's potential power. Here to talk more about it is Durga Malladi , senior vice president at Qualcomm. Durga , welcome.

S2: Really glad to be here.

S1: Glad you're here. Also Darby Vickers joins us. She is a professor of philosophy at the University of San Diego. Darby , welcome to you. Hi.

S3: Thank you so much for having me.

S1: Thanks for being here.

S2: In that sense , what started off as chatbots , or what really caught everyone's attention with ChatGPT , where people would ask questions and get responses which are quite amazing , to be honest , just looking at the depth and the quality of the feedback that comes back. The possibility of doing all of those sorts of AI-based use cases on devices that you and I use on a daily basis is quite something. And so in that sense , we believe that it's a truly transformative moment in the industry from an overall generative AI standpoint. There's lots more to talk about.


S2: In fact , it dates back all the way to the 1950s. And for those of us who've been following the history of AI in terms of what it has done over the decades , it's gone through different moments where it's made some pretty spectacular promises , then come down to some of the more realistic expectations in the industry. But I believe that in the last ten years , or maybe 15 years or so , it's truly started to make a difference in people's lives. Every time you use the camera in a smartphone , for instance , edit your pictures , take much better quality pictures , or make any other kind of modification , it turns out we actually use AI for that. Whenever we've used any kind of voice assistant in the last five years or so , it turns out that we've been using AI there as well. But what really changed people's perception about AI is a more recent phenomenon , which is generative AI. Now , generative AI , just by the name itself , says that , using AI , you're able to generate new things. These could be new images , new text , new voice samples that haven't really been seen before. This is not about data that already exists. It's truly creating something new. This is brand new data that comes across. And in that sense , when you ask a question , for instance , you get a response which is specifically tuned toward the question that you ask , and that uses generative AI. So that's just language processing. And that's what caught people's attention two to three years back. But today when we talk of generative AI , this is not just about getting answers to a question that you ask. It can generate anything. You can ask for a specific picture or a portrait in a certain style , and that's what gets generated. You might take a picture of yourself and say , can I have the same picture as a Monet-style painting ? That gets generated. It's brand new. It's never existed before.
When you ask a question like , send me the meeting notes that I took two weeks back with a specific person , and create and draft that into an email , that's what you get as an answer. These are some pretty powerful use cases , which span both consumer use cases and productivity use cases. And in that sense , generative AI has truly captivated people's attention in terms of what is the art of the possible now. And from that standpoint , you know , the natural next step is to start thinking about where else generative AI can make a difference. And it turns out that today the possibilities are endless. From medicine to education to the internet of things to your smartphone , making it truly an AI phone , and of course , into automobiles with autonomous driving. Wow.

S1: Wow. So Darby , it sounds like there are a lot of uses for AI. But as a philosophy professor , your focus on AI is not on the development of the technology , but more on some of the potential ethical implications that come with it.

S3: And as a lot of people have argued , the technology itself can be used in a variety of ways , for good or bad. And not only that , but the development of the technology can be done well or poorly in terms of the types of outcomes that it generates. So , for example , a lot of the uses of artificial intelligence right now involve models of risk assessment that can be used for anything from thinking about whether or not you should get a mortgage , to thinking about whether or not you should be allowed out on bail and what kind of bail should be set , so what your risk of recidivism is. And all of these have an incredible impact on your life , because they're making real decisions that , of course , a person is implementing. But the AI is making a decision about something that's going to greatly affect you. And so the challenge here is , are those models accurately assessing the risk ? And if they're not , what kind of recourse do you have to deal with technology like that ? In addition , there are all sorts of ethical implications about the way that this technology is being developed and the algorithms are being trained. Yeah.

S1: I mean , you know , aside from the biases that AI can carry , hopefully AI can evolve beyond that. But even more so , I'm wondering what your thoughts are on how AI can really help humanity. Because , you know , it's your thought that capitalism could stand in the way of that. Absolutely.

S3: Absolutely. So there are some really incredible use cases of how AI technology can be incredibly helpful. In environmental conservation , you now can get AI to recognize individual animals for biodiversity statistics off of a camera that's set up , and you don't have to have a grad student putting in , you know , hundreds of hours trying to catalog the individual animals so you can do that kind of work. In addition , AI can be incredibly helpful for , say , reading signs to people who have visual impairments. And there are all sorts of incredible cases where it can be used to help people out. But the economic incentives just aren't there in our current economic system for companies to try to be developing those sorts of great use cases of AI. Rather , the economic incentives tend to be in , you know , things like these large language models. There have been huge numbers of deals that have been made. There was an $11 billion deal made between OpenAI and Microsoft to bring some of the generative AI into the Microsoft Office platform , and the problem is that there's a lot of testing that still needs to be done that people aren't , you know , considering. Microsoft , there was a news article that said , got rid of or sort of reshuffled a lot of their ethics team right before they made that decision. And there's so much money at stake in some of these kinds of deals that it's really difficult to figure out whether those incentives are in the right place to get AI moving us in the right direction. Now , I have high hopes that we can do incredible things with this technology and really help people , but there's going to have to be both a cultural change and a change in the way that these incentives are functioning in order to create AI that's going to make the world a better place.


S2: So let me actually talk about two different aspects of it. The first one is a true democratization , where we have generative AI running directly on devices. You don't have to send all your data to the cloud. You actually run it directly on devices. We've reached a point in time where we can run very large models , not just language models but also other kinds of models , directly on device. So you get the same benefits. This goes across the range of devices that we use. These could be our smartphones. We talked about smartphones and laptops , but also IoT devices like home security systems and doorbells and water meters and gas meters and whatnot. Now , when we think about running generative AI directly on device , it means that the data stays right there. So we address things like privacy and security , especially when you start thinking of some of the other use cases , where this could be medical data that you might want to have very specific usage for. It's easier to build the right kind of guardrails while still getting the benefits from some of these use cases that we talked of. When we talk of a device as complex as a smartphone , truth be told , there is a large cross-section of the population who sometimes find it very hard to use the smartphone with its full capabilities. I was just talking about this , and there was someone who mentioned the fact that the visually impaired , for instance , are simply unable to use smartphones the way you and I use them. Using generative AI , voice becomes the most natural interface to some of these extremely complex devices. Two weeks back , we had a summit , which we call the Snapdragon Summit. We showed how you could make an airline reservation by simply speaking to your phone. You just say that you want to go from San Diego to D.C. , for instance , on a specific day. You're just talking to it.
You're not actually opening up any app , and it makes it happen for you. It pulls out all the capabilities. It's something that not just the visually impaired , as an example , but a lot of elderly people can use too. So it truly brings about a democratization of AI , with it , you know , being used by a very large number of people. The other part that has come up is , you know , what about biases and what about other kinds of things that you have to handle ? It's also important to understand that when we talk of these large models that are trained on data , we call these foundational models. Typically this is data that's publicly available out there. It's a starting point. But then these foundational models can get fine-tuned with domain-specific data. There's domain-specific data that's available out there , and based upon that data , the models can be fine-tuned further. It's one way of addressing the overall issues that sometimes do come up with , okay , how do we know that it's doing the right thing ? This is one way of actually doing that. So we believe that there are ways of addressing some of the concerns as we take steps going beyond foundational models into fine-tuning models.

S1: Coming up , we'll talk about how artificial intelligence is shifting industries and its impact on entry level jobs.

S3: I think this is making students really nervous about what the future is going to look like for them.

S1: You're listening to KPBS Midday Edition. I'm Jade Hindman. We are talking about artificial intelligence , and I'm joined by Qualcomm's Durga Malladi and Darby Vickers , professor with the University of San Diego. And you know , Darby , one of the fears that people have when it comes to AI is its potential to make certain jobs obsolete. We touched on it before. I'm curious what you hear from your students , who will soon be starting their own careers in this new world.

S3: My students are concerned. There are a number of different industries that are radically changing in the face of various types of AI , particularly generative AI. There are significantly fewer jobs doing sort of entry-level writing , for example , because it's relatively easy to get a large language model like ChatGPT to generate that kind of data for you. There's also an increasing concern among the software engineers that I teach that a lot of the jobs that they are considering as entry-level positions post-college will be gone someday as well , because ChatGPT and other large language models are , you know , still imperfect , but are doing relatively well at being able to generate computer code. And there's a concern that a lot of those places where you learn how the industry works and begin your coding career are going to disappear , and instead there are going to be fewer jobs that are mostly reading and editing code , rather than generating the code as a human being. In addition to that , there are concerns from students who want to go into the arts , because generative AI is something that can create images and animations and all sorts of other things in the future. And there's a lot of concern , obviously , with the Writers Guild strike , and the Animation Guild is working on a set of principles to try to figure out how they're going to deal with generative AI in the future. So there's a lot of concern out there. There are , of course , certain industries that are going to open up in a different way. I am under the impression that some of the difficulties with understanding how to apply intellectual property law to things created by generative AI models are going to open up a whole new field of intellectual property law in the future for dealing with these kinds of contributions.
And there are fields that are growing right now , like being a prompt engineer for some of these things , although this is probably a short-term kind of position that's not going to be as needed once we get different ways of interfacing with these generative models. But there's a real concern that has actually been around since the very beginnings of artificial intelligence in the 1950s , which Norbert Wiener voiced really fabulously in a book called The Human Use of Human Beings. The original version of this book way predated any kind of AI that we interact with. It was written in 1950 and then revised in 1954 , and although he got many things wrong , Wiener seemed to see exactly what is going on with the stratification of jobs as AI becomes an integral part of our daily lives. There are lots of people who create these systems and control those systems and their deployment who are making large , large amounts of money , which is true of a lot of the people who come out with expertise in this field. And then the AI systems are doing a lot of these sort of middle-level jobs , particularly the intellectual jobs of generating certain types of writing. And then we have people who are sort of stuck cleaning up the messes that artificial intelligence is not good at cleaning up , in terms of figuring out how to make sure that data is labeled correctly for supervised learning , or checking to make sure that each of the individual pieces is labeled correctly to train a Tesla , or trying to figure out ways to fine-tune the algorithm for large language models to ensure that they don't generate sort of horrifying content to users who are inputting something , and doing that vetting process. So all of these jobs are generally not very well paid , and they're sort of supporting these AI systems.
And we're not having AI , as was envisioned by many people who have a utopian vision of AI , do the jobs we really don't want to do , which are things like cleaning and , you know , picking strawberries and doing all the kind of backbreaking labor that people envisioned in the future would be done by robots. So it's causing the labor market to shift in this really strange way that Wiener seemed to envision pretty clearly. And I think this is making students really nervous about what the future is going to look like for them.


S2: So it started with things like cameras and computer vision , where engineers have spent a lot of time in the past doing some of the work that's necessary for making picture quality much better , and complementing that with AI has really improved the overall field of computer vision quite a bit. The same thing went off into audio processing , and we are seeing exactly the next evolution of that as we get into generative AI. A lot of the time , what we have seen in the tech industry in general is that as new technologies come in , some of them gradually start by improving what has been done before. But at the end of the day , when you start building the use cases on top of it , it's quite phenomenal in terms of the difference that we end up making from a pure technology standpoint , even though we are talking of , you know , just maybe processes or algorithms. But you can actually take another field , which is the field of medicine and biotech in general , right here in San Diego. If you just take a look at how that field is changing , what used to take an extremely long time to generate , whether it's new kinds of medicine or proteins or vaccines , if you just take a look at the amount of time it used to take to come up with new solutions , that's come down quite dramatically as the field of computational pharmacology has come through. So you're using a lot of AI techniques to complement what's already been done in other fields , and coming together , the benefits are quite something. So from a tech industry standpoint , I see AI as another piece of the puzzle that you end up using to improve daily lives. That's the way that we see it.

S1: And Durga , as someone actively working with this technology , what do you think we get wrong about AI ? And also , what do you see as solutions to some of the issues AI has , like biases , as Darby mentioned ? Yeah.

S2: So I think one of the things that people sometimes get wrong is the fact that when you take a look at the full scope of what AI brings to the table in some of these segments that I mentioned , whether it's medicine , education , productivity and so on , it's quite something. There are a lot of people who benefit significantly from it. So the positive aspects of AI sometimes aren't so visible unless you kind of go through it. But it also seems like a very new technology , even though we actually have been living with it for a long enough period of time. So it's about gradually getting adjusted to , okay , this technology is here to stay , and what are the benefits that we get out of that ? As people get accustomed to it , there's a certain sense of , okay , this does have a positive aspect. It's not all negative , because that's what usually people tend to think of it. But coming to the question of , okay , what are we doing about some of these concerns about data sets and biases and so on ? There are several things in play over here , first in academia , and from a regulatory perspective as well. In academia , there's tons of research. And as Qualcomm , we've been quite involved in research for a long period of time , for more than a decade , and we've given some thought to , okay , what are the right set of guardrails that come into play ? Now , it's kind of important to understand that you start with the basics of , well , where is the data coming from ? But there are really three kinds of players when we talk of those who are in the business of bringing use cases into the market. There are those who create the foundational models , which means that they are the ones who originally trained the model based upon a certain data set.
There are those who bring that technology to life with the underlying processors and the software. That's someone like Qualcomm. We actually do that. And then there are those who bring the commercial products to the market. So this would be like a smartphone OEM who brings this to the market. So there are several roles that the three of us actually play to make sure that we have the right guardrails. But in academia , there's work on , okay , what kind of tests and qualifications can you put in place so that you make sure that you address any of the concerns on bias and toxicity , for instance ? I mean , there are guardrails. Within academia there are actually already studies which indicate , okay , these are the following tests that can be done. We've also seen in some regions , for instance , that there are additional tests that are usually there. So before you bring a device to the market , you have to go through a certain series of tests , and that has additional spot checks on top of what you yourself might be doing. So in that sense , this is a rapidly emerging field where the right set of guardrails are also being set up. Will we get everything picture perfect on day one ? It's hard to say. But at the same time , there's tons of work that's going on in this space. At the end of the day , a lot of us are quite responsible about what we try to bring to the table here with AI. And so our trust is in our own engineering capabilities and the academic capabilities that we have on putting the right set of guardrails in place.

S1: I've been speaking with Durga Malladi. He is the head of AI for Qualcomm , along with professor of philosophy Darby Vickers , who teaches about ethics and AI at the University of San Diego. Thank you both so much for this discussion. Thanks.

S3: Thanks. Thanks for having us.

S1: Coming up , how advancements in artificial intelligence are helping to save lives and prevent wildfires.

S4: And so what's nice is the fact that it can look at the cameras all the time , 24/7. That really provides us with that security throughout the day , even through the night.

S1: You're listening to KPBS Midday Edition. Welcome back to KPBS Midday Edition. I'm Jade Hindman. We continue our discussion on the subject of artificial intelligence by turning our attention to a new tool being used to find and stop the spread of wildfires. Few natural disasters have become as dangerous for the San Diego region as wildfires. Recent years have produced some of the largest and most destructive ever. And with our changing climate , the dangers of wildfires show no sign of slowing down. Alert California , though , is now using artificial intelligence to identify potential wildfires , and it's been receiving national recognition. The tool , a collaboration between UC San Diego and Cal Fire , was recently voted one of the top inventions of 2023 by Time magazine. The AI technology was released this September. It monitors and analyzes data from over 1,000 video cameras placed across California. Here to tell us more about the tool and the impact it's having on wildfire spotting , I'm joined by the director of Alert California , Neal Driscoll. He is also a professor of geology with the Scripps Institution of Oceanography. Neal , welcome. Thanks for.

S5: Having me.

S1: Also joining us today is Suzanne Leininger , intelligence specialist with the San Diego unit of Cal Fire. Suzanne , welcome.

S4: Thank you for having me.

S1: Glad to have you both. So , Neal , congratulations on having Alert California make the list of Time magazine's top inventions.

S5: So machine learning is only as good as the data quality and data amount that you feed it. So we're constantly training it so it would be able to better discern smoke columns from the marine layer , or a dust devil , or a farmer kicking up dust as he's plowing over his fields. So here the breakthrough came because we're using our own cameras. We're able to spin them every two minutes. We're able to take six frames , and the AI can say , something has changed in this frame , you should look at it. So what it does is it alerts people in the ECC that something has changed , and this is a camera you should look at. So it removes noise. It allows focus. It reduces watch fatigue. But the main thing was , we always talk about this public-private partnership. Our industry partner on this is DigitalPath. And it was a great interaction between the two of us to get to where we are now , where we could start testing it and get independent feedback from Cal Fire. So I think the real strength of this platform is that we trained it on excellent quality data , 70 million images. The cameras were moved over 7,000 times during this interval. And then we have the vetting by Cal Fire , the subject matter experts , who can feed back to the AI and say , no , that isn't fire , or yes , that is fire. So this partnership is really exciting , and I think others are recognizing the power of this approach and employing it also.

S1: And break that down for me. You've mentioned machine learning and training.

S5: And then the artificial intelligence is applying it to new data , looking at it , looking at all of the records it has , all of the information that's been fed to it. And it says , I believe that this is a fire cloud and it's a threat , an ignition , we're looking at smoke. And then the beauty of this system is that we can spin the cameras. We can look at that , we can focus in. And our subject matter experts can say to the AI , and it's binary , they can say , yes , that is smoke , or no , it's not. And then what's really important is the camera system with the AI , with the anomalies , takes everything together , and it allows the dispatcher to scale the response up or down based on what they see. So I think more important than beating 911 calls , and we have a number of records where we have , sometimes we've actually suppressed the fire and they're back in the firehouse with no 911 calls. But it's not that , I think , that is the crucial step forward. It's the situational awareness , actionable real-time data. So they're able to confirm that , yes , this is smoke , an ignition. But what does it look like ? How fast is it spreading ? What is the color of the smoke ? Is it bent over ? So all of a sudden , all of that information that might have taken 20 minutes to an hour , to get a battalion out there and eyes on the fire , now can be done within seconds. Wow.

S1: Wow. And , Suzanne , you know , one of the groups involved with Alert California is Cal Fire.

S4: And for example , the Highland Fire. When that started , we could actually see that the smoke was really low to the ground , and that the wind was on it , it had a lot of wind , and it was blowing it to the west. Today we had a fire that was much smaller , and it got more of a straight-up column on it , and we could tell that there wasn't a lot of wind on it and it wasn't spreading very fast. So it can give us information , and with fire , time is everything. The sooner we can get that information , the better our response can be.

S1: And this works at night too , right ? 100%.

S4: And what's nice about this is that it's basically 24/7 patrolling , every camera , all the time. And that's not something that's really possible for humans to do , or at least not very easily and inexpensively. And so what's nice is the fact that it can look at the cameras all the time , 24/7. That really provides us with that security throughout the day , you know , even through the night.

S5: I'd like to just say that this system is the holy grail of interoperability. And the reason I say that is , when we have fires up north and we move some of our people , our brave firefighters , up there to augment the effort , they don't have to learn a new platform. They don't even have to learn where the cameras are or the camera names. The AI identifies that this is potential smoke , please check it out. But it also says , based on my digital elevation model , these are the seven cameras that can actually see that smoke without obstruction of view. That is so important , because when you move to a new area , you're not going to immediately be familiar with all the camera names. We have 1,060 cameras , but the AI says , pick this one , pick this one. These are the seven that can potentially see the fire. And we can estimate along the line of sight of the camera where the fire is. But with all of these cameras , now we can triangulate , and the location of the fire is much better defined. So the AI is not just doing the initial detection that this might be a smoke column. It's also providing the user the ease of verifying that with the cameras at hand.

S1: Can you both talk about the impact this technology has had in the field ? I mean , has the time to find potential fires gone down as a result ? 100%.

S4: And what's nice about this is , before , if the smoke was small , we would usually wait for the 911 call , because we wouldn't get this kind of alert beforehand. And if the smoke was small at that time , it could take a while to actually find that smoke and find the right camera to locate that location. So the Cal Fire goal and mission is to keep all wildland fires below ten acres. So this is definitely in support of that , and it really can get us out there much faster.

S1: And Neal , this tool was rolled out this past September. But Alert California has been around for a lot longer than that.

S5: We started in 2000 with funding from the National Science Foundation , and HPWREN was the platform that we were using then. And for the Cedar Fire in 2003 , they were able to bring wireless connectivity out to the field , so firefighters in the ECC could share their knowledge and vision of what the fire was doing. And the Cedar Fire burnt over 270,000 acres. We just celebrated the 20th anniversary. I don't want to say celebrated. We acknowledged the hardship and devastation , the loss of life that happened 20 years ago. But I'll tell you this , the second generation was funded by our collaborators here , San Diego Gas and Electric , and they helped fund the new pan-tilt-zoom cameras with near infrared. And we got them in , and we had the Lilac Fire. And Chief Mecham said it changed his response. There was a fire on the border that he had been sending battalions to. He diverted them to the Lilac Fire , because he could see from our camera that's on Palomar that the smoke was bent over. It was racing through areas like Rancho Monserate and the downs there , the horses , the San Luis Rey Downs. And so all of a sudden , right out of the box in 2017 , these cameras had impact. And now we've grown to an all-hazard platform. So we work with Cal OES and Cal Fire to monitor areas of landslides , like the terrible Montecito slide that happened about five years ago and entombed 23 people overnight. So here we're a multi-hazard platform. We have developed interoperability. We work with Cal Chiefs. We work with Western chiefs. We work with the utilities. The thing I'd like to emphasize is we have developed the California Village , and Cal Fire has been instrumental in helping us move the needle. So thank you.

S1: You're listening to KPBS Midday Edition. I'm Jade Hindman, and I'm speaking with Neal Driscoll and Suzanne Leininger about Alert California, the wildfire-spotting technology that was recently named one of the top inventions of 2023 by Time magazine. Suzanne, to pick up where we just left off, California has experienced devastating fires in recent years. I think more than half of the largest wildfires in the state occurred in just the last five years or so.

S4: And with climate change, it continues to be an issue. We've had a lot of rain this past year, but that's very atypical, and I would not expect it to continue beyond this year. I think it's just always a risk, and we should all be aware of it. One of the things I have to say is that in San Diego County, I feel people are pretty hyper-aware, so any time there is smoke, we do get 911 calls. It's rather rare that we don't. But these cameras are here to help, and I can tell you that we just can't let our guard down, and we can use any help that we can get.

S1: And, Neal, I think one common fear about the rise of AI is that people may lose their jobs as a result.

S5: The AI is vetted by subject matter experts such as Cal Fire firefighters. So along that line, I think what it does is shift where we have operations and where we have resources. But there are so many other things going on in this extreme climate. You just look at Libya, or you look at some of these areas that get their whole rainfall budget in one storm, and these storms engender landslides. The McKinney Fire area was hit in August with rainfall, and sediment dispersal into the Klamath River caused anoxia and a huge fish kill. So here the AI and cameras are being used for all hazards. And the one hazard we haven't spoken about yet today, which I think is the largest hazard California is facing, is earthquakes. Tom Jordan from the Southern California Earthquake Center would tell you that for many of these faults, the open interval is longer than the recurrence interval, which means that the probability of an earthquake gets higher. And these earthquakes are modeled, in scenarios like the HayWired model, to spawn hundreds of fires. So having networks that let us triage and use data to drive decisions, I don't think that's going to take away from firefighters. I think it's going to add all new firefighters, with some being more on the technological, operational side, and some being on the resource side, mapping vegetation before these events. How do we manage these fires? The Mosquito Fire was a great example of the margins of forest management: tree die-off and mortality were lower in the regions where management was performed than where it wasn't. We're learning so much, and I think it's a really bright day for how we're going to move forward and use data to make better decisions, so that we can manage these fires without them getting to be megafires.

S1: And, Suzanne, to show how much of an impact this AI is having.

S4: And, you know, this tool for us is also a way to keep the firefighters safer. I can tell you, I spent some time out of the country on a fire that did not have this technology, and it was amazing how blind we were without a camera system. What they rely on there is really the 911 calls. To me, the fact that we can see this from the ECC means the duty chief can see what's going on, and the resources heading out to the fire can get a really good idea of what's going on before they even get there. The primary focus of that is the safety of the firefighters and the first responders who get out there. So this is a win in that situation, and I don't think anybody on the fire side feels threatened by this. I think they welcome it as a tool for their safety.

S1: And, you know, as we've talked about, Alert California has more than a thousand cameras spread across California.

S5: I've worked closely with them in trying to set up a platform that will give them the capabilities we have here in California. The EU is interested, Spain is interested, Canada is interested. So here I think the combination of Cal Fire with the University of California San Diego made that quantum leap. We really moved much faster than I had thought; I never thought we would be at this position so quickly when I started this process as the founder of Alert California. And I'm proud that we've all listened to each other, been respectful, and learned each other's skill sets so that we can communicate, because the communication has to be there. And I think the point Suzanne brought up is crucial. I hadn't heard that story, Suzanne, that you were in a situation where the technology exists but wasn't employed. I've watched firefighters hold the line right here in Rancho Bernardo during a fire. A tough job. I don't know how they do it. They walk into fire.

S1: Suzanne, anything you want to add to that?

S4: This camera system gives us the ability to see what they're going towards. You know, most people run away from fire; firefighters run toward it. And if we can give them a better picture of what they're walking towards, it just really helps everybody involved.

S1: Well, we look forward to seeing this technology save lives. I've been speaking with Neal Driscoll, director of Alert California, along with Suzanne Leininger, intelligence specialist for the San Diego unit of Cal Fire. Thank you both for joining us today.

S4: Thank you so much. It's been a real pleasure.

S5: Thanks for having us.

S1: In what ways is artificial intelligence impacting your life? Give us a call at (619) 452-0228. You can leave a message, or you can email us at midday at. We'd love to share your ideas and experiences here on the show. Don't forget to watch Evening Edition tonight at five for in-depth reporting on San Diego issues. We'll be back tomorrow at noon, and if you ever miss a show, you can find the Midday Edition podcast on all platforms. I'm Jade Hindman. Thanks for listening.

The OpenAI logo is seen on a mobile phone in front of a computer screen which displays output from ChatGPT, Tuesday, March 21, 2023, in Boston.
Michael Dwyer

ChatGPT, a generative AI tool, has revolutionized the tech industry, enabling human-like conversations and expanding the limits of AI. Largely developed by California-based companies, recent advances in AI offer new possibilities for life and work. However, concerns about AI's potential power persist as tech companies continue to expand its capabilities.

Plus, a new wildfire detection tool from UC San Diego and Cal Fire is using AI to spot California wildfires before they spread.