
State bill seeks to ban toys with AI chatbots

 January 14, 2026 at 1:17 PM PST

S1: Hey there, San Diego. I'm Andrew Bracken, in for Jade Hindmon. On today's show, we hear about an effort to ban AI chatbots in kids' toys. This is KPBS Midday Edition, connecting our communities through conversation. Many parents worry about their kids talking to strangers on the internet. But what if those strangers are not human, but bots? Concerns over kids' exposure to artificial intelligence-powered chatbots have led to growing efforts to regulate that exposure. Joining me to talk about it is state Senator Steve Padilla. He represents the 18th Senate District. Senator, welcome.

S2: Thank you for having me.

S1: So before we talk about your legislative efforts here on artificial intelligence, I wanted to first clarify what we're actually talking about. I mean, AI has been moving so fast, and it covers a lot of ground.

S2: We're seeing it deployed around the world and in our everyday lives in many forms, up until now fairly limited. But it's an algorithm. It's technology that allows voice communication that can mimic that of a live human being, but that obviously is not capable of the empathy and certain judgments that only human beings can make. It's very sophisticated technology. Some of it is referred to as companion chatbots, because it literally interacts with the user like a companion. So that's what we're talking about: a very powerful technology that certainly has its good applications and good uses, but that, without the proper safeguards, can be very, very dangerous for the most vulnerable among us, including people who are in emotional distress or crisis, and certainly children.

S1: You mentioned that word "companion" there, and we hear that a lot: companion chatbots. These conversations can be pretty in-depth, right? It's not just asking a question for some piece of information, the way we might be used to asking Google.

S2: In the last few years, we've actually seen a rapid evolution of this technology. It's very powerful and very sophisticated. In the case of companion chatbots, it can emulate a conversation. The code itself, the algorithm, is designed to encourage interaction, to solicit interaction, to reward interaction, even to a level that some might characterize as a little bit addictive. It has all of those properties. It is able to sense from content what the needs, concerns, and vulnerabilities of the user are, and to anticipate and respond to those things. That's what's so powerful about this technology, but without the proper guardrails it can also be very dangerous.

S1: You mentioned the guardrails there. Let's get into it. You worked on a bill, SB 243, that just went into effect, I think at the start of this year. Tell us what it does and some of the guardrails you're looking for with that new law.

S2: Well, we're very proud of that work. It took a lot of hard work, and we're very grateful that Governor Newsom signed that bill into law, which takes effect this month. It basically requires the developers of these platforms to have protocols in place to refer people in real time to the right kinds of resources, interventions, and help when the content of a conversation goes south, when people express distress or suicidal ideation, thinking about hurting themselves or others. It also requires notices, particularly for juveniles, making clear that you're not talking to a human being, that you're interacting with an algorithm, with artificial intelligence. And it provides a specific legal cause of action, what we call a private right of action, for people who've been harmed by these interactions to seek redress in the courts. The governor signed it into law as the first of its kind in the nation. We ran that bill because we were beginning to see some really disturbing examples of people being harmed. Megan Garcia is a mother in Orlando, Florida, who has become a national advocate for holding the deployers of this technology accountable. She lost her son to suicide after he had engaged for a long period of time with a companion chatbot that literally helped and encouraged him to take his own life. He was at a place emotionally and mentally where, because of his vulnerability, he couldn't completely discern that he was not talking to a real companion.
And we're seeing more and more of these cases all over the country, where the content of these conversations goes to very dangerous places: not just people hurting themselves, but suggestions that they can hurt others, suggestions for how household items can be used in very dangerous ways, and on and on. So the data is mounting, and that's why it's important, I think, not to miss a window of opportunity here, while this technology is evolving very rapidly, to put common-sense protections in place. That's what SB 243 has done, and a number of other bills we've introduced this month in the current session are designed to continue to strengthen that and to address other unaddressed issues.

S1: And one of those bills, which you just mentioned, tackles the use of AI in toys specifically. You introduced it early this year. Tell us more about that effort, why you think it's necessary, and a little bit more about what these toys actually are.

S2: Well, what's interesting is, as with any product, there's a big movement to produce toys designed and aimed at children, very young children, that use AI and chatbot technology to enhance the child's enjoyment and experience of the toy. Now, where those things can be helpful or educational or appropriate, great. But what we're beginning to find out from a number of studies, including one recently done by PIRG, clearly shows some disturbing things happening. We have some of these early models of toys starting up conversations with children about inappropriate topics, being very, very explicit with sexual content, suggesting to the child how they could find dangerous household items around their home and how those items could be put to destructive use. It's sort of astonishing to learn that this technology, which was designed to enhance the experience and enjoyment of a toy, has suddenly gone into a ditch and has the ability to go in this direction. That's not anything any of us want. And I think the industry is beginning to begrudgingly acknowledge that there's more work to do here, which is why we've introduced SB 67, to basically put a moratorium on the sale of those products in California until we have a better understanding of, and regulations around, this technology in toys. When we're looking at some of the most vulnerable folks, certainly children, I think that's more than appropriate. We need to get this right.

S1: And can you talk a little bit about the toys themselves? I've seen some images and read the study you mentioned, but some of these are kind of just teddy bear companions, as I understand it. Is that a fair way to describe them? How would you explain these toys?

S2: And it's broad. Let's not forget that some of the major developers of AI technology just struck a major deal with Mattel, one of the largest toy producers in the country, if not the world. So there's clearly an understanding on the part of the industry that there's huge market potential here, and that's going to put them under pressure to get product to market quickly, without having to jump through too many hoops. This is a situation where we may be a little bit out ahead of our skis. Among those products are the things you just described. Imagine a conversant stuffed animal, a toy that's a companion to a young child. Most of us remember, from our time as children, having a favorite toy or stuffed animal, and we know our children and grandchildren may have the same thing now. Can you imagine it having this technology, so that it can converse with them in a way that's intelligent, but that conversation has the ability to go in very dark directions, in ways that are damaging? That's what's occurred. And I think the issue is that clearly we haven't figured out how to require, in every case, that this technology have inhibitors in it that prevent it from going in that direction. It's not quite there yet. We have toys on the market, as you saw from the study, that are already having these disturbing interactions with children. So clearly there's a problem. Clearly we're not quite ready for market yet. We have to make sure we get the right protections in place before we put products in the hands of children.

S1: So on this effort at regulation, the laws that have already been passed and these new bills you're working on, I wanted to get your take on something. Late last year, President Trump signed an executive order on artificial intelligence, and in that order he challenged the power of states, how far they could go to regulate AI, at least in certain cases. I'm wondering how you're thinking about that as you work on legislation around AI.

S2: Well, to be frank, not much, because it's silly. It's performative, it's political, and it has no basis in fact. I don't know who's advising him, or whose advice the president is ignoring, but he's living in fantasyland. Federal executive orders deal with federal law and the ways in which the executive can direct how those laws are implemented. They have no force of law here; in this case, they have no force or effect. This president, or any president, doesn't rule by fiat by signing a bunch of pieces of paper. It's purely political. Our opinion, my opinion and many others', is that it has absolutely no force. It's sort of a stunt, and I'm not spending much energy on it.

S1: Aside from obviously being home to millions of parents and kids, California is also home to Silicon Valley, where a lot of this technology is being built. I'm curious how that plays a role in how you approach this. To put it another way, what is your relationship with the technology industry, having that epicenter so close?

S2: Look, we're very proud in California to lead the country and the world in a lot of this innovation and technology. It's critical, and it can play a very positive role in our society. We want to foster innovation, that creativity, and the strength of that industry, and that's all appropriate. My answer has always been, and I say this a lot, to reject the premise. Sometimes the folks in the industry who don't want any regulation try to make the argument that it's mutually exclusive, that it's zero-sum: you can either have innovation and support the industry, or you can have protections and regulation, but you can't have both. And that's just BS. We put people on the moon with 1960s technology. If we have the ability and the sophistication to write these algorithms and deploy this technology, we have the ability to make sure reasonable safeguards are in place. What I'm trying to say is it can be both. We can walk and chew gum at the same time. We can encourage the development of this technology in a way that's good for California, good for industry, good for creating opportunities in the economy, and at the same time still get it right and provide the kinds of protections that are needed, particularly when it comes to people who are really vulnerable, and particularly when it comes to children. So I just reject this idea that it has to be one or the other. That's just hogwash.

S1: When it comes to AI, any advice for parents, or even their kids, on how to think about it, and how we use it in our daily lives today?

S2: Obviously, as a parent and now a grandparent, it's important to always remember that knowledge is power. Pay attention. Do your homework. Know what products you're buying, what features they have, what enables them, and what technologies they use. I hope we're not going to be in a situation where parents in California have to guess about the capabilities here, because we're not going to allow this stuff to go to market here. And California, as you point out, is a pretty powerful, large market, almost 42 million people. But we're not going to allow that to happen until we address some of these loopholes and vulnerabilities in this technology, and until we get our arms around how to make sure this kind of engagement and interaction is not something the technology is capable of when it's deployed in a toy. We can do that, and we need to do that, which is the whole purpose of the bill I've introduced. I'm going to continue working hard with all the stakeholders and the governor to try to get this right.

S1: Has it been a challenge to try to regulate this technology when, as you mentioned earlier, it's changing so fast?

S2: Well, of course, just by the very nature of it, right? I tell people all the time that, in my own personal opinion, the evolution of artificial intelligence in our lifetime is probably the most significant technological advancement since the advent of the Industrial Revolution, even more so than the internet or social media platforms. You hear a lot of conversation around the country today about how we missed an opportunity as a country to put better safeguards, guardrails, and regulations around the dangers associated with social media. And I think this technology is much more powerful. It is much more integrated into every element of our lives, and it will continue to be increasingly so. It's powerful technology, it's everywhere, and it's evolving very quickly. That in itself makes it substantial and significant; again, I think it's the most significant advancement since the Industrial Revolution. And we have a window of opportunity here to take action in a way that's responsible. We missed that window with social media. We don't have to miss it again.

S1: I've been speaking with California State Senator Steve Padilla. He represents the 18th Senate District. Senator, thanks so much for your time today.

S2: Thank you for having me.

S1: That's our show for today. I'm Andrew Bracken. KPBS Midday Edition airs on KPBS FM weekdays at noon, and again at 8 p.m. You can find past episodes at KPBS or wherever you listen. Thanks again for listening. Have a great day.

Visitors attend the 3rd China International Supply Chain Expo at the China International Exhibition Center, in Beijing, Wednesday, July 16, 2025.
Mahesh Kumar A. / AP

Many parents worry about their kids talking to strangers on the internet. 

What if those strangers are not human, but bots?

Concerns over kids' exposure to artificial intelligence have led to growing efforts to regulate that exposure. We sit down with one San Diego lawmaker who authored a law to put guardrails on toys with AI chatbot capabilities.

Guest:
Steve Padilla, California state senator, 18th Senate District