Refresher Course: How should we govern AI?

Paul Carpenter describes AI software during an interview in New Orleans, Friday, Feb. 23, 2024. Carpenter says he was hired in January to use the software to imitate President Joe Biden's voice and convince New Hampshire Democratic voters not to vote in the state's presidential primary.
Matthew Hinton/AP
As artificial intelligence tools become more commonplace, are U.S. laws catching up with the technology?

Every other Tuesday, the team behind Civics 101 joins NHPR’s All Things Considered host Julia Furukawa to talk about how our democratic institutions actually work.

Artificial intelligence tools are used almost everywhere, from social media to police precincts and courts. Civics 101 host Hannah McCarthy and Julia talk about how AI is governed today and whether current laws are enough to protect our rights and privacy.

You can listen to Civics 101 here, or wherever you get your podcasts.



Transcript

When we talk about AI, when we talk about the algorithm, machine learning, what are we actually talking about?

We are talking about a whole swath of technology. We're going to call it AI for the purposes of this conversation, but machine learning is the term typically applied to the type of tech that city and state governments are using.

Most people get their day-to-day dose of AI via social media and search engines like Google. An algorithm takes existing, available data about you and uses it to display content you might be interested in. It feeds you ads based on what you've searched for in the past. It is, very simply, a prediction tool. And that same type of technology is also being used by government offices to do a wide range of work for city, state and country populations.
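To make the "prediction tool" idea concrete, here is a minimal sketch of that kind of relevance scoring, assuming a toy keyword-overlap model. All of the names and data in it are hypothetical, and real recommendation systems learn from far richer behavioral signals, but the shape is the same: past behavior in, ranked guesses out.

```python
# A toy sketch of the "prediction tool" described above: rank candidate
# ads by how much their keywords overlap with a user's past searches.
# All names and data here are hypothetical; real systems use learned
# models trained on far richer behavioral data.
from collections import Counter

def score_ad(past_searches, ad_keywords):
    """Count how often the ad's keywords appear in past search queries."""
    seen = Counter(word.lower() for query in past_searches for word in query.split())
    return sum(seen[kw.lower()] for kw in ad_keywords)

past = ["hiking boots review", "best hiking trails", "rain jacket sale"]
ads = {
    "outdoor-gear": ["hiking", "jacket", "boots"],
    "office-chairs": ["desk", "chair", "ergonomic"],
}

# The top-scoring ad is the one the algorithm "predicts" you want to see.
ranked = sorted(ads, key=lambda name: score_ad(past, ads[name]), reverse=True)
print(ranked)  # ['outdoor-gear', 'office-chairs']
```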

Are AI tools currently governed by our laws? 

So at the federal level, President Joe Biden signed an executive order designed to make AI safe, secure and trustworthy. And that same administration proposed a Blueprint for an AI Bill of Rights. Now, the executive order is enforceable; the [AI] Bill of Rights is not. It just suggests how AI might be governed going forward.

And while we have had dozens of House and Senate committee hearings on AI, laws that regulate AI are really slow to develop at the federal level. So in lieu of that, there are several states that have passed legislation to protect data privacy, prevent bias, increase transparency, all as their governments adopt AI technologies.

But the issue is that these technologies are developing at an incredible pace. The minute something new hits the market, it has to be questioned, often through court cases. For example, there was a tool that measured the performance of teachers in a public school system. One teacher who was assessed by this AI technology sued, arguing that it violated their rights. So it's really piecemeal.

So to answer your question: are there aspects of AI that our current laws can't govern? Plenty of them, because our current laws are often not specific enough to actually govern that AI.

How should U.S. courts and lawmakers think about governing AI tools? Is it possible for them?

It's definitely possible. This is about protecting rights and protecting privacy. That is something that we do in the United States. We have always developed laws as technology develops.

But again, the U.S. is looking to its states. It's looking to other nations. And most recently, the EU passed the AI Act to regulate machine learning technology. It outright bans all kinds of exploitative and intrusive tech: deepfakes, categorizing people based on certain biometric data and, I kid you not, Julia, emotion-inferring AI in the workplace and in education.

And then this same act defines high-risk environments for AI tech, including health care, law enforcement and criminal justice, and attempts to regulate how AI will be used there. So it's asking, as these technologies are developed, what's going to govern them when they come out; it's trying to get ahead of the curve. And I expect other nations, and especially the U.S., to watch the rollout of this act, see what kind of impact it has, and decide whether or not we're essentially going to copy it.

Michelle Liu is the All Things Considered producer at NHPR. She joined the station in 2022 after graduating from Northwestern University with a degree in journalism.
Julia Furukawa is the host of All Things Considered at NHPR. She joined the NHPR team in 2021 as a fellow producing ATC after working as a reporter and editor for The Paris News in Texas and a freelancer for KNKX Public Radio in Seattle.
