ETHICS IN AI: June 2023 podcast playlist
The rapidly changing landscape of artificial intelligence will lead to unknown outcomes for society. This is a time to be informed and engaged. Sofia, a member of our chapter in Valencia, Spain, has selected a few episodes that touch on the key ethical concerns raised by AI and how they might impact us.
Podcast Playlist on ETHICS IN AI
Get the full playlist on your podcast player of choice using these platforms:
This Month’s Podcast Playlist: Listen Notes | Podchaser | Podyssey | Spotify
Running List of PBC Podcast Playlists: Listen Notes | Podchaser | Podyssey | Spotify
Your Undivided Attention: “The AI Dilemma” (March 2023, 42 min)
You may have heard about the arrival of GPT-4, OpenAI’s latest large language model (LLM) release. GPT-4 surpasses its predecessor in reliability, creativity, and the ability to process intricate instructions. It can handle more nuanced prompts than previous releases and is multimodal, meaning it was trained on both images and text. We don’t yet fully understand its capabilities, yet it has already been deployed to the public.
At the Center for Humane Technology, we want to close the gap between what the world hears publicly about AI from splashy CEO presentations and what the people closest to the risks and harms inside AI labs are telling us. We translated their concerns into a cohesive story and presented the resulting slides to heads of institutions and major media organizations in New York, Washington DC, and San Francisco. The talk you’re about to hear is the culmination of that work, which is ongoing.
AI may help us achieve major advances like curing cancer or addressing climate change. But the point we’re making is: if our dystopia is bad enough, it won’t matter how good the utopia we want to create is. We only get one shot, and we need to move at the speed of getting it right.
Artificial Intelligence and You: “086 – Guest: Stuart Russell, AI professor, author, activist, part 1” (February 2022, 32 min)
Stuart Russell, professor of AI at UC Berkeley and author of both the standard textbook on AI and the 2019 book Human Compatible: Artificial Intelligence and the Problem of Control, is the guest on this episode. You may know him as the BBC’s 2021 Reith Lecturer on artificial intelligence. Queen Elizabeth knows him as a 2021 recipient of the Order of the British Empire. Stuart is a prominent voice in the public side of the AI risk conversation.
In Machines We Trust: “When AI watches the streets” (April 2023, 26 min)
The term ‘smart city’ paints a picture of a tech-enabled oasis, powered by sensors of all kinds. But we’re starting to recognize what all these tools might mean for privacy. In this episode, we meet a researcher studying how this technology is being applied in Iran and visit one of the nation’s top smart cities to learn how its efforts have evolved over time.
The Radical AI Podcast: “More than a Glitch, Technochauvinism, and Algorithmic Accountability with Meredith Broussard” (March 2023, 1 hr 4 min)
In this episode, Meredith Broussard discusses her influential new book, “More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech” – published by MIT Press.
Your Undivided Attention: “Synthetic Humanity: AI & What’s At Stake” (February 2023, 46 min)
It may seem like the rise of artificial intelligence, and of the increasingly powerful large language models you may have heard of, is moving really fast… and it IS. But what’s coming next is synthetic relationships with AI that could come to feel just as real and important as our human relationships, and perhaps even more so.
In this episode of Your Undivided Attention, Tristan and Aza reach beyond the moment to talk about this powerful new AI, and the new paradigm of humanity and computation we’re about to enter. This is a structural revolution that affects way more than text, art, or even Google search. There are huge benefits to humanity, and we’ll discuss some of those. But we also see that as companies race to develop the best synthetic relationships, we are setting ourselves up for a new generation of harms made exponentially worse by AI’s power to predict, mimic and persuade.
It’s obvious we need ways to steward these tools ethically. So Tristan and Aza also share their ideas for creating a framework for AIs that will help humans become MORE humane, not less.
Bonus podcast episodes:
- The Radical AI Podcast: “Data Privacy and Women’s Rights with Rebecca Finlay” (September 2022, 45 min)
What is the reality of data privacy after the overruling of Roe v. Wade? In this episode, we interview Rebecca Finlay about protecting user data privacy and human rights, following the US Supreme Court ruling in Dobbs v. Jackson Women’s Health Organization. Rebecca Finlay is the CEO of the non-profit Partnership on AI, overseeing the organization’s mission and strategy. In this role, Rebecca ensures that the Partnership on AI and their global community of Partners work together so that developments in AI advance positive outcomes for people and society.
- Data Science Ethics Podcast: “Encode Equity” (March 2022, 36 min)
Organizations have flocked to data science as a means of achieving unbiased results in decision-making on the premise that “the data doesn’t lie.” Yet, as data is reflective of the biases in our culture, in our history, and in our perspectives, it is particularly naïve to assume that models will somehow smooth everything out and provide equitable results. The truth is that it falls on the shoulders of everyone working on and with data to question whether it is likely to produce the intended, more equitable outcomes or if, instead, it may propagate a pattern of injustice that is endemic to the data itself based on representations of the past. Today, we talk with Renee Cummings, Data Activist in Residence at the University of Virginia and Founder of Urban AI, on the need to encode equity into data science and artificial intelligence.
- The Daily: “The Godfather of A.I. Has Some Regrets” (May 2023, 39 min)
As the world begins to experiment with the power of artificial intelligence, a debate has begun about how to contain its risks. One of the sharpest and most urgent warnings has come from a man who helped invent the technology. Cade Metz, a technology correspondent for The New York Times, speaks to Geoffrey Hinton, whom many consider to be the godfather of A.I.
Conversation Starter Questions:
- How do you feel about recent developments in technology?
- What do you see as the main issue with AI development?
- Where do you think AI efforts should be focused?
- Who should be responsible for managing the risks posed by rapid AI development?
- What do you think about ChatGPT?
- How can we regulate digital spaces?