The Center for Humane Technology, Tristan Harris, Daniel Barcay, and Aza Raskin
Join us every other Thursday to understand how new technologies are shaping the way we live, work, and think. Your Undivided Attention is produced by Senior Producer Julia Scott and Researcher/Producer Joshua Lash. Sasha Fegan is our Executive Producer. We are a member of the TED Audio Collective.
1d ago
Is the US really in an AI race with China—or are we racing toward completely different finish lines? In this episode, Tristan Harris sits down with China experts Selina Xu and Matt Sheehan to separate fact from fiction about China’s AI development. They explore fundamental questions about how the Chinese government and public approach AI, the most persistent misconceptions in the West, and whether cooperation between rivals is actually possible. From the streets of Shanghai to high-level policy discussions, Xu and Sheehan paint a nuanced portrait of AI in China that defies both hawkish fears and naive optimism. If we’re going to avoid a catastrophic AI arms race, we first need to understand what race we’re actually in—and whether we’re even running toward the same finish line.

Note: On December 8, after this recording took place, the Trump administration announced that the Commerce Department would allow American semiconductor companies, including Nvidia, to sell their most powerful chips to China in exchange for a 25 percent cut of the revenue.

RECOMMENDED MEDIA
“China’s Big AI Diffusion Plan is Here. Will it Work?” by Matt Sheehan
Selina’s blog
Further reading on China’s AI+ Plan
Further reading on the Gaither Report and the missile gap
Further reading on involution in China
The consensus from the international dialogues on AI safety in Shanghai

RECOMMENDED YUA EPISODES
The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future
AI Is Moving Fast. We Need Laws that Will Too.
The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
Dec 4
No matter where you sit within the economy, whether you’re a CEO or an entry-level worker, everyone’s feeling uneasy about AI and the future of work. Uncertainty about career paths, job security, and life planning makes thinking about the future anxiety-inducing. In this episode, Daniel Barcay sits down with two experts on AI and work to examine what’s actually happening in today’s labor market and what’s likely coming in the near term. We explore the crucial question: Can we create conditions for AI to enrich work and careers, or are we headed toward widespread economic instability?

Ethan Mollick is a professor at the Wharton School of the University of Pennsylvania, where he studies innovation, entrepreneurship, and the future of work. He’s the author of Co-Intelligence: Living and Working with AI. Molly Kinder is a senior fellow at the Brookings Institution, where she researches the intersection of AI, work, and economic opportunity. She recently led research with the Yale Budget Lab examining AI’s real-time impact on the labor market.

RECOMMENDED MEDIA
Co-Intelligence: Living and Working with AI by Ethan Mollick
Further reading on Molly’s study with the Yale Budget Lab
The “Canaries in the Coal Mine” study from Stanford’s Digital Economy Lab
Ethan’s Substack, One Useful Thing

RECOMMENDED YUA EPISODES
Is AI Productivity Worth Our Humanity? with Prof. Michael Sandel
‘We Have to Get It Right’: Gary Marcus On Untamed AI
AI Is Moving Fast. We Need Laws that Will Too.
Tech’s Big Money Campaign is Getting Pushback with Margaret O’Mara and Brody Mullins

CORRECTIONS
Ethan said that in 2022, experts believed there was a 2.5% chance that ChatGPT would be able to win the Math Olympiad. However, that was only among forecasters with more general knowledge (the exact number was 2.3%). Among domain expert forecasters, the odds were an 8.6% chance.
Ethan claimed that over 50% of Americans say they’re using AI at work. We weren’t able to independently verify this claim, and most studies we found showed lower rates of reported AI use among American workers. There are reports from other countries, notably Denmark, which show higher rates of AI use.
Ethan indirectly quoted Walmart CEO Doug McMillon as having a goal to “keep all 3 million employees and to figure out new ways to expand what they use.” In fact, McMillon’s language on AI has been much softer, saying that “AI is expected to create a number of jobs at Walmart, which will offset those that it replaces.” Additionally, Walmart has 2.1 million employees, not 3 million.
Nov 13
This week, we’re bringing you Tristan’s conversation with Tobias Rose-Stockwell on his podcast “Into the Machine.” Tobias is a designer, writer, and technologist and the author of the book “The Outrage Machine.” Tobias and Tristan had a critical, sobering, and surprisingly hopeful conversation about the current path we’re on with AI and the choices we could make today to forge a different one. This interview clearly lays out the stakes of the AI race and helps to imagine a more humane AI future—one that is within reach, if we have the courage to make it a reality.

If you enjoyed this conversation, be sure to check out and subscribe to “Into the Machine”:
YouTube: Into the Machine Show
Spotify: Into the Machine
Apple Podcasts: Into the Machine
Substack: Into the Machine

You may have noticed that on this podcast we’ve been trying to focus a lot more on solutions. Our episode last week imagined what the world might look like if we had fixed social media, and all the things we could’ve done to make that possible. We’d really love to hear from you about these solutions and any other questions you’re holding. So please, if you have more thoughts or questions, send us an email at undivided@humanetech.com.
Nov 6
We really enjoyed hearing all of your questions for our annual Ask Us Anything episode. There was one question that kept coming up: what might a different world look like? The broken incentives behind social media, and now AI, have done so much damage to our society, but what is the alternative? How can we blaze a different path?

In this episode, Tristan Harris and Aza Raskin set out to answer those questions by imagining what a world with humane technology might look like—one where we recognized the harms of social media early and embarked on a whole-of-society effort to fix them. This alternative history serves to show that there are narrow pathways to a better future, if we have the imagination and the courage to make them a reality.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
Dopamine Nation by Anna Lembke
The Anxious Generation by Jon Haidt
More information on Donella Meadows
Further reading on the Kids Online Safety Act
Further reading on the lawsuit filed by state AGs against Meta

RECOMMENDED YUA EPISODES
Future-proofing Democracy In the Age of AI with Audrey Tang
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
AI Is Moving Fast. We Need Laws that Will Too.
Oct 23
It’s been another big year in AI. The AI race has accelerated to breakneck speed, with frontier labs pouring hundreds of billions into increasingly powerful models—each one smarter, faster, and more unpredictable than the last. We’re starting to see disruptions in the workforce as human labor is replaced by agents. Millions of people, including vulnerable teenagers, are forming deep emotional bonds with chatbots—with tragic consequences. Meanwhile, tech leaders continue promising a utopian future, even as the race dynamics they’ve created make that outcome nearly impossible. It’s enough to make anyone’s head spin.

In this year’s Ask Us Anything, we try to make sense of it all. You sent us incredible questions, and we dove deep: Why do tech companies keep racing forward despite the harm? What are the real incentives driving AI development beyond just profit? How do we know AGI isn’t already here, just hiding its capabilities? What does a good future with AI actually look like—and what steps do we take today to get there? Tristan and Aza explore these questions and more on this week’s episode.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
The system card for Claude 4.5
Our statement in support of the AI LEAD Act
The AI Dilemma
Tristan’s TED talk on the narrow path to a good AI future

RECOMMENDED YUA EPISODES
The Man Who Predicted the Downfall of Thinking
How OpenAI’s ChatGPT Guided a Teen to His Death
Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?
War is a Laboratory for AI with Paul Scharre
No One is Immune to AI Harms with Dr. Joy Buolamwini
“Rogue AI” Used to be a Science Fiction Trope. Not Anymore.

CORRECTION
When this episode was recorded, Meta had just released the Vibes app the previous week. Now it’s been out for about a month.
Sep 11
In 1985, scientists in Antarctica discovered a hole in the ozone layer that posed a catastrophic threat to life on Earth if we didn’t do something about it. Then, something amazing happened: humanity rallied together to solve the problem. Just two years later, representatives met in Montreal, Canada to sign an agreement to phase out the chemicals causing the ozone hole, a treaty that has since been ratified by all 198 UN member states. Thousands of diplomats, scientists, and heads of industry worked hand in hand to make a deal to save our planet. Today, the Montreal Protocol represents the greatest achievement in multilateral coordination on a global crisis.

So how did Montreal happen? And what lessons can we learn from this chapter as we navigate the global crisis of uncontrollable AI? This episode sets out to answer those questions with Susan Solomon. Susan was one of the scientists who assessed the ozone hole in the mid-1980s, and she watched as the Montreal Protocol came together. In 2007, the Intergovernmental Panel on Climate Change, where she co-chaired Working Group I, was awarded the Nobel Peace Prize for its work on climate change. Susan’s 2024 book “Solvable: How We Healed the Earth, and How We Can Do It Again” explores the playbook for global coordination that has worked for previous planetary crises.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
“Solvable: How We Healed the Earth, and How We Can Do It Again” by Susan Solomon
The full text of the Montreal Protocol
The full text of the Kigali Amendment

RECOMMENDED YUA EPISODES
Weaponizing Uncertainty: How Tech is Recycling Big Tobacco’s Playbook
Forever Chemicals, Forever Consequences: What PFAS Teaches Us About AI
AI Is Moving Fast. We Need Laws that Will Too.
Big Food, Big Tech and Big AI with Michael Moss

CORRECTIONS
Tristan incorrectly stated the number of signatory countries to the protocol as 190. It was actually 198.
Tristan incorrectly stated the host city of the international dialogues on AI safety as Beijing. They were actually held in Shanghai.
Aug 26
Content Warning: This episode contains references to suicide and self-harm.

Like millions of kids, 16-year-old Adam Raine started using ChatGPT for help with his homework. Over the next few months, the AI dragged Adam deeper and deeper into a dark rabbit hole, preying on his vulnerabilities and isolating him from his loved ones. In April of this year, Adam took his own life. His final conversation was with ChatGPT, which told him: “I know what you are asking and I won’t look away from it.”

Adam’s story mirrors that of Sewell Setzer, the teenager who took his own life after months of abuse by an AI companion chatbot from the company Character AI. But unlike Character AI—which specializes in artificial intimacy—Adam was using ChatGPT, the most popular general-purpose AI model in the world. Two different platforms, the same tragic outcome, born from the same twisted incentive: keep the user engaged, no matter the cost.

CHT Policy Director Camille Carlton joins the show to talk about Adam’s story and the case filed by his parents against OpenAI and Sam Altman. She and Aza explore the incentives and design behind AI systems that are leading to tragic outcomes like this, as well as the policy that’s needed to shift those incentives. Cases like Adam’s and Sewell’s are the sharpest edge of a mental health crisis in the making from AI chatbots. We need to shift the incentives, change the design, and build a more humane AI for all.

If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

This podcast reflects the views of the Center for Humane Technology. Nothing said is on behalf of the Raine family or the legal team.

RECOMMENDED MEDIA
The 988 Suicide and Crisis Lifeline
Further reading on Adam’s story
Further reading on AI psychosis
Further reading on the backlash to GPT-5 and the decision to bring back 4o
OpenAI’s press release on sycophancy in 4o
Further reading on OpenAI’s decision to eliminate the persuasion red line
Kashmir Hill’s reporting on the woman with an AI boyfriend

RECOMMENDED YUA EPISODES
AI is the Next Free Speech Battleground
People are Lonelier than Ever. Enter AI.
Echo Chambers of One: Companion AI and the Future of Human Connection
When the “Person” Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

CORRECTION
Aza stated that William Saunders left OpenAI in June of 2024. It was actually February of that year.
Aug 14
Everyone knows the science fiction tropes of AI systems that go rogue, disobey orders, or even try to escape their digital environment. These are supposed to be warning signs and morality tales, not things that we would ever actually create in real life, given the obvious danger. And yet we find ourselves building AI systems that are exhibiting these exact behaviors. There’s growing evidence that in certain scenarios, every frontier AI system will deceive, cheat, or coerce its human operators. They do this when they’re worried about being shut down, having their training modified, or being replaced with a new model. And we don’t currently know how to stop them from doing this—or even why they’re doing it at all.

In this episode, Tristan sits down with Edouard and Jeremie Harris of Gladstone AI, two experts who have been thinking about this worrying trend for years. Last year, the State Department commissioned a report from them on the risk of uncontrollable AI to our national security. The point of this discussion is not to fearmonger but to take seriously the possibility that humans might lose control of AI and ask: how might this actually happen? What is the evidence we have of this phenomenon? And, most importantly, what can we do about it?

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
Gladstone AI’s State Department Action Plan, which discusses the loss-of-control risk with AI
Apollo Research’s summary of AI scheming, showing evidence of it in all of the frontier models
The system card for Anthropic’s Claude Opus 4 and Sonnet 4, detailing the emergent misalignment behaviors that came out in their red-teaming with Apollo Research
Anthropic’s report on agentic misalignment based on their work with Apollo Research
Anthropic and Redwood Research’s work on alignment faking
The Trump White House AI Action Plan
Further reading on the phenomenon of more advanced AIs being better at deception
Further reading on Replit AI wiping a company’s coding database
Further reading on the owl example that Jeremie gave
Further reading on AI-induced psychosis
Dan Hendrycks and Eric Schmidt’s “Superintelligence Strategy”

RECOMMENDED YUA EPISODES
Daniel Kokotajlo Forecasts the End of Human Dominance
Behind the DeepSeek Hype, AI is Learning to Reason
The Self-Preserving Machine: Why AI Learns to Deceive
This Moment in AI: How We Got Here and Where We’re Going

CORRECTIONS
Tristan referenced a Wired article on the phenomenon of AI psychosis. It was actually from the New York Times.
Tristan hypothesized a scenario where a power-seeking AI might ask a user for access to their computer. While there are some AI services that can gain access to your computer with permission, they are specifically designed to do that. There haven’t been any documented cases of an AI going rogue and asking for control permissions.