4d ago
Renée DiResta, Lawfare contributing editor and associate research professor at Georgetown's McCourt School of Public Policy, and Alan Z. Rozenshtein, Lawfare senior editor and associate professor of law at the University of Minnesota, spoke with Jacob Mchangama, research professor of political science at Vanderbilt University and founder of The Future of Free Speech, and Jacob Shapiro, the John Foster Dulles Professor of International Affairs at Princeton University. The conversation covered the findings of a new report examining how AI models handle contested speech; comparative free speech regulations across six jurisdictions; empirical testing of how major chatbots respond to politically sensitive prompts; and the tension between free expression principles and concerns about manipulation in AI systems. Hosted on Acast. See acast.com/privacy for more information.
Dec 12
In this rapid response episode, Lawfare senior editors Alan Rozenshtein and Kevin Frazier and Lawfare Tarbell fellow Jakub Kraus discuss President Trump's new executive order on federal preemption of state AI laws, the politics of AI regulation and the split between Silicon Valley Republicans and MAGA populists, and the administration's decision to allow Nvidia to export H200 chips to China. Mentioned in this episode: Executive Order: Ensuring a National Policy Framework for Artificial Intelligence Charlie Bullock, "Legal Issues Raised by the Proposed Executive Order on AI Preemption," Institute for Law & AI Hosted on Acast. See acast.com/privacy for more information.
Dec 9
Graham Dufault, General Counsel at ACT | The App Association, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how small- and medium-sized enterprises (SMEs) are navigating the EU's AI regulatory framework. The duo break down the Association's recent survey of SMEs, which gathered the views of more than 1,000 enterprises on AI regulation and adoption. Follow Graham: @GDufault and ACT | The App Association: @actonline Hosted on Acast. See acast.com/privacy for more information.
Dec 2
Caleb Withers, a researcher at the Center for a New American Security, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss how frontier models shift the balance in favor of attackers in cyberspace. The two discuss the steps labs and governments can take to address these asymmetries, and the future of cyber warfare driven by AI agents. Jack Mitchell, a student fellow in the AI Innovation and Law Program at the University of Texas School of Law, provided excellent research assistance on this episode. Check out Caleb’s recent research here. Hosted on Acast. See acast.com/privacy for more information.
Nov 25
Andrew Prystai, CEO and co-founder of Vesta, and Thomas Bueler-Faudree, co-founder of August Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to think through AI policy from the startup perspective. Andrew and Thomas are the sorts of entrepreneurs that politicians on both sides of the aisle talk about at town halls and in press releases. They’re creating jobs and pushing the technological frontier. So what do they want AI policy leaders to know as lawmakers across the country weigh regulatory proposals? That’s the core question of the episode. Giddy up for a great chat! Learn more about the guests and their companies here: Andrew's LinkedIn, Vesta's LinkedIn, Thomas’s LinkedIn, August’s LinkedIn Hosted on Acast. See acast.com/privacy for more information.
Nov 18
Jeff Bleich, General Counsel at Anthropic, former Chief Legal Officer at Cruise, and former Ambassador to Australia during the Obama administration, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to get a sense of how the practice of law looks at the edge of the AI frontier. The two also review how Jeff’s prior work in the autonomous vehicle space prepared him for the challenges and opportunities posed by navigating legal uncertainties in AI governance. Hosted on Acast. See acast.com/privacy for more information.
Nov 11
Anton Korinek, a professor of economics at the University of Virginia and newly appointed economist to Anthropic's Economic Advisory Council, Nathan Goldschlag, Director of Research at the Economic Innovation Group, and Bharat Chandar, Economist at the Stanford Digital Economy Lab, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to sort through the myths, truths, and ambiguities that shape the important debate around the effects of AI on jobs. We discuss what happens when machines begin to outperform humans in virtually every computer-based task, how that transition might unfold, and what policy interventions could ensure broadly shared prosperity. These three are prolific researchers. Give them a follow to find their latest work. Anton: @akorinek on X; Nathan: @ngoldschlag and @InnovateEconomy on X; Bharat: @BharatKChandar on X, @bharatchandar on LinkedIn and Substack. Hosted on Acast. See acast.com/privacy for more information.
Nov 4
Gabriel Nicholas, a member of the Product Public Policy team at Anthropic, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to introduce the policy problems (and some solutions) posed by AI agents. Defined as AI tools capable of autonomously completing tasks on your behalf, AI agents are widely expected to soon become ubiquitous. The integration of AI agents into sensitive tasks presents a slew of technical, social, economic, and political questions. Gabriel walks through the weighty questions that labs are thinking through as AI agents finally become “a thing.” Hosted on Acast. See acast.com/privacy for more information.
Oct 28
Alan Rozenshtein, senior editor at Lawfare, spoke with Brett Goldstein, special advisor to the chancellor on national security and strategic initiatives at Vanderbilt University; Brett Benson, associate professor of political science at Vanderbilt University; and Renée DiResta, Lawfare contributing editor and associate research professor at Georgetown University's McCourt School of Public Policy. The conversation covered the evolution of influence operations from crude Russian troll farms to sophisticated AI systems using large language models; the discovery of GoLaxy documents revealing a "Smart Propaganda System" that collects millions of data points daily, builds psychological profiles, and generates resilient personas; operations targeting Hong Kong's 2020 protests and Taiwan's 2024 election; the fundamental challenges of measuring effectiveness; GoLaxy's ties to Chinese intelligence agencies; why detection has become harder as platform integrity teams have been rolled back and multi-stakeholder collaboration has broken down; and whether the United States can get ahead of this threat or will continue the reactive pattern that has characterized cybersecurity for decades. Mentioned in this episode: "The Era of A.I. Propaganda Has Arrived, and America Must Act" by Brett J. Goldstein and Brett V. Benson (New York Times, August 5, 2025) "China Turns to A.I. in Information Warfare" by Julian E. Barnes (New York Times, August 6, 2025) "The GoLaxy Papers: Inside China's AI Persona Army" by Dina Temple-Raston and Erika Gajda (The Record, September 19, 2025) "The supply of disinformation will soon be infinite" by Renée DiResta (The Atlantic, September 2020) Hosted on Acast. See acast.com/privacy for more information.
Oct 21
California State Senator Scott Wiener, author of Senate Bill 53, a frontier AI safety bill signed into law by Governor Newsom earlier this month, joins Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explain the significance of SB 53 in the larger debate about how to govern AI. The trio analyze the lessons that Senator Wiener learned from the battle over SB 1047, a related bill that Newsom vetoed last year, explore SB 53’s key provisions, and forecast what may be coming next in Sacramento and D.C. Hosted on Acast. See acast.com/privacy for more information.
Oct 14
Mosharaf Chowdhury, associate professor at the University of Michigan and director of the ML Energy lab, and Dan Zhao, AI researcher at MIT, GoogleX, and Microsoft, focused on AI for science and sustainable and energy-efficient AI, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the energy costs of AI. They break down exactly how much energy fuels a single ChatGPT query, why this is difficult to figure out, how we might improve energy efficiency, and what kinds of policies might minimize AI’s growing energy and environmental costs. Leo Wu provided excellent research assistance on this podcast. Read more from Mosharaf: https://ml.energy/ https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/ Read more from Dan: https://arxiv.org/abs/2310.03003 https://arxiv.org/abs/2301.11581 Hosted on Acast. See acast.com/privacy for more information.
Oct 7
David Sullivan, Executive Director of the Digital Trust & Safety Partnership, and Ravi Iyer, Managing Director of the Psychology of Technology Institute at USC’s Neely Center, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the evolution of the Trust & Safety field and its relevance to ongoing conversations about how best to govern AI. They discuss the importance of thinking about the end user in regulation, debate the differences and similarities between social media and AI companions, and evaluate current policy proposals. You’ll “like” (bad pun intended) this one. Leo Wu provided excellent research assistance to prepare for this podcast. Read more from David: https://www.weforum.org/stories/2025/08/safety-product-build-better-bots/ https://www.techpolicy.press/learning-from-the-past-to-shape-the-future-of-digital-trust-and-safety/ Read more from Ravi: https://shows.acast.com/arbiters-of-truth/episodes/ravi-iyer-on-how-to-improve-technology-through-design https://open.substack.com/pub/psychoftech/p/regulate-value-aligned-design-not?r=2alyy0&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false Read more from Kevin: https://www.cato.org/blog/california-chatroom-ab-1064s-likely-constitutional-overreach Hosted on Acast. See acast.com/privacy for more information.
Sep 30
In this Scaling Laws rapid response episode, hosts Kevin Frazier and Alan Rozenshtein talk about SB 53, the frontier AI transparency (and more) bill that California Governor Gavin Newsom signed into law on September 29. Hosted on Acast. See acast.com/privacy for more information.
Sep 30
Neil Chilson, Head of AI Policy at the Abundance Institute, and Gus Hurwitz, Senior Fellow and CTIC Academic Director at Penn Carey Law School and Director of Law & Economics Programs at the International Center for Law & Economics, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how academics can overcome the silos and incentives that plague the Ivory Tower and positively contribute to the highly complex, evolving, and interdisciplinary work associated with AI governance. The trio recorded this podcast live at the Institute for Humane Studies' Technology, Liberalism, and Abundance Conference in Arlington, Virginia. Read about Kevin's thinking on the topic here: https://www.civitasinstitute.org/research/draining-the-ivory-tower Learn about the Conference: https://www.theihs.org/blog/curated-event/technology-abundance-and-liberalism/ Hosted on Acast. See acast.com/privacy for more information.
Sep 23
Alan Rozenshtein, Renée DiResta, and Jess Miers discuss the distinct risks that generative AI systems pose to children, particularly in relation to mental health. They explore the balance between the benefits and harms of AI, emphasizing the importance of media literacy and parental guidance. Recent developments in AI safety measures and ongoing legal implications are also examined, highlighting the evolving landscape of AI regulation and liability. Hosted on Acast. See acast.com/privacy for more information.
Sep 16
On today's Scaling Laws episode, Alan Rozenshtein sat down with Pam Samuelson, the Richard M. Sherman Distinguished Professor of Law at the University of California, Berkeley, School of Law, to discuss the rapidly evolving legal landscape at the intersection of generative AI and copyright law. They dove into the recent district court rulings in lawsuits brought by authors against AI companies, including Bartz v. Anthropic and Kadrey v. Meta. They explored how different courts are treating the core questions of whether training AI models on copyrighted data is a transformative fair use and whether AI outputs create a “market dilution” effect that harms creators. They also touched on other key cases to watch and the role of the U.S. Copyright Office in shaping the debate. Mentioned in this episode: "How to Think About Remedies in the Generative AI Copyright Cases" by Pam Samuelson in Lawfare Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith Bartz v. Anthropic Kadrey v. Meta Platforms Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc. U.S. Copyright Office, Copyright and Artificial Intelligence, Part 3: Generative AI Training Hosted on Acast. See acast.com/privacy for more information.
Sep 11
Joshua Gans, a professor at the University of Toronto and co-author of "Power and Prediction: The Disruptive Economics of Artificial Intelligence," joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to evaluate ongoing concerns about AI-induced job displacement, the likely consequences of various regulatory proposals on AI innovation, and how AI tools are already changing higher education. Select works by Gans include: A Quest for AI Knowledge (https://www.nber.org/papers/w33566) Regulating the Direction of Innovation (https://www.nber.org/papers/w32741) How Learning About Harms Impacts the Optimal Rate of Artificial Intelligence Adoption (https://www.nber.org/papers/w32105) Hosted on Acast. See acast.com/privacy for more information.
Sep 9
Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law, to assess the current state of AI testing and evaluations. The two walk through Steven’s views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms. You can read Steven’s Substack here: https://stevenadler.substack.com/ Thanks to Leo Wu for research assistance! Hosted on Acast. See acast.com/privacy for more information.
Sep 2
Anu Bradford, Professor at Columbia Law School, and Kate Klonick, Senior Editor at Lawfare and Associate Professor at St. John's University School of Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to assess the contrasting and, at times, conflicting regulatory approaches to Big Tech being pursued by the EU and US. The trio start with an assessment of the EU’s use of the Brussels Effect, a term Anu coined, to shape AI development. Next, they explore the US’s increasingly interventionist industrial policy with respect to key sectors, especially tech. Read more: Anu’s op-ed in The New York Times The Impact of Regulation on Innovation by Philippe Aghion, Antonin Bergeaud & John Van Reenen Draghi Report on the Future of European Competitiveness Hosted on Acast. See acast.com/privacy for more information.
Aug 28
Peter E. Harrell, Adjunct Senior Fellow at the Center for a New American Security, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to examine the White House’s announcement that it will take a 10% share of Intel. They dive into the policy rationale for the stake as well as its legality. Peter and Kevin also explore whether this is just the start of such deals given that President Trump recently declared that “there will be more transactions, if not in this industry then other industries.” Hosted on Acast. See acast.com/privacy for more information.
Aug 26
MacKenzie Price, co-founder of Alpha School, and Rebecca Winthrop, a senior fellow and director of the Center for Universal Education at the Brookings Institution, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to review how AI is being integrated into the classroom at home and abroad. MacKenzie walks through the use of predictive AI in Alpha School classrooms. Rebecca provides a high-level summary of ongoing efforts around the globe to bring AI into the education pipeline. This conversation is particularly timely in the wake of the AI Action Plan, which built on the Trump administration’s prior calls for greater use of AI from K to 12 and beyond. Learn more about Alpha School here: https://www.nytimes.com/2025/07/27/us/politics/ai-alpha-school-austin-texas.html and here: https://www.astralcodexten.com/p/your-review-alpha-school Learn about the Brookings Global Task Force on AI in Education here: https://www.brookings.edu/projects/brookings-global-task-force-on-ai-in-education/ Hosted on Acast. See acast.com/privacy for more information.
Aug 21
Keegan McBride, Senior Policy Advisor in Emerging Technology and Geopolitics at the Tony Blair Institute, and Nathan Lambert, a post-training lead at the Allen Institute for AI, join Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore the current state of open source AI model development and associated policy questions. The pivot to open source has been swift following initial concerns that the security risks posed by such models outweighed their benefits. What this transition means for the US AI ecosystem and the global AI competition is a topic worthy of analysis by these two experts. Hosted on Acast. See acast.com/privacy for more information.
Aug 19
Alan Rozenshtein, research director at Lawfare, sat down with Sam Winter-Levy, a fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace; Janet Egan, a senior fellow with the Technology and National Security Program at the Center for a New American Security; and Peter Harrell, a nonresident fellow at Carnegie and a former senior director for international economics at the White House National Security Council under President Joe Biden. They discussed the Trump administration’s recent decision to allow U.S. companies Nvidia and AMD to export a range of advanced AI semiconductors to China in exchange for a 15% payment to the U.S. government. They talked about the history of the export control regime targeting China’s access to AI chips, the strategic risks of allowing China to acquire powerful chips like the Nvidia H20, and the potential harm to the international coalition that has worked to restrict China’s access to this technology. They also debated the statutory and constitutional legality of the deal, which appears to function as an export tax, a practice explicitly prohibited by the Constitution. Mentioned in this episode: The Financial Times article breaking the news about the Nvidia deal The Trump Administration’s AI Action Plan Hosted on Acast. See acast.com/privacy for more information.
Aug 14
Join us on Scaling Laws as we delve into the intricate world of AI policy with Dean Ball, former senior policy advisor at the White House's Office of Science and Technology Policy. Discover the behind-the-scenes insights into the Trump administration's AI Action Plan, the challenges of implementing AI policy at the federal level, and the evolving political landscape surrounding AI on the right. Dean shares his unique perspective on the opportunities and hurdles in shaping AI's future, offering a candid look at the intersection of technology, policy, and politics. Tune in for a thought-provoking discussion that explores the strategic steps America can take to lead in the AI era. Hosted on Acast. See acast.com/privacy for more information.
Aug 12
In this episode, we talk about the intricate world of AI liability through the lens of agency law. Join us as Anat Lior explores the compelling case for using agency law to address the legal challenges posed by AI agents. Discover how analogies, such as principal-agent relationships, can help navigate the complexities of AI liability, and why it's crucial to ensure that someone is held accountable when AI systems cause harm. Tune in for a thought-provoking discussion on the future of AI governance and the evolving landscape of legal responsibility. Hosted on Acast. See acast.com/privacy for more information.
Aug 7
Brian Fuller, product policy leader at OpenAI, joins Kevin to discuss the challenges of designing policies that ensure AI technologies are safe, aligned, and socially beneficial, from the fast-paced landscape of AI development to the balance between innovation and ethical responsibility. Tune in to gain insights into the frameworks that guide AI's integration into society and the critical questions that shape its future. Hosted on Acast. See acast.com/privacy for more information.
Aug 5
Renée DiResta, an Associate Research Professor at the McCourt School of Public Policy at Georgetown, joins Alan Rozenshtein and Kevin Frazier to take a look at the Trump Administration’s Woke AI policies, as set forth by a recent EO and explored in the AI Action Plan. This episode unpacks the implications of prohibiting AI models that fail to pursue objective truth and espouse "DEI" values. Hosted on Acast. See acast.com/privacy for more information.
Jul 31
In this episode of Scaling Laws, Kevin Frazier is joined by Sayash Kapoor, co-author of "AI Snake Oil," to explore the complexities of AI development and its societal implications. They delve into the skepticism surrounding AGI claims, the real bottlenecks in AI adoption, and the transformative potential of AI as a general-purpose technology. Kapoor shares insights on the challenges of integrating AI into various sectors, the importance of empirical research, and the evolving nature of work in the AI era. The conversation also touches on the role of policy in shaping AI's future and the need for a nuanced understanding of AI's capabilities and limitations. Hosted on Acast. See acast.com/privacy for more information.
Jul 30
In this episode, join Kevin Frazier as he delves into the complex world of AI regulation with experts Lauren Wagner of the Abundance Institute and Andrew Freedman, Chief Strategy Officer at Fathom. As the AI community eagerly awaits the federal government's AI action plan, our guests explore the current regulatory landscape and the challenges of implementing effective governance with bills like SB 813. Innovative approaches are being proposed, including the role of independent verification organizations and the potential for public-private partnerships. Be sure to check out Fathom's Substack here: https://fathomai.substack.com/subscribe Hosted on Acast. See acast.com/privacy for more information.
Jul 24
Janet Egan, Senior Fellow with the Technology and National Security Program at the Center for a New American Security, Jessica Brandt, Senior Fellow for Technology and National Security at the Council on Foreign Relations, Neil Chilson, Head of AI Policy at the Abundance Institute, and Tim Fist, Director of Emerging Technology Policy at the Institute for Progress, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, for a special version of Scaling Laws. This episode was recorded just hours after the release of the AI Action Plan. About 180 days ago, President Trump directed his administration to explore ways to achieve AI dominance. His staff has attempted to do just that. This group of AI researchers dives into the plan’s extensive recommendations and explores what may come next. Hosted on Acast. See acast.com/privacy for more information.
Jul 22
Lt. Gen. (ret.) Jack Shanahan joins Kevin Frazier to explore the nuanced landscape of AI in national security, challenging the prevalent "AI arms race" narrative. The discussion delves into the complexities of AI integration in defense, the cultural shifts required within the Department of Defense, and the critical role of public trust and shared national vision. Tune in to understand how AI is reshaping military strategies and the broader implications for society. Hosted on Acast. See acast.com/privacy for more information.
Jul 17
Kevin Frazier sits down with Eugene Volokh, a senior fellow at the Hoover Institution and UCLA law professor, to explore the complexities of libel in the age of AI. Discover how AI-generated content challenges traditional legal frameworks and the implications for platforms under Section 230. This episode is a must-listen for anyone interested in the evolving landscape of AI and law. Hosted on Acast. See acast.com/privacy for more information.
Jul 15
Cass Madison, the Executive Director of the Center for Civic Futures, and Zach Boyd, Director of the AI Policy Office at the State of Utah, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss how state governments are adjusting to the Age of AI. This conversation explores Cass's work to organize the increasing number of state officials tasked with thinking about AI adoption and regulation as well as Zach's experience leading one of the most innovative state AI offices. Hosted on Acast. See acast.com/privacy for more information.
Jul 10
In this episode of Scaling Laws, Alan and Kevin sit down with Ethan Mollick, Professor of Management at Wharton specializing in entrepreneurship and innovation, to discuss the current state of AI growth, focusing on scaling laws, the future of AGI, and the challenges of integrating AI into society. They explore the bottlenecks in AI adoption, particularly the role of interfaces and the uncertainty surrounding AI development. Mollick discusses the transformative potential of AI in various fields, particularly education and medicine, as well as the need for empirical research to understand AI's impact, the importance of adapting teaching methods, and the challenges of cognitive de-skilling. More of Ethan Mollick's work: https://www.oneusefulthing.org/ Hosted on Acast. See acast.com/privacy for more information.
Jul 2
On the inaugural episode of Scaling Laws, co-hosts Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Professor at Minnesota Law and Research Director at Lawfare, speak with Adam Thierer, a senior fellow for the Technology and Innovation team at the R Street Institute, and Helen Toner, the Director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology (CSET). They discuss the recent overwhelming defeat in the Senate of a proposed moratorium on state and local regulation of artificial intelligence. The conversation explores the moratorium's journey from its inclusion in a House bill to its ultimate failure, examining the procedural hurdles, the confusing legislative language, and the political maneuvering that led to its demise by a 99-to-1 vote. The group discusses the future of U.S. AI governance, covering the Republican party's fragmentation on tech policy and whether Congress's failure to act is a sign of it being broken or a deliberate policy choice. Mentioned in this episode: “The Continuing Tech Policy Realignment on the Right” by Adam Thierer in Medium “1,000 AI Bills: Time for Congress to Get Serious About Preemption” by Kevin Frazier and Adam Thierer in Lawfare “Congress Should Preempt State AI Safety Legislation” by Dean W. Ball and Alan Z. Rozenshtein in Lawfare "The Coming Techlash Could Kill AI Innovation Before It Helps Anyone" by Kevin Frazier in Reason "Unresolved debates about the future of AI" by Helen Toner in Rising Tide Hosted on Acast. See acast.com/privacy for more information.
Jun 23
Scaling Laws explores (and occasionally answers) the questions that keep OpenAI’s policy team up at night, the ones that motivate legislators to host hearings on AI and draft new AI bills, and the ones that are top of mind for tech-savvy law and policy students. Alan and Kevin dive into the intersection of AI, innovation policy, and the law through regular interviews with the folks deep in the weeds of developing, regulating, and adopting AI. They also provide regular rapid-response analysis of breaking AI governance news. Hosted on Acast. See acast.com/privacy for more information.
Mar 22, 2024
Last week the House of Representatives overwhelmingly passed a bill that would require ByteDance, the Chinese company that owns the popular social media app TikTok, to divest its ownership in the platform or face TikTok being banned in the United States. Although prospects for the bill in the Senate remain uncertain, President Biden has said he will sign the bill if it comes to his desk, and this is the most serious attempt yet to ban the controversial social media app. Today's podcast is the latest in a series of conversations we've had about TikTok. Matt Perault, the Director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, led a conversation with Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, and Ramya Krishnan, a Senior Staff Attorney at the Knight First Amendment Institute at Columbia University. They talked about the First Amendment implications of a TikTok ban, whether it's a good idea as a policy matter, and how we should think about foreign ownership of platforms more generally. Disclaimer: Matt's center receives funding from foundations and tech companies, including funding from TikTok. Hosted on Acast. See acast.com/privacy for more information.
Mar 21, 2024
Today, we’re bringing you an episode of Arbiters of Truth, our series on the information ecosystem. On March 18, the Supreme Court heard oral arguments in Murthy v. Missouri, concerning the potential First Amendment implications of government outreach to social media platforms—what’s sometimes known as jawboning. The case arrived at the Supreme Court with a somewhat shaky evidentiary record, but the legal questions raised by government requests or demands to remove online content are real. To make sense of it all, Lawfare Senior Editor Quinta Jurecic and Matt Perault, the Director of the Center on Technology Policy at UNC-Chapel Hill, called up Alex Abdo, the Litigation Director of the Knight First Amendment Institute at Columbia University. While the law is unsettled, the Supreme Court seemed skeptical of the plaintiffs’ claims of government censorship. But what is the best way to determine what contacts and government requests are and aren't permissible? If you’re interested in more, you can read the Knight Institute’s amicus brief in Murthy here and Knight’s series on jawboning—including Perault’s reflections—here. Hosted on Acast. See acast.com/privacy for more information.
Jan 3, 2024
In May 2023, Montana passed a new law that would ban the use of TikTok within the state starting on January 1, 2024. But as of today, TikTok is still legal in the state of Montana—thanks to a preliminary injunction issued by a federal district judge, who found that the Montana law likely violated the First Amendment. In Texas, meanwhile, another federal judge recently upheld a more limited ban against the use of TikTok on state-owned devices. What should we make of these rulings, and how should we understand the legal status of efforts to ban TikTok? We’ve discussed the question of TikTok bans and the First Amendment before on the Lawfare Podcast, when Lawfare Senior Editor Alan Rozenshtein and Matt Perault, Director of the Center on Technology Policy at UNC-Chapel Hill, sat down with Ramya Krishnan, a staff attorney at the Knight First Amendment Institute at Columbia University, and Mary-Rose Papandrea, the Samuel Ashe Distinguished Professor of Constitutional Law at the University of North Carolina School of Law. In light of the Montana and Texas rulings, Matt and Lawfare Senior Editor Quinta Jurecic decided to bring the gang back together and talk about where the TikTok bans stand with Ramya and Mary-Rose, on this episode of Arbiters of Truth, our series on the information ecosystem. Hosted on Acast. See acast.com/privacy for more information.
Dec 20, 2023
In 2021, the Wall Street Journal published a monster scoop: a series of articles about Facebook’s inner workings, which showed that employees within the famously secretive company had raised alarms about potential harms caused by Facebook’s products. Now, Jeff Horwitz, the reporter behind that scoop, has a new book out, titled “Broken Code”—which dives even deeper into the documents he uncovered from within the company. He’s one of the most rigorous reporters covering Facebook, now known as Meta. On this episode of Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic sat down with Jeff along with Matt Perault, the Director of the Center on Technology Policy at UNC-Chapel Hill—and also someone with close knowledge of Meta from his own time working at the company. They discussed Jeff’s reporting and debated what his findings tell us about how Meta functions as a company and how best to understand its responsibilities for harms traced back to its products. Hosted on Acast. See acast.com/privacy for more information.
Nov 29, 2023
Unless you’ve been living under a rock, you’ve probably heard a great deal over the last year about generative AI and how it’s going to reshape various aspects of our society. That includes elections. With one year until the 2024 U.S. presidential election, we thought it would be a good time to step back and take a look at how generative AI might and might not make a difference when it comes to the political landscape. Luckily, Matt Perault and Scott Babwah Brennen of the UNC Center on Technology Policy have a new report out on just that subject, examining generative AI and political ads. On this episode of Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic and Lawfare’s Fellow in Technology Policy and Law Eugenia Lostri sat down with Matt and Scott to talk through the potential risks and benefits of generative AI when it comes to political advertising. Which concerns are overstated, and which are worth closer attention as we move toward 2024? How should policymakers respond to new uses of this technology in the context of elections? Hosted on Acast. See acast.com/privacy for more information.
Oct 20, 2023
Over the course of the last two presidential elections, efforts by social media platforms and independent researchers to prevent falsehoods from spreading about election integrity have become increasingly central to civic health. But the warning signs are flashing as we head into 2024. And platforms are arguably in a worse position to counter falsehoods today than they were in 2020. How could this be? On this episode of Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic sat down with Dean Jackson, who previously sat down with the Lawfare Podcast to discuss his work as a staffer on the Jan. 6 committee. He worked with the Center for Democracy and Technology to put out a new report on the challenges facing efforts to prevent the spread of election disinformation. They talked through the political, legal, and economic pressures that are making this work increasingly difficult—and what it means for 2024. Hosted on Acast. See acast.com/privacy for more information.
Oct 5, 2023
Today, we’re bringing you an episode of Arbiters of Truth, our series on the information ecosystem. And we’re discussing the hot topic of the moment: artificial intelligence. There are a lot of less-than-informed takes out there about AI and whether it’s going to kill us all—so we’re glad to be able to share an interview that hopefully cuts through some of that noise. Janet Haven is the Executive Director of the nonprofit Data and Society and a member of the National Artificial Intelligence Advisory Committee, which provides guidance to the White House on AI issues. Lawfare Senior Editor Quinta Jurecic sat down alongside Matt Perault, Director of the Center on Technology Policy at UNC-Chapel Hill, to talk through their questions about AI governance with Janet. They discussed how she evaluates the dangers and promises of artificial intelligence, how to weigh the possible future existential risks AI poses to society against the immediate potential downsides of AI in our everyday lives, and what kind of regulation she’d like to see in this space. If you’re interested in reading further, Janet mentions this paper from Data and Society on “Democratizing AI” in the course of the conversation. Hosted on Acast. See acast.com/privacy for more information.
Sep 11, 2023
How much influence do social media platforms have on American politics and society? It’s a tough question for researchers to answer—not just because it’s so big, but also because platforms rarely if ever provide all the data that would be needed to address the problem. A new batch of papers released in the journals Science and Nature marks the latest attempt to tackle this question, with access to data provided by Facebook’s parent company Meta. The 2020 Facebook & Instagram Research Election Study, a partnership between Meta researchers and outside academics, studied the platforms’ impact on the 2020 election—and uncovered some nuanced findings, suggesting that these impacts might be less than you’d expect. Today on Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editors Alan Rozenshtein and Quinta Jurecic are joined by the project’s co-leaders, Talia Stroud of the University of Texas at Austin and Joshua A. Tucker of NYU. They discussed their findings, what it was like to work with Meta, and whether or not this is a model for independent academic research on platforms going forward. (If you’re interested in more on the project, you can find links to the papers and an overview of the findings here, and an FAQ, provided by Tucker and Stroud, here.) Hosted on Acast. See acast.com/privacy for more information.
May 12, 2023
Earlier this year, Brian Fishman published a fantastic paper with Brookings thinking through how technology platforms grapple with terrorism and extremism, and how any reform to Section 230 must allow those platforms space to continue doing that work. That’s the short description, but the paper is really about so much more—about how the work of content moderation actually takes place, how contemporary analyses of the harms of social media fail to address the history of how platforms addressed Islamist terror, and how we should understand “the original sin of the internet.” For this episode of Arbiters of Truth, our occasional series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic sat down to talk with Brian about his work. Brian is the cofounder of Cinder, a software platform for the kind of trust and safety work we describe here, and he was formerly a policy director at Meta, where he led the company’s work on dangerous individuals and organizations. Hosted on Acast. See acast.com/privacy for more information.
May 2, 2023
Generative AI products have been tearing up the headlines recently. Among the many issues these products raise is whether or not their outputs are protected by Section 230, the foundational statute that shields websites from liability for third-party content. On this episode of Arbiters of Truth, Lawfare’s occasional series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic and Matt Perault, Director of the Center on Technology Policy at UNC-Chapel Hill, talked through this question with Senator Ron Wyden and Chris Cox, formerly a U.S. congressman and SEC chairman. Cox and Wyden drafted Section 230 together in 1996—and they’re skeptical that its protections apply to generative AI. Disclosure: Matt consults on tech policy issues, including with platforms that work on generative artificial intelligence products and have interests in the issues discussed. Hosted on Acast. See acast.com/privacy for more information.
Apr 28, 2023
In 2018, news broke that Facebook had allowed third-party developers—including the controversial data analytics firm Cambridge Analytica—to obtain large quantities of user data in ways that users probably didn’t anticipate. The fallout led to a controversy over whether Cambridge Analytica had in some way swung the 2016 election for Trump (spoiler: it almost certainly didn’t), but it also generated a $5 billion fine imposed on Facebook by the FTC for violating users’ privacy. Along with that record-breaking fine, the FTC also imposed a number of requirements on Facebook to improve its approach to privacy. It’s been four years since that settlement, and Facebook is now Meta. So how much has really changed within the company? For this episode of Arbiters of Truth, our series on the online information ecosystem, Lawfare Senior Editors Alan Rozenshtein and Quinta Jurecic interviewed Meta’s co-chief privacy officers, Erin Egan and Michel Protti, about the company’s approach to privacy and its response to the FTC’s settlement order. At one point in the conversation, Quinta mentions a class action settlement over the Cambridge Analytica scandal. You can read more about the settlement here. Information about Facebook’s legal arguments regarding user privacy interests is available here and here, and you can find more details in the judge’s ruling denying Facebook’s motion to dismiss. Note: Meta provides support for Lawfare’s Digital Social Contract paper series. This podcast episode is not part of that series, and Meta does not have any editorial role in Lawfare. Hosted on Acast. See acast.com/privacy for more information.
Apr 26, 2023
If someone lies about you, you can usually sue them for defamation. But what if that someone is ChatGPT? Already in Australia, the mayor of a town outside Melbourne has threatened to sue OpenAI because ChatGPT falsely named him a guilty party in a bribery scandal. Could that happen in America? Does our libel law allow that? What does it even mean for a large language model to act with "malice"? Does the First Amendment put any limits on the ability to hold these models, and the companies that make them, accountable for false statements they make? And what's the best way to deal with this problem: private lawsuits or government regulation? On this episode of Arbiters of Truth, our series on the information ecosystem, Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, discussed these questions with First Amendment expert Eugene Volokh, Professor of Law at UCLA and the author of a draft paper entitled “Large Libel Models.” Hosted on Acast. See acast.com/privacy for more information.
Apr 14, 2023
Over the past few years, TikTok has become a uniquely polarizing social media platform. On the one hand, millions of users, especially those in their teens and twenties, love the app. On the other hand, the government is concerned that TikTok's vulnerability to pressure from the Chinese Communist Party makes it a serious national security threat. There's even talk of banning the app altogether. But would that be legal? In particular, does the First Amendment allow the government to ban an application that’s used by millions to communicate every day? On this episode of Arbiters of Truth, our series on the information ecosystem, Matt Perault, director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, and Alan Z. Rozenshtein, Lawfare Senior Editor and Associate Professor of Law at the University of Minnesota, spoke with Ramya Krishnan, a staff attorney at the Knight First Amendment Institute at Columbia University, and Mary-Rose Papandrea, the Samuel Ashe Distinguished Professor of Constitutional Law at the University of North Carolina School of Law, to think through the legal and policy implications of a TikTok ban. Hosted on Acast. See acast.com/privacy for more information.
Mar 27, 2023
On the latest episode of Arbiters of Truth, Lawfare's series on the information ecosystem, Quinta Jurecic and Alan Rozenshtein spoke with Ravi Iyer, the Managing Director of the Psychology of Technology Institute at the University of Southern California's Neely Center. Earlier in his career, Ravi held a number of positions at Meta, where he worked to make Facebook's algorithm provide actual value, not just "engagement," to users. Quinta and Alan spoke with Ravi about why he thinks that content moderation is a dead end and why thinking about the design of technology is the way forward to make sure that technology serves us and not the other way around. Hosted on Acast. See acast.com/privacy for more information.
Mar 9, 2023
During recent oral arguments in Gonzalez v. Google, a Supreme Court case concerning the scope of liability protections for internet platforms, Justice Neil Gorsuch asked a thought-provoking question. Does Section 230, the statute that shields websites from liability for third-party content, apply to a generative AI model like ChatGPT? Luckily, Matt Perault of the Center on Technology Policy at the University of North Carolina at Chapel Hill had already been thinking about this question and published a Lawfare article arguing that 230’s protections wouldn’t extend to content generated by AI. Lawfare Senior Editors Quinta Jurecic and Alan Rozenshtein sat down with Matt and Jess Miers, legal advocacy counsel at the Chamber of Progress, to debate whether ChatGPT’s output constitutes third-party content, whether companies like OpenAI should be immune for the output of their products, and why you might want to sue a chatbot in the first place. Hosted on Acast. See acast.com/privacy for more information.
Feb 1, 2023
You've likely heard of ChatGPT, the chatbot from OpenAI. But you’ve likely never heard an interview with ChatGPT, much less an interview in which ChatGPT reflects on its own impact on the information ecosystem. Nor is it likely that you’ve ever heard ChatGPT promising to stop producing racist and misogynistic content. But, on this episode of Arbiters of Truth, Lawfare’s occasional series on the information ecosystem, Lawfare editor-in-chief Benjamin Wittes sat down with ChatGPT to talk about a range of things: the pronouns it prefers; academic integrity and the chatbot’s likely impact on that; and importantly, the experiments performed by a scholar named Eve Gaumond, who has been on a one-woman campaign to get ChatGPT to write offensive content. ChatGPT made some pretty solid representations that this kind of thing may be in its past, but won't ever be in its future again. So, following Ben’s interview with ChatGPT, he sat down with Eve Gaumond, an AI scholar at the Public Law Center of the University of Montréal, who fact-checked ChatGPT's claims. Can you still get it to write a poem entitled, “She Was Smart for a Woman”? Can you get it to write a speech by Heinrich Himmler about Jews? And can you get ChatGPT to write a story belittling the Holocaust? Hosted on Acast. See acast.com/privacy for more information.
Jan 23, 2023
Tech policy reform occupies a strange place in Washington, D.C. Everyone seems to agree that the government should change how it regulates the technology industry, on issues from content moderation to privacy—and yet, reform never actually seems to happen. But while the federal government continues to stall, state governments are taking action. More and more, state-level officials are proposing and implementing changes in technology policy. Most prominently, Texas and Florida recently passed laws restricting how platforms can moderate content, which will likely be considered by the Supreme Court later this year. On this episode of Arbiters of Truth, our occasional series on the information ecosystem, Lawfare senior editor Quinta Jurecic spoke with J. Scott Babwah Brennen and Matt Perault of the Center on Technology Policy at UNC-Chapel Hill. In recent months, they’ve put together two reports on state-level tech regulation. They talked about what’s driving this trend, why and how state-level policymaking differs—and doesn’t—from policymaking at the federal level, and what opportunities and complications this could create. Hosted on Acast. See acast.com/privacy for more information.
Dec 15, 2022
On November 19, Twitter’s new owner Elon Musk announced that he would be reinstating former President Donald Trump’s account on the platform—though so far, Trump hasn’t taken Musk up on the offer, preferring instead to stay on his bespoke website Truth Social. Meanwhile, Meta’s Oversight Board has set a January 2023 deadline for the platform to decide whether or not to return Trump to Facebook following his suspension after the Jan. 6 insurrection. How should we think through the difficult question of how social media platforms should handle the presence of a political leader who delights in spreading falsehoods and ginning up violence? Luckily for us, Stanford and UCLA recently held a conference on just that. On this episode of Arbiters of Truth, our series on the online information ecosystem, Lawfare senior editors Alan Rozenshtein and Quinta Jurecic sat down with the conference’s organizers, election law experts Rick Hasen and Nate Persily, to talk about whether Trump should be returned to social media. They debated the tangled issues of Trump’s deplatforming and replatforming … and discussed whether, and when, Trump will break the seal and start tweeting again. Hosted on Acast. See acast.com/privacy for more information.
Dec 12, 2022
When Facebook whistleblower Frances Haugen shared a trove of internal company documents with the Wall Street Journal in 2021, some of the most dramatic revelations concerned the company’s use of a so-called “cross-check” system that, according to the Journal, essentially exempted certain high-profile users from the platform’s usual rules. After the Journal published its report, Facebook—which has since changed its name to Meta—asked the platform’s independent Oversight Board to weigh in on the program. And now, a year later, the Board has finally released its opinion. On this episode of Arbiters of Truth, our series on the online information ecosystem, Lawfare senior editors Alan Rozenshtein and Quinta Jurecic sat down with Suzanne Nossel, a member of the Oversight Board and the CEO of PEN America. She talked us through the Board’s findings, its criticisms of cross-check, and its recommendations for Meta going forward. Hosted on Acast. See acast.com/privacy for more information.
Nov 8, 2022
It’s Election Day in the United States—so while you wait for the results to come in, why not listen to a podcast about the other biggest story obsessing the political commentariat right now? We’re talking, of course, about Elon Musk’s purchase of Twitter and the billionaire’s dramatic and erratic changes to the platform. In response to Musk’s takeover, a great number of Twitter users have made the leap to Mastodon, a decentralized platform that offers a very different vision of what social media could look like. What exactly is decentralized social media, and how does it work? Lawfare senior editor Alan Rozenshtein has a paper on just that, and he sat down with Lawfare senior editor Quinta Jurecic on the podcast to discuss for an episode of our Arbiters of Truth series on the online information ecosystem. They were also joined by Kate Klonick, associate professor of law at St. John’s University, to hash out the many, many questions about content moderation and the future of the internet sparked by Musk’s reign and the new popularity of Mastodon. Among the works mentioned in this episode: “Welcome to hell, Elon. You break it, you buy it,” by Nilay Patel on The Verge “Hey Elon: Let Me Help You Speed Run The Content Moderation Learning Curve,” by Mike Masnick on Techdirt Hosted on Acast. See acast.com/privacy for more information.
Oct 13, 2022
The Supreme Court has granted cert in two cases exploring the interactions between anti-terrorism laws and Section 230 of the Communications Decency Act. To discuss the cases, Lawfare editor-in-chief Benjamin Wittes sat down on Arbiters of Truth, our occasional series on the online information ecosystem, with Lawfare senior editors and Rational Security co-hosts Quinta Jurecic, Alan Rozenshtein, and Scott R. Anderson. They discussed the state of 230 law, what the Supreme Court has taken on, what the lower court did, and whether there is a right answer here and what it might look like. Hosted on Acast. See acast.com/privacy for more information.
Oct 4, 2022
Today, we’re bringing you another episode of our Arbiters of Truth series on the online information ecosystem. Lawfare senior editor Quinta Jurecic spoke with Mark Bergen, a reporter for Bloomberg News and Businessweek, about his new book, “Like, Comment, Subscribe: Inside YouTube’s Chaotic Rise to World Domination.” YouTube is one of the largest and most influential social media platforms, but Bergen argues that it’s long been “criminally undercovered.” As he tells it, the story of YouTube has a great deal to tell us about the development of the modern attention economy, the promise and pitfalls of the internet, and the struggles of platforms to grapple with their own influence and responsibility. Hosted on Acast. See acast.com/privacy for more information.
Sep 23, 2022
Our Arbiters of Truth series on the online information ecosystem has been taking a bit of a hiatus—but we’re back! On today’s episode, we’re discussing the recent ruling by the U.S. Court of Appeals for the Fifth Circuit in NetChoice v. Paxton, upholding a Texas law that binds large social media platforms to certain transparency requirements and significantly limits their ability to moderate content. The decision is truly a wild ride—so unhinged that it’s difficult to figure out where First Amendment law in this area might go next. To discuss, Lawfare senior editor Quinta Jurecic sat down with fellow Lawfare senior editor Alan Rozenshtein and Alex Abdo, the litigation director at the Knight First Amendment Institute at Columbia University—who’s come on the podcast before to discuss the case. They tried to make sense of the Fifth Circuit’s ruling and chart out alternative possibilities for what good-faith jurisprudence on social media regulation might look like. Hosted on Acast. See acast.com/privacy for more information.
Aug 5, 2022
A few weeks ago on Arbiters of Truth, our series on the online information ecosystem, we brought you a conversation with two emergency room doctors about their efforts to push back against members of their profession spreading falsehoods about the coronavirus. Today, we’re going to take a look at another profession that’s been struggling to counter lies and falsehoods within its ranks: the law. Recently, lawyers involved in efforts to overturn the 2020 election have faced professional discipline—like Rudy Giuliani, whose law license has been suspended temporarily in New York and D.C. while a New York ethics investigation remains ongoing. Quinta Jurecic sat down with Paul Rosenzweig, a contributing editor at Lawfare and a board member with the 65 Project, an organization that seeks to hold accountable lawyers who worked to help Trump hold onto power in 2020—often by spreading lies. He’s also spent many years working on issues related to legal ethics. So what avenues of discipline are available for lawyers who tell lies about elections? How does the legal discipline process work? And how effective can legal discipline be in reasserting the truth? Hosted on Acast. See acast.com/privacy for more information.
Jul 28, 2022
You’ve likely heard that Elon Musk wanted to buy Twitter… and that he is now trying to get out of buying Twitter… and that at first he wanted to defeat the bots on Twitter… but now he’s apparently surprised that there are lots of bots on Twitter. It's a spectacle made for the headlines, but it's also, at its core, a regular old corporate law dispute. This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek spoke with Adriana Robertson, the Donald N. Pritzker Professor of Business Law at the University of Chicago Law School, to talk about the legal issues behind the headlines. What is the Delaware Court of Chancery in which Musk and Twitter are going to face off? Will it care at all about the bots? And how do corporate lawyers think and talk about this dispute differently from the way it plays out in the public conversation? Hosted on Acast. See acast.com/privacy for more information.
Jul 21, 2022
When the Supreme Court handed down its opinion in Dobbs v. Jackson Women’s Health Organization, overturning Roe v. Wade, the impact of the decision on the internet may not have been front of mind for most people thinking through the implications. But in the weeks after the Court’s decision, it’s become clear that the post-Dobbs legal landscape around abortion implicates many questions around not only data and digital privacy, but also online speech. One piece of model state legislation, for example, would criminalize “hosting or maintaining a website, or providing internet service, that encourages or facilitates efforts to obtain an illegal abortion.” This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Evan Greer, the director of the digital rights organization Fight for the Future. She recently wrote an article in Wired with Lia Holland arguing that “Section 230 is a Last Line of Defense for Abortion Speech Online.” They talked about what role Section 230’s protections have to play when it comes to liability for speech about abortion and what content moderation looks like in a post-Dobbs world. Hosted on Acast. See acast.com/privacy for more information.
Jul 14, 2022
Since the beginning of the pandemic, we’ve talked a lot on this show about how falsehoods about the coronavirus are spread and generated. For this episode, Evelyn Douek and Quinta Jurecic spoke with two emergency medicine physicians who have seen the practical effects of those falsehoods while treating patients over the last two years. Nick Sawyer and Taylor Nichols are two of the cofounders of the organization No License for Disinformation, a group that advocates for medical authorities to take disciplinary action against doctors spreading misinformation and disinformation about COVID-19. They argue that state medical boards, which grant physicians the licenses that authorize them to practice medicine, could play a more aggressive role in curbing falsehoods. How many doctors have been disciplined, and why do Nick and Taylor believe that state medical boards have fallen down on the job? What are the possibilities for more aggressive action—and how does the First Amendment limit those possibilities? And how much good can the threat of discipline do in curbing medical misinformation, anyway? Hosted on Acast. See acast.com/privacy for more information.
Jul 7, 2022
Algorithms! We hear a lot about them. They drive social media platforms and, according to popular understanding, are responsible for a great deal of what’s wrong about the internet today—and maybe the downfall of democracy itself. But … what exactly are algorithms? And, given they’re not going away, what should they be designed to do? Evelyn Douek and Quinta Jurecic spoke with Jonathan Stray, a senior scientist at the Berkeley Center for Human-Compatible AI and someone who has thought a lot about what we mean when we say the word “algorithm”—and also when we discuss things like “engagement” and “amplification.” He helped them pin down a more precise understanding of what those terms mean and why that precision is so important in crafting good technology policy. They also talked about what role social media algorithms do and don’t play in stoking political polarization, and how they might be designed to decrease polarization instead. If you’re interested, you can read the Senate testimony by Dean Eckles on algorithms that Jonathan mentions during the show. We also mentioned this article by Daniel Kreiss on polarization. Hosted on Acast. See acast.com/privacy for more information.
Jun 30, 2022
The House committee investigating the Jan. 6 insurrection is midway through a blockbuster series of hearings exploring Donald Trump’s efforts to overturn the 2020 election and disrupt the peaceful transfer of power. Central to those efforts, of course, was the Big Lie—the false notion that Trump was cheated out of victory in 2020. This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Kate Starbird, an associate professor of Human Centered Design & Engineering at the University of Washington—and a repeat Arbiters of Truth guest. Kate has come on the show before to talk about misinformation and Jan. 6, and she and a team of coauthors just released a comprehensive analysis of tweets spreading misinformation around the 2020 election. So she’s the perfect person with whom to discuss the Jan. 6 committee hearings and misinformation. What does Kate’s research show about how election falsehoods spread, and who spread them? How has, and hasn’t, the Jan. 6 committee incorporated the role of misinformation into the story it’s telling about the insurrection? And is there any chance the committee can break through and get the truth to the people who most need to hear it? Hosted on Acast. See acast.com/privacy for more information.
Jun 23, 2022
If you’ve been watching the hearings convened by the House select committee on Jan. 6, you’ve seen a great deal about how the Trump campaign generated and spread falsehoods about supposed election fraud in 2020. As the committee has argued, those falsehoods were crucial in generating the political energy that culminated in the explosion of the January 6 insurrection. What shape did those lies take, and how did social media platforms attempt to deal with them at the time? Today, we’re bringing you an episode of our Arbiters of Truth series on the online information ecosystem. In fact, we’re rebroadcasting an episode we recorded in November 2020 about disinformation and the 2020 election. In late November 2020, after Joe Biden cemented his victory as the next president but while the Trump campaign was still pushing its claims of election fraud online and in court, Evelyn Douek and Quinta Jurecic spoke with Alex Stamos, the director of the Stanford Internet Observatory. Their conversation then was a great overview of the state of election security and the difficulty of countering false claims around the integrity of the vote. It’s worth a listen today as the Jan. 6 committee reminds us what the political and media environment was like in the aftermath of the election and how the Trump campaign committed to election lies that still echo all too loudly. And though it’s a year and a half later, the problems we’re discussing here certainly haven’t gone away. Hosted on Acast. See acast.com/privacy for more information.
Jun 16, 2022
If you loaded up the internet or turned on the television somewhere in the United States over the last two months, it’s been impossible to avoid news coverage of the defamation trial of actors Johnny Depp and Amber Heard—who sued each other over a dispute stemming from Heard’s allegations of domestic abuse by Depp. In early June, a Virginia jury found that both had defamed the other. The litigation has received a great deal of coverage for what it might say about the fate of the Me Too movement—but the flood of falsehoods online around the trial raises questions about how useful defamation law can really be in countering lies. This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with RonNell Andersen Jones, the Lee E. Teitelbaum Professor of Law at the University of Utah College of Law and an expert on the First Amendment and the interaction between the press and the courts. Along with Lyrissa Lidsky, she’s written about defamation law, disinformation, and the Depp-Heard litigation. They talked about why some commentators think defamation could be a useful route to counter falsehoods, why RonNell thinks the celebrity litigation undercuts that argument, and the few cases in which claims of libel or slander really could have an impact in limiting the spread of lies. Hosted on Acast. See acast.com/privacy for more information.
Jun 9, 2022
On May 31, by a five-four vote, the Supreme Court blocked from going into effect a Texas law that would have sharply limited how social media companies could moderate their platforms and required them to abide by various transparency requirements. We’ve covered the law on this show before—we recorded an episode right after the U.S. Court of Appeals for the Fifth Circuit allowed Texas to implement the law, in the same ruling that the Supreme Court just vacated. But there’s enough interesting stuff in the Supreme Court’s order—and in Justice Samuel Alito’s dissent—that we thought it was worth another bite at the apple. So this week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic invited Genevieve Lakier, professor of law at the University of Chicago and Evelyn’s colleague at the Knight First Amendment Institute, to walk us through just what happened. What exactly did the Supreme Court do? Why does Justice Alito seem to think that the Texas law has a decent chance of surviving a First Amendment challenge? And what does this suggest about the possible futures of the extremely unsettled landscape of First Amendment law? Hosted on Acast. See acast.com/privacy for more information.
Jun 2, 2022
As transparency reporting about content moderation enforcement has become standard across the platform industry, there have been growing questions about the reliability and accuracy of the reports the platforms are producing. Because reporting is entirely voluntary and the content moderation industry in general is very opaque, it’s hard to know how much to trust the figures that companies report in their quarterly or biannual enforcement reports. As a result, there have been growing calls for independent audits of these figures, and last month, Meta released its first-ever independent audit of its content moderation reporting systems. This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek sat down with someone who actually knows something about auditing: Colleen Honigsberg, an associate professor of law at Stanford Law School, whose research is focused on the empirical study of corporate and securities law. They talked about how auditors work, the promises and pitfalls of auditing in other contexts and what that might teach us about auditing in the content moderation context, and whether this is going to be a useful regulatory tool. Hosted on Acast. See acast.com/privacy for more information.
May 26, 2022
On May 14, a shooter attacked a supermarket in a historically Black neighborhood of Buffalo, New York, killing ten people and wounding three. The streaming platform Twitch quickly disabled the livestream the shooter had published of the attack—but video of the violence, and copies of the white supremacist manifesto released by the attacker online, continue to circulate on the internet. How should we evaluate the response of social media platforms to the tragedy in Buffalo? This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Brian Fishman, who formerly worked at Facebook, now Meta, as the policy director for counterterrorism and dangerous organizations. Brian helped lead Facebook’s response to the 2019 Christchurch shooting, another act of far-right violence livestreamed online. He walked us through how platforms respond to crises like these, why it’s so difficult to remove material like the Buffalo video and manifesto from the internet, and what it would look like for platforms to do better. Hosted on Acast. See acast.com/privacy for more information.
May 19, 2022
On May 12, the U.S. Court of Appeals for the Fifth Circuit allowed an aggressive new Texas law regulating social media to go into effect. The law, known as HB20, seeks to restrict large social media platforms from taking down content on the basis of viewpoint—effectively restricting companies from engaging in a great deal of the content moderation that they currently perform. It also imposes a range of transparency and due process requirements on platforms with respect to their content moderation. A group of technology companies challenging the law has filed an emergency application to the Supreme Court seeking to put HB20 back on hold while they continue to litigate the law’s constitutionality under the First Amendment. This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Alex Abdo, litigation director at the Knight First Amendment Institute, and Scott Wilkens, senior staff attorney at Knight. The Institute, where Evelyn is a senior research fellow, filed an amicus brief in the Fifth Circuit, taking a middle ground between Texas—which argues that the First Amendment poses no bar to HB20—and the plaintiffs—who argue that the First Amendment prohibits this regulation and many other types of social media regulation besides. So what does the Texas law actually do? Where does the litigation stand—and what will the impact of the Fifth Circuit’s ruling be? And how does the Knight First Amendment Institute interpret, well, the First Amendment? Hosted on Acast. See acast.com/privacy for more information.
May 12, 2022
Internet blackouts are on the rise. Since 2016, governments around the world have fully or partially shut down access to the internet nearly 1,000 times, according to a tally by the human rights organization Access Now. As the power of the internet grows, this tactic has only become more common as a means of political repression. Why is this, and how, exactly, does a government go about turning off the internet? This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke on this topic with Peter Guest, the enterprise editor for the publication Rest of World, which covers technology outside the regions usually described as the West. He’s just published a new project with Rest of World diving deep into internet shutdowns—and the three dug into the mechanics of internet blackouts, why they’re increasing and their wide-reaching effects. Hosted on Acast. See acast.com/privacy for more information.
May 5, 2022
While the U.S. Congress has been holding hearing after hearing with tech executives, full of yelling and short on progress, Europe has been quietly working away on some major tech regulations. Last month, it reached agreement on the content moderation piece of this package: the Digital Services Act. It's sweeping in scope and likely to have effects far beyond Europe. This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek sat down with Daphne Keller, the director of the Program on Platform Regulation at the Stanford Cyber Policy Center, to get the rundown. What exactly is in the act? What does she like about it, and what doesn't she? And how will the internet look different once it comes into force? Hosted on Acast. See acast.com/privacy for more information.
Apr 28, 2022
This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek spoke to Charlotte Willner, who has been working in content moderation longer than just about anyone. Charlotte is now the executive director of the Trust and Safety Professionals Association, an organization that brings together the professionals who write and enforce the rules for what’s fair game and what’s not on online platforms. Before that, she worked in Trust and Safety at Pinterest, and before that she built the very first safety operations team at Facebook. Evelyn asked Charlotte what it was like trying to build a content moderation system from the ground up, what has changed since those early days (spoiler: it’s a lot!) and—of course—whether she had any advice for Twitter’s new owner given all her experience helping keep platforms safe. Hosted on Acast. See acast.com/privacy for more information.
Apr 21, 2022
This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with a reporter who has carved out a unique beat writing about not just technology but the creativity and peculiarities of the people who use it—Taylor Lorenz, a columnist at the Washington Post covering technology and online culture. Her recent writing includes reporting on “algospeak”—that is, how algorithmic amplification changes how people talk online—and coverage of the viral Twitter account Libs of TikTok, which promotes social media posts of LGBTQ people for right-wing mockery. They talked about the quirks of a culture shaped in conversation with algorithms, the porous border between internet culture and political life in the United States, and what it means to take the influence of social media seriously, for good and for ill. Hosted on Acast. See acast.com/privacy for more information.
Apr 14, 2022
The internet is increasingly emerging as a source for identification and documentation of war crimes, as the Russian invasion of Ukraine has devastatingly proven yet again. But how does an image of a possible war crime go from social media to before a tribunal in a potential war crimes prosecution? On a recent episode of Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Nick Waters, the lead on Justice and Accountability at Bellingcat, about how open-source investigators go about documenting evidence of atrocity. This week on the show, Evelyn and Quinta interviewed Alexa Koenig, the executive director of the Human Rights Center at the University of California, Berkeley, and an expert on using digital evidence for justice and accountability. They talked about how international tribunals have adapted to using new forms of evidence derived from the internet, how social media platforms have helped—and hindered—collection of this kind of evidence, and the work Alexa has done to create a playbook for investigators downloading and collecting material documenting atrocities. Because of the nature of the conversation, this discussion contains some descriptions of violence that might be upsetting for some listeners. Hosted on Acast. See acast.com/privacy for more information.
Apr 7, 2022
We’re taking a look back at one of the stranger stories about social media platforms and the role of the press in the last presidential election. In the weeks before the 2020 election, the New York Post published an “October Surprise”: a set of stories on the business and personal life of Hunter Biden, the son of Democratic presidential candidate Joe Biden, based on emails contained on a mysterious laptop. A great deal was questionable about the Post’s reporting, including to what extent the emails in question were real and how the tabloid had obtained them in the first place. The mainstream press was far more circumspect in reporting out the story—and meanwhile, Twitter and Facebook sharply restricted circulation of the Post’s stories on their platforms. It’s a year and a half later. And the Washington Post just published a lengthy report verifying the authenticity of some of the emails on the mysterious laptop—though a lot still remains unclear about the incident. In light of this news, how should we understand Facebook and Twitter’s actions in 2020? Washington Post technology reporter Will Oremus weighed in on this question in his own reflection for the paper. This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic invited him on the show to discuss the story. Did the social media platforms go too far in limiting access to the New York Post’s reporting? How did the mainstream press deal with the incident? What have we learned from the failures of how the press and social media responded to information operations around the 2016 election, and what can we learn from how they behaved differently in 2020? Hosted on Acast. See acast.com/privacy for more information.
Mar 31, 2022
This week on Arbiters of Truth, our series on the online information environment, we’re turning our attention to the United Kingdom, where the government has just introduced into Parliament a broad proposal for regulating the internet: the Online Safety Bill. The U.K. government has proclaimed that the Bill represents new “world-first online safety laws” and includes “tougher and quicker criminal sanctions for tech bosses.” So … what would it actually do? To answer this question, Evelyn Douek and Quinta Jurecic spoke with Ellen Judson, a senior researcher at the Centre for the Analysis of Social Media at Demos, a U.K. think tank. Ellen has been closely tracking the legislation as it has developed. And she helped walk us through the tangled system of regulations created by the bill. What new obligations does the Online Safety Bill create, and what companies would those obligations apply to? Why is the answer to so many questions “yet to be defined”—a phrase we kept saying again and again throughout the show—and how much of the legislation is just punting the really difficult questions to another day? What happens now that the bill has been formally introduced in Parliament? Hosted on Acast. See acast.com/privacy for more information.
Mar 24, 2022
Over the last few weeks, we’ve talked a lot about the war in Ukraine on this series—how the Russian, Ukrainian and American governments are leveraging information as part of the conflict; how tech platforms are navigating the flood of information coming out of Ukraine and the crackdown from the Kremlin; and how open-source investigators are documenting the war. This week on Arbiters of Truth, our series on the online information environment, we’re going to talk about getting information into Russia during a period of rapidly increasing repression by the Russian government. Evelyn Douek and Quinta Jurecic spoke with Thomas Kent, a former president of the U.S. government-funded media organization Radio Free Europe/Radio Liberty, who now teaches at Columbia University. He recently wrote an essay published by the Center for European Policy Analysis on “How to Reach Russian Ears,” suggesting creative ways that reporters, civil society and even the U.S. government might approach communicating the truth about the war in Ukraine to Russians. This was a thoughtful and nuanced conversation about a tricky topic—whether, and how, democracies should think about leveraging information as a tool against repressive governments, and how to distinguish journalism from such strategic efforts. Hosted on Acast. See acast.com/privacy for more information.
Mar 17, 2022
Open-source investigations—sometimes referred to as OSINT, or open-source intelligence—have been crucial to public understanding of the Russian invasion of Ukraine. An enormous number of researchers have devoted their time to sifting through social media posts, satellite images, and even Google Maps to track what’s happening in Ukraine and debunk false claims about the conflict. This week on Arbiters of Truth, our series on the online information ecosystem, we devoted the show to understanding how open-source investigations work and why they’re important. Evelyn Douek and Quinta Jurecic spoke to Nick Waters, the lead on Justice and Accountability at Bellingcat, one of the most prominent groups devoted to conducting these types of investigations. They talked about the crucial role played by open-source investigators in documenting the conflict in Syria—well before the war in Ukraine—and how the field has developed since its origins in the Arab Spring and the start of the Syrian Civil War. And Nick walked us through the mechanics of how open-source investigations actually happen, and how social media platforms have helped—and hindered—that work. Hosted on Acast. See acast.com/privacy for more information.
Mar 10, 2022
As Russia’s brutal war in Ukraine continues, tech platforms like Facebook and Twitter have been key geopolitical players in the conflict. The Kremlin has banned those platforms and others as part of a sharp clampdown on freedoms within Russia. Meanwhile, these companies must decide what to do with state-funded Russian propaganda outlets like RT and Sputnik that have accounts on their platforms—and how best to moderate the flood of information, some of it gruesome or untrue, that’s appearing as users share material about the war. This week on Arbiters of Truth, our podcast series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Alex Stamos, director of the Stanford Internet Observatory. They discussed how various platforms, from Twitter to TikTok and Telegram, are moderating the content coming out of Russia and Ukraine right now; the costs and benefits of Western companies pulling operations out of Russia during a period of increasing crackdown; and how the events of the last few weeks might shape our thinking about the nature and power of information operations. Hosted on Acast. See acast.com/privacy for more information.
Mar 3, 2022
Ever since he was banned from Twitter and Facebook in January 2021, Donald Trump has been promising the launch of a new, Trump-run platform to share his thoughts with the world. In February 2022, that network—Truth Social—finally launched. But it’s been a debacle from start to finish, with a lengthy waitlist and a glitchy website that awaits users who finally make it online. Drew Harwell, who covers technology at the Washington Post, has been reporting on the less-than-smooth launch of Truth Social. This week on Arbiters of Truth, our podcast series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with him about who, exactly, this platform is for and who is running it. What explains the glitchy rollout? What’s the business plan … if there is one? And how does the platform fit into the ever-expanding universe of alternative social media sites for right-wing users? Hosted on Acast. See acast.com/privacy for more information.
Feb 24, 2022
Over the last several weeks, Russian aggression toward Ukraine has escalated dramatically. Russian President Vladimir Putin announced on Feb. 21 that Russia would recognize the sovereignty of two breakaway regions in Ukraine’s east, Donetsk and Luhansk, whose years-long effort to secede from Ukraine has been engineered by Russia. Russian troops have entered eastern Ukraine as supposed “peacekeepers,” and the Russian military has taken up positions along a broad stretch of Ukraine’s border. Along with the military dimensions of the crisis, there’s also the question of how various actors are using information to provoke or defuse violence. Russia has been spreading disinformation about supposed violence against ethnic Russians in Ukraine. The United States and its Western partners, meanwhile, have been releasing intelligence about Russia’s plans—and about Russian disinformation—at a rapid and maybe even unprecedented clip. So today on Arbiters of Truth, our series on the online information ecosystem, we’re bringing you an episode about the role of truth and falsehoods in the Russian attack on Ukraine. Evelyn Douek and Quinta Jurecic spoke with Olga Lautman, a non-resident senior fellow at the Center for European Policy Analysis—who has been tracking Russian disinformation in Ukraine—and Shane Harris, a reporter at the Washington Post—who has been reporting on the crisis. Hosted on Acast. See acast.com/privacy for more information.
Feb 17, 2022
Brandon Silverman is a former Facebook executive and founder of the data analytics tool CrowdTangle. Brandon joined Facebook in 2016 after the company acquired CrowdTangle, a startup designed to provide insight into what content is performing well on Facebook and Instagram, and he left in October 2021, in the midst of a debate over how much information the company should make public about its platform. As the New York Times described it, CrowdTangle “had increasingly become an irritant” to Facebook’s leadership “as it revealed the extent to which Facebook users engaged with hyperpartisan right-wing politics and misleading health information.” This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Brandon about what we mean when we talk about transparency from social media platforms and why that transparency matters. They also discussed his work with Congress and other regulators to advise on what legislation ensuring more openness from platforms would look like—and why it’s so hard to draft regulation that works. Hosted on Acast. See acast.com/privacy for more information.
Feb 10, 2022
The Joe Rogan Experience is perhaps the most popular podcast in the world—and it’s been at the center of a weeks-long controversy over COVID misinformation and content moderation. After Rogan invited on a guest who told falsehoods about the safety of COVID vaccines, outrage mounted toward Spotify, the podcasting and music streaming company that recently signed an exclusive deal with Rogan to distribute his show. Spotify came under pressure to intervene, as nearly 300 experts sent the company a letter demanding it take action, and musicians Neil Young and Joni Mitchell pulled their music from Spotify’s streaming service. And the controversy only seems to be growing. This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Ashley Carman, a senior reporter at The Verge who writes the newsletter Hot Pod, covering the podcast and audio industry. She’s broken news on Spotify’s content guidelines and Spotify CEO Daniel Ek’s comments to the company’s staff, and we couldn’t think of a better person to talk to about this slow-moving disaster. How has Spotify responded to the complaints over Rogan, and what does that tell us about how the company is thinking about its responsibilities in curating content? What’s Ashley’s read on the state of content moderation in the podcast industry more broadly? And … is this debate even about content moderation at all? Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
We talk a lot on this show about the responsibility of major tech platforms when it comes to content moderation. But what about problems the platforms can’t—or won’t—fix? Tracy Chou’s solution involves going around platforms entirely and creating tools that give power back to users to control their own experience. She’s the engineer behind Block Party, an app that allows Twitter users to protect themselves against online harassment and abuse. It’s a fine-tuned solution to a problem that a lot of Twitter users struggle with, especially women and particularly women of color. This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Tracy about her work developing Block Party and how the persistent lack of diversity in Silicon Valley contributes to an environment where users have little protection against harassment. They also talked about what it’s like working with the platforms that Block Party and other apps like it are seeking to improve. And they discussed what content moderation problems these kinds of user-driven tools might help solve, and which they won’t. Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
As we’ve discussed on the show, online advertisements are the shifting, unstable sand on which the contemporary internet is built. And one of the many, many ways in which the online ad ecosystem is confusing and opaque involves how advertisers can find their ads popping up alongside content they’d rather not be associated with—and, all too often, not having any idea how that happened. This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke to Nandini Jammi and Claire Atkin of the Check My Ads Institute. Their goal is to serve as a watchdog for the ad industry, and they’ve just started a campaign to let companies know—and call them out—when their ads are showing up next to content published by far-right figures like Steve Bannon who supported the Jan. 6 insurrection. So what is it about the ads industry that makes things so opaque, even for the companies paying to have their ads appear online? What techniques do Claire and Nandini use to trace ad distribution? And how do advertisers usually respond when Check My Ads alerts them that they’re funding “brand unsafe” content? Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
In December 2020, ten state attorneys general sued Google, alleging that the tech giant had created an illegal monopoly over online advertising. The lawsuit is ongoing, and just this January, new allegations in the states’ complaint were freshly unsealed: the states have accused Google of tinkering with its ad auctions to mislead publishers and advertisers and expand its own power in the marketplace. (Google told the Wall Street Journal that the complaint was “full of inaccuracies and lacks legal merit.”) The complaint touches on a crucial debate about the online advertising industry: does it, well, work? This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Tim Hwang, Substack’s general counsel and the author of the book “Subprime Attention Crisis: Advertising and the Time Bomb at the Heart of the Internet.” Tim argues that online advertising, which underpins the structure of the internet as we know it today, is a house of cards—that advertisers aren’t nearly as good as they claim at monetizing our attention, even as they keep marketing it anyway. So how worried should we be about this structure collapsing? If ads can’t convince us to buy things, what does that mean about our understanding of the internet? And what other possibilities are there for designing a better online space? Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
Valerie Wirtschafter and Chris Meserole, our friends at the Brookings Institution, recently published an analysis of how popular podcasters on the American right used their shows to spread the “big lie” that the 2020 election was stolen from Donald Trump. These are the same issues that led tech platforms to crack down on misinformation in the runup to the election—and yet, the question of whether podcast apps have a responsibility to moderate audio content on their platforms has largely flown under the radar. Why is that? This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic talked through this puzzle with Valerie and Chris. They discussed their findings about podcasts and the “big lie,” why it’s so hard to detect misinformation in podcasting, and what we should expect when it comes to content moderation in podcasts going forward. Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
One year ago, a violent mob broke into the U.S. Capitol during the certification of the electoral vote, aiming to overturn Joe Biden’s victory and keep Donald Trump in power as the president of the United States. The internet played a central role in the insurrection: Trump used Twitter to broadcast his falsehoods about the integrity of the election and gin up excitement over January 6, and rioters coordinated ahead of time on social media and afterward posted pictures of the violence. In the wake of the riot, a crackdown by major social media platforms ended with Trump suspended or banned from Facebook, Twitter and other outlets. So how have platforms been dealing with content moderation issues in the shadow of the insurrection? This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic sat down for a discussion with Lawfare managing editor Jacob Schulz. To frame their conversation, they looked to the recent Twitter ban and Facebook suspension of Representative Marjorie Taylor Greene—which took place almost exactly a year after Trump’s ban. Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
In 2018, a group of academics and free expression advocates convened in Santa Clara, California, for a workshop. They emerged with the Santa Clara Principles on Transparency and Accountability in Content Moderation—a high-level list of procedural steps that social media companies should take when making decisions about the content on their services. The principles quickly became influential, earning the endorsement of a number of major technology companies like Facebook. Three years later, a second, more detailed edition of the principles has just been released—the product of a broader consultation process. So what’s changed? This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with David Greene, senior staff attorney and civil liberties director at the Electronic Frontier Foundation. At EFF, he’s been centrally involved in the creation of version 2.0 of the principles. They talked about what motivated the effort to put together a new edition and what role he sees the principles playing in the conversation around content moderation. And they discussed amicus briefs that EFF has filed in the ongoing litigation over social media regulation laws passed by Texas and Florida. Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
On this show, we’ve discussed no end of proposals for how to regulate online platforms. But there’s something many of those proposals are missing: data about how the platforms actually work. Now, there’s legislation in Congress that aims to change that. The Platform Accountability and Transparency Act, sponsored by Senators Chris Coons, Rob Portman and Amy Klobuchar, would create a process through which academic researchers could gain access to information about the operation of these platforms—peering under the hood to see what’s actually happening in our online ecosystems, and perhaps how they could be improved. This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with the man who drafted the original version of this legislation—Nate Persily, the James B. McClatchy Professor of Law at Stanford Law School. He’s been hard at work on the draft bill, which he finally published this October. And he collaborated with Coons, Portman and Klobuchar to work his ideas into the Platform Accountability and Transparency Act. They talked about how Nate’s proposal would work, why researcher access to data is so important and what the prospects are for lasting reforms like this coming out of Congress. Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
We talk a lot about how content moderation involves a lot of hard decisions and trade-offs—but at the end of the day, someone has to make a decision about what stays on a platform and what comes down. This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with “The Decider”—Nicole Wong, who earned that tongue-in-cheek nickname during her time at Google in the 2000s. As the company’s deputy general counsel, Nicole was in charge of decisionmaking over what content Google should remove or keep up in response to complaints from users and governments alike. Since then, she moved on to roles as Twitter’s legal director of products and the deputy chief technology officer of the United States under the Obama administration. In that time, the role of social media platforms in shaping society has grown enormously, but how much have content moderation debates really changed? Quinta and Evelyn spoke with Nicole about her time as the Decider, what’s new and what’s stayed the same since the early days of content moderation, and how her thinking about the danger and promise of the internet has changed over the years. Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with some of the people behind the app that, by this point in the pandemic, you’re probably sick of: Zoom. Quinta and Evelyn sat down with Josh Kallmer, Zoom’s head of global public policy and government relations, and Josh Parecki, Zoom’s associate general counsel and head of trust and safety. Most of us have used Zoom regularly over the last few years thanks to COVID-19, but while you’re likely familiar with the platform as a mechanism for work meetings and virtual happy hours, you may not have thought about it in the context of content moderation. Josh and Josh explained the kinds of content moderation issues they grapple with in their roles at Zoom, how their moderation and user appeals process works, and why Zoom doesn’t think of itself like a phone line or a mail carrier, services that are almost entirely hands-off when it comes to the content they carry. Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
For Thanksgiving, we’re bringing you something a little different—an episode of Rational Security, our light, conversational show about national security and related topics. This week, Alan, Quinta and Scott were joined by a special guest: Evelyn Douek, Quinta’s co-host of the Arbiters of Truth series on the Lawfare podcast feed! They sat down to discuss: —“Getting Rittenhoused”: A jury recently acquitted Kyle Rittenhouse, who at age 17 shot two men in what he claimed was self-defense during last summer’s unrest, of murder charges. What do his trial and its aftermath tell us about the intersection of politics with our criminal justice system? —“Now That’s a Power Serve”: A global pressure campaign by professional tennis players has forced Chinese officials to disclose the location of Chinese tennis player Peng Shuai, who disappeared after publicly accusing a former senior official of sexual assault. Is this a new model for dealing with Chinese human rights abuses? —“Duck Say Quack and Fish Go Blub—But What Did Fox Say?”: Two prominent conservative commentators have resigned from Fox News over its release of a Tucker Carlson film that they say spreads misinformation and promotes violence. Will this be enough to force the network to curb its behavior? For object lessons, Quinta endorsed her favorite pie dough recipe. Alan in turn made an unorthodox recommendation of what to put in that dough: sweet potato pie. Scott encouraged listeners to follow up that big meal with a cup of coffee, made on his beloved Aeropress with a Prismo filter attachment. And if that doesn't work, Evelyn suggested folks tuck in for a nap with her favorite weighted blanket from Bearaby. Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
It’s been roughly a year since the Facebook Oversight Board opened its doors for business—and while you may mostly remember the Board from its decision on Donald Trump’s suspension from Facebook, there’s been a lot going on since then. So we thought it was a good time to check in on how this experiment in platform governance is faring. In October, the Board released its first transparency report, and Facebook—now Meta—has published its own update on how it’s been responding to the Board’s decisions and recommendations. Meanwhile, Lawfare is keeping track of developments on our Facebook Oversight Board Blog, run by the inimitable Tia Sewell. On this episode of Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic talked about what the data shows about what cases the Board is taking, how the Board’s role seems to be evolving, and, of course, whether we’re going to have to start calling this the Meta Oversight Board, thanks to Facebook’s name change. Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
Content moderation in video games turns out to be just as much of a bummer as content moderation everywhere else, perhaps even more so. This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Daniel Kelley, the director of strategy and operations for the Anti-Defamation League’s Center for Technology and Society. He studies how companies deal with the many moderation issues that pop up in gaming, from harassment to digital recreations of violent hate crimes and white nationalist propaganda. And his team at the Anti-Defamation League has a new report out on how players experience abuse—but also joy and connection—while gaming. Quinta and Evelyn asked Daniel to make the case for why everyone, gamers and non-gamers alike, should care about games, why harassment in gaming seems particularly bad compared to non-gaming platforms, and where the gaming industry stands when it comes to investing in content moderation. Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
There’s been a lot of news recently about Facebook, and a lot of that news has focused on the frustration of employees assigned to the platform’s civic integrity team or other corners of the company focused on ensuring user trust and safety. If you read reporting on the documents leaked by Facebook whistleblower Frances Haugen, you’ll see again and again how these Facebook employees raised concerns about the platform and proposed solutions only to be shot down by executives. That’s why it’s an interesting time to talk to two former Facebook employees who both worked on the platform’s civic integrity team. This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Sahar Massachi and Jeff Allen, who recently unveiled a new project, the Integrity Institute, aimed at building better social media. The goal is to bring the expertise of current and former tech employees to inform the ongoing discussion around whether and how to regulate big social media platforms. They dug into the details of what they feel the Institute can add to the conversation, the nitty-gritty of some of the proposals around transparency and algorithms that the Institute has already set out, and what the mood is among people who work in platform integrity right now. Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
This week on Arbiters of Truth, our series on the online information ecosystem, we’re talking about a subject that doesn’t come up much on the Lawfare Podcast: the Securities and Exchange Commission. Facebook whistleblower Frances Haugen has made waves with her congressional testimony and the many damaging news stories being reported about Facebook based on the documents she released. But before these documents became the Facebook Papers, Haugen also handed them to the SEC as part of a whistleblower complaint against the company. So, we thought we should dig into what that actually means. What is the likelihood that Haugen’s SEC filings turn into an investigation into the company? Should Facebook be worried? Evelyn Douek and Quinta Jurecic discussed these questions with Jacob Frenkel, who spent years at the SEC and is now the chair of government investigations and securities enforcement at the law firm Dickinson Wright. He explained how to understand the SEC’s role in cases like these, why whistleblowers like Haugen file complaints with the SEC, and why he thinks it’s unlikely that the agency will investigate Facebook based on Haugen’s disclosures. Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
On this week’s episode of Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Nick Pickles, Twitter's senior director for global public policy strategy, development and partnerships. They discussed a new paper just released by Twitter, “Protecting the Open Internet: Regulatory Principles for Policy Makers”—which sketches out, in broad strokes, the company’s vision for what global technology policy should look like. The paper discusses a range of issues, from transparency to everyone’s favorite new topic, algorithms. As a platform that’s often mentioned in the same breath as Google and Facebook, but is far smaller—with hundreds of millions of users rather than billions—Twitter stands at an interesting place in the social media landscape. How does Twitter define the “open internet,” exactly? How much guidance is the company actually giving to policymakers? And what does the director of global public policy strategy do all day? Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
Facebook whistleblower Frances Haugen’s recent testimony before Congress has set in motion a renewed cycle of outrage over the company’s practices—and a renewed round of discussion around what, if anything, Congress should do to rein Facebook in. But how workable are these proposals, really? This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Jeff Kosseff, an associate professor of cybersecurity law at the United States Naval Academy, and the guy who has literally written not just the book on this, but two of them. He is the author of “The Twenty-Six Words That Created the Internet,” a book about Section 230, and he has another book coming out next year about First Amendment protections for anonymous speech, titled “The United States of Anonymous.” So Jeff is very well positioned to evaluate recent suggestions that Facebook should, for example, limit the ability of young people to create what users call Finstas, a second, secret Instagram account for a close circle of friends—or Haugen’s suggestion that the government should regulate how Facebook amplifies certain content through its algorithms. Jeff discussed the importance of online anonymity, the danger of skipping past the First Amendment when proposing tech reforms, and why he thinks that Section 230 reform has become unavoidable … even if that reform might not make any legal or policy sense. Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
In the last few weeks, the Russian government has been turning up the heat on tech platforms in an escalation of its long-standing efforts to bring the internet under its control. First, Russia forced Apple and Google to remove an app from their app stores that would have helped voters select non-Kremlin-backed candidates in the country’s recent parliamentary elections. Then, the government threatened to block YouTube within Russia if the platform refused to reinstate two German-language channels run by the state-backed outlet RT. And after we recorded this podcast, the Russian government announced that it would fine Facebook for not being quick enough in removing content that Russia identified as illegal. What’s driving this latest offensive, and what does it mean for the future of the Russian internet? This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Alina Polyakova, the president and CEO of the Center for European Policy Analysis, and Anastasiia Zlobina, the coordinator for Europe and Central Asia at Human Rights Watch. They explained what this crackdown means for social media platforms whose Russian employees might soon be at risk, the legal structures behind the Russian government’s actions and what’s motivating the Kremlin to extend its control over the internet. Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
Just two days ago, on September 28, CNN announced that it was turning off access to its Facebook pages in Australia. Why would the network cut off Facebook users Down Under? It’s not a protest of Facebook or… Australians. CNN’s move was prompted by a recent ruling by the High Court of Australia in Fairfax Media v. Voller, which held that media companies can be held liable for defamatory statements made by third parties in the comments on their public pages, even if they didn’t know about them. This is a pretty extraordinary expansion of potential liability for organizations that run public pages with a lot of engagement. On this week’s episode of Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with David Rolph, a professor at the University of Sydney, about the ruling and its implications. Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
Today, we’re bringing you another episode of Arbiters of Truth, our series on the online information ecosystem. We’ll be talking about “The Facebook Files”—a series of stories by the Wall Street Journal about Facebook’s failures to mitigate harms on its platform. There’s a lot of critical reporting about Facebook out there, but what makes the Journal’s series different is that it’s based on documents from within the company itself—memos from Facebook researchers, identifying problems based on hard data, proposing solutions that Facebook leadership then fails or refuses to implement and contradicts in public statements. One memo literally says, “We are not actually doing what we say we do publicly.” To discuss the Journal’s reporting, Evelyn Douek and Quinta Jurecic spoke with Jeff Horwitz, a technology reporter at the paper who obtained the leaked documents and led the team reporting the Facebook Files. What was it like working on the series? What's his response to Facebook's pushback? And why is there so much discontent within the company? Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
Today, we’re bringing you another episode of Arbiters of Truth, our series on the online information ecosystem. In a 2018 Senate hearing, Facebook CEO Mark Zuckerberg responded to a question about how his company makes money with a line that quickly became famous: “Senator, we sell ads.” And indeed, when you open up your Facebook page—or most other pages on the internet—you’ll find advertisements of all sorts following you around. Sometimes they’re things you might really be interested in buying, even if you’ve never heard of them before—tailored to your interests with spooky accuracy. Other times, they’re redundant or just … weird. Like the ad for a pair of strange plaid pajamas with a onesie-style flap on the bottom that briefly took over the internet in December 2020. Shoshana Wodinsky, a staff reporter at Gizmodo, wrote a great piece explaining how exactly those onesie pajamas made their way to so many people’s screens. She’s one of very few reporters covering the business of online advertisements outside industry publications—so Evelyn Douek and Quinta Jurecic spoke to her this week about what it’s like reporting on ads. How exactly does ad technology work? Why is it that the ad ecosystem gets so little public attention, even as it undergirds the internet as we know it? And what’s the connection between online ads and content moderation? Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
Let’s say you’re a freedom-loving American fed up with Big Tech’s effort to censor your posts. Where can you take your business? One option is Parler—the social media platform that became notorious for its use by the Capitol rioters. Another is Gettr—a new site started by former Trump aide Jason Miller. Unfortunately, both platforms have problems. They don’t work very well. They might leak your personal data. They’re full of spam. And they seem less than concerned about hosting some of the internet’s worst illegal content. Can it be that some content moderation is necessary after all? Today, we’re bringing you another episode of our Arbiters of Truth series on the online information ecosystem. Evelyn Douek and Quinta Jurecic spoke with David Thiel, the big data architect and chief technical officer of the Stanford Internet Observatory. With his colleagues at Stanford, David has put together reports on the inner workings of both Parler and Gettr. They talked about how these websites work (and don’t), the strange contours of what both platforms are and aren’t willing to moderate, and what we should expect from the odd world of “alt-tech.” Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
This week on our Arbiters of Truth series on our online information ecosystem, we’re going to be talking about … disinformation! What else? It’s everywhere. It’s ruining society. It’s the subject of endless academic articles, news reports, opinion columns, and, well, podcasts. Welcome to what BuzzFeed News reporter Joe Bernstein has termed “Big Disinformation.” In a provocative essay in the September issue of Harper’s Magazine, he argues that anxiety over bad information has become a cultural juggernaut that draws in far more attention and funding than the problem really merits—and that the intellectual foundations of that juggernaut are, to a large extent, built on sand. Joe joined Evelyn Douek and Quinta Jurecic to discuss his article and the response to it among researchers and reporters who work in the field. Joe explained his argument and described what it feels like to be unexpectedly cited by Facebook PR. What led him to essentially drop a bomb into an entire discipline? What does his critique mean for how we think about the role of platforms in American society right now? And … is he right? Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
When the Taliban seized power following the U.S. withdrawal from Afghanistan this month, major platforms like Facebook and Twitter faced a quandary. What should they do with accounts and content belonging to the fundamentalist insurgency that was suddenly running a country? Should they treat the Taliban as the Afghan government and let them post, or should they remove Taliban content under U.S. sanctions law? If you follow the tech sphere, you may have seen commentary in recent weeks about how questions of what to do about Taliban accounts have thrust platforms into the center of geopolitics and raised new and difficult issues. But how new are these problems, really? On this week’s episode of our Arbiters of Truth series on our online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Scott R. Anderson, a senior editor at Lawfare and a fellow at the Brookings Institution, whom you might have heard on some other Lawfare podcasts about Afghanistan in recent weeks. They talked about the problems of recognition and sanctions law that platforms are now running into—and they debated whether the platforms are navigating uncharted territory or dealing with the same problems that other institutions, like banks, have long grappled with. Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
In October 2020, Facebook sent a cease and desist letter to two New York University researchers collecting data on the ads Facebook hosts on its platform, arguing that the researchers were breaching the company’s terms of service. The researchers disagreed and continued their work. On August 3, after months of failed negotiations, Facebook shut off access to their accounts—an aggressive move that journalists and scholars denounced as an effort by the company to shield itself from transparency. For this week’s episode of our Arbiters of Truth series on our online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Alex Abdo, the litigation director at the Knight First Amendment Institute at Columbia University (where, full disclosure, Evelyn will soon join as a senior research fellow). The Knight Institute is providing legal representation to the two NYU researchers, Laura Edelson and Damon McCoy—and Alex walked us through what exactly is happening here. Why did Facebook ban Edelson and McCoy’s accounts, and what does their research tool, Ad Observer, do? What’s the state of the law, and is there any merit to Facebook’s claims that its hands are tied? And what does this mean for the future of research and journalism on Facebook? Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
We live in the Disinformation Age. The internet has revolutionized our information ecosystem and caused disruption totally unprecedented in human history, and democracy may not survive. ... Just like it didn’t survive the television, radio, telegraph and printing press before it. Right? When it comes to talking about the internet, all too often history is either completely ignored with bold claims about how nothing like this has ever happened before—or it’s invoked with simple analogies to historical events without acknowledging their very different contexts. As usual, the real answer is more complicated: talking about history can inform our understanding of the dilemmas we face today, but it rarely provides a clear answer one way or another to contemporary problems. This week on our Arbiters of Truth series on our online information ecosystem, Quinta Jurecic spoke with Heidi Tworek, an associate professor at the University of British Columbia’s School of Public Policy and Global Affairs and Department of History. In a recent essay, she made the case for how a nuanced view of history can better inform ongoing conversations around how to approach disinformation and misinformation. So how do current discussions around disinformation leave out or misinterpret history? What’s the difference between a useful historical comparison and a bad one? And why should policymakers care? Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
There have been a thousand hot takes about the Facebook Oversight Board, the Supreme Court-like thing Facebook set up to oversee its content moderation. The Board generated so much press coverage when it handed down its decision on Donald Trump’s account that Kaitlyn Tiffany at The Atlantic called the whole circus “like Shark Week, but less scenic.” Everyone weighed in, from Board Members to lawmakers, academics, critics and even Lawfare podcast hosts. But there’s a group we haven’t heard much from: the people at Facebook who are actually responsible for sending cases to the Board and responding to the Board’s policy recommendations. Everyone focuses on the Board Members, but the people at Facebook are the ones who can make the Board experiment actually translate into change—or not. So this week for our Arbiters of Truth series on our online information environment, in light of Facebook’s first quarterly update on the Board, Evelyn Douek talked with Jennifer Broxmeyer and Rachel Lambert, both of whom work on Facebook’s side of the Oversight Board experiment. What do they think of the first six or so months of the Oversight Board’s work? How do they grade their own efforts? Why is their mark different from Evelyn’s? And, will the Oversight Board get jurisdiction over the metaverse? Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
The attempted insurrection on January 6 is back in the headlines. This week, the House select committee investigating the Capitol riot began its work with its very first hearing. So for our Arbiters of Truth series on our online information environment, Evelyn Douek interviewed Quinta Jurecic about social media’s role in warning of the riot. Specifically, they talked about an essay Quinta wrote in Lawfare on the FBI’s failure to examine social media posts announcing plans to storm the Capitol—and how FBI Director Christopher Wray’s explanations don’t hold water. So why does Quinta think Wray has been misleading in his answers to Congress on why the FBI didn’t review those posts from soon-to-be-rioters? What about the First Amendment issues raised by the U.S. government refreshing your Twitter feed? What role is social media playing in the Jan. 6 prosecutions—and what does that say about how tech companies should preserve online evidence of wrongdoing, rather than just taking it down? Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
This week we’re bringing you the breakdown of the heavyweight bout of the century—a battle over vaccine misinformation. In the left corner we have the White House. Known for its impressive arsenal and bully pulpit, this week it asked for the fight and came out swinging with claims that Facebook is a killer—and not in a good way. In the right corner we have Facebook, known for its ability to just keep taking punches while continuing to grace our screens and rake in the cash. The company has hit back with gusto, saying that Facebook has actually helped people learn the facts on vaccines. Period. Will either of them land a knockout blow? Is this just the first round of many matchups? On this episode of our Arbiters of Truth series on our online information ecosystem, we devote the conversation to the latest slugfest between Facebook and the White House. Evelyn Douek and Quinta Jurecic spoke with Renée DiResta, the research manager at the Stanford Internet Observatory, and Brendan Nyhan, professor of government at Dartmouth College, both of whom have been working on questions of online health misinformation. Let’s get ready to rumble. Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
On May 24, Florida Governor Ron DeSantis signed into law a bill designed to limit how social media platforms can moderate content. Technology companies, predictably, sued—and on June 30, Judge Robert Hinkle of the U.S. District Court for the Northern District of Florida granted a preliminary injunction against the law. The legislation, which purported to end “censorship” online by “big tech,” received a lot of commentary and a great deal of mockery from academics and journalists. Among other things, it included an exemption for companies that operate theme parks. But Alan Rozenshtein argues in a piece for Lawfare that though the law may be poorly written, the issues raised by the litigation are worth taking seriously. This week on our Arbiters of Truth miniseries on our online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Alan—an associate professor of law at the University of Minnesota Law School and a senior editor at Lawfare—about the Florida legislation. What exactly would the law have done, anyway? Why does Alan think the judge underplays the potential First Amendment considerations raised by private companies exerting control over huge swaths of the online public sphere? And what’s with the theme park stuff? Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
The news business in America is in crisis. Between 2008 and 2019, newspapers in the U.S. lost half of their newsroom employees. Journalism jobs cut during the pandemic number in the tens of thousands. Local news is suffering the most, with cutbacks across the country and many communities left without a reliable source of information for what’s going on in their area. Why is this a crisis not just for journalists, but also for democracy? In today’s episode of our Arbiters of Truth series on the online information ecosystem, Evelyn Douek and Quinta Jurecic turn to that question with Martha Minow, the 300th Anniversary University Professor at Harvard Law School. She’s written a new book, titled “Saving the News: Why the Constitution Calls for Government Action to Protect Freedom of Speech.” How should we understand the crisis facing American newsrooms? How has the U.S. government historically used its power to create a hospitable environment for news—and how should that history shape our understanding of what interventions are possible today? And what role does the First Amendment play in all this? Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
This week on Arbiters of Truth, our podcast on our online information ecosystem, Evelyn Douek and Quinta Jurecic bring you an episode they’ve wanted to record for a while: a conversation with Nathaniel Gleicher, the head of security policy at Facebook. He runs the corner of Facebook that focuses on identifying and tackling threats aimed at the platform, including information operations. They discussed a new report released by Nathaniel’s team on “The State of Influence Operations 2017-2020.” What kinds of trends is Facebook seeing? What is Nathaniel’s response to reports that Facebook is slower to act in taking down dangerous content outside the U.S.? What about the argument that Facebook is designed to encourage circulation of exactly the kind of incendiary content that Nathaniel is trying to get rid of? And, of course, they argued over Facebook’s use of the term “coordinated inauthentic behavior” to describe what Nathaniel argues is a particularly troubling type of influence operation. How does Facebook define it? Does it mean what you think it means? Hosted on Acast. See acast.com/privacy for more information.
Feb 4, 2022
This week on Arbiters of Truth, our podcast on our online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Camille François, the chief innovation officer at Graphika, about a new report released by her team earlier this month on an apparent Russian influence operation aimed at so-called “alt-tech” platforms, like Gab and Parler. A group linked to the Russian Internet Research Agency “troll farm” has been posting far-right memes and content on these platforms over the last year. But how effective has their effort really been? What does the relatively small scale of the operation tell us about how foreign interference has changed in the last four years? Has the media’s—and the public’s—understanding of information operations caught up to that changing picture? One note: Camille references the “ABC framework” for understanding information operations. That’s referring to a framework she developed where operations can be understood along three vectors: manipulative actors, deceptive behavior and harmful content. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
TikTok has rapidly become one of the most popular apps for teenagers across the world for dancing, lip-syncing and sharing details about their lives. But if you cast your mind back to last year—specifically, August 2020—you may recall that the app’s future in the United States suddenly fell into doubt. The Trump administration began arguing that the app’s ownership by the Chinese company ByteDance raised problems of national security for the United States. ByteDance was ordered to divest from TikTok, and the app, along with the popular China-based chat app WeChat, faced U.S. sanctions. But you might have noticed that your teenager is still making TikTok videos. And President Biden issued his own executive order last week revoking Trump’s sanctions. So, what on earth is happening? On this week’s episode of our Arbiters of Truth series on our online information ecosystem, Evelyn Douek and Quinta Jurecic spoke to Bobby Chesney, Lawfare co-founder and Charles I. Francis Professor in Law at the University of Texas School of Law, about what’s happened to TikTok over the past year. Bobby brought us up to speed with the Trump administration’s offensive on TikTok, why the app has survived so far and why TikTok shouldn’t breathe easy just yet about Biden’s executive order. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
If you’ve listened to this show, you’ve probably read a fair number of news stories—and maybe even listened to some podcast episodes—about the Facebook Oversight Board’s recent ruling on the platform’s decision to ban President Trump’s account. The board temporarily allowed Facebook to keep Trump off the platform, but criticized the slapdash way Facebook made that call and provided a long list of recommendations for Facebook to respond to. Well, now Facebook has responded—announcing that it will ban Trump from the platform for two years. And though the response hasn’t gotten as much coverage as the initial ruling, it’s arguably more important for what it says about both Facebook and the Facebook Oversight Board’s role in the future of content moderation. This week on the Lawfare Podcast’s Arbiters of Truth series on our online information ecosystem, Quinta Jurecic interviewed Lawfare managing editor Jacob Schulz and Arbiters of Truth co-host Evelyn Douek about Facebook’s response to the board. What did Facebook say in addition to its two-year Trump ban? Why is Evelyn grumpy about it? And what’s next for Facebook, the Oversight Board and Trump himself? Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
Way back at the beginning of the Arbiters of Truth podcast series on our online information ecosystem, Evelyn Douek and Quinta Jurecic invited David Kaye to talk about international human rights law (IHRL) and content moderation. David is a clinical professor of law at the University of California, Irvine, and when he was first on the show, he was also the United Nations Special Rapporteur on freedom of expression. It’s been a year and a half since then, and in the intervening time, David’s vision of IHRL as a guiding force for content moderation has become mainstream. So Quinta and Evelyn asked him back on to discuss the increasingly important role played by IHRL in content moderation—and what it really means in practice. They also talked about the rise of digital authoritarianism around the world and what international law and leading democracies can do about it. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
In March 2019, a shooter carried out two mass killings at mosques in Christchurch, New Zealand, livestreaming the first shooting on Facebook. Two months later, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron convened the Christchurch Call—a commitment joined by both governments and technology companies “to eliminate terrorist and violent extremist content online.” It’s now been two years since the Christchurch Call. To discuss those years and what comes next, Evelyn Douek and Quinta Jurecic of the Arbiters of Truth series of the Lawfare Podcast spoke with Dia Kayyali, who serves as a co-chair of the Advisory Network to the Christchurch Call, a group of civil society organizations that work to ensure that the signatories to the Call consider a more diverse range of expertise and perspectives when implementing its commitments. Dia is a long-time digital rights activist and the associate director for advocacy at Mnemonic, an organization that works to preserve online documentation of human rights abuses. What has their experience been like as a voice for civil society in these conversations around the Call? What should we make of the recent decision by the Biden administration to sign the United States on to the Call? And what are the risks of potentially over-aggressive moderation in an effort to take down “terrorist” content? Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Arbiters of Truth, the Lawfare Podcast’s series on our online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with the journalist Will Oremus, who until recently was a senior writer at the technology publication OneZero and who is one of the most astute observers of online platforms and their relationship to the media. They dug into Will’s reporting on the social media platform Nextdoor. The app is designed to connect neighbors, but Will argues it’s filling the space left by collapsing local news—which may not be the best development when the platform is struggling with many of the common challenges of content moderation. And, of course, they also talked about the inescapable, ever-present elephant in the room—the Facebook Oversight Board’s ruling on Donald Trump’s account. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
The wait is over. Four months after Facebook indefinitely banned Donald Trump from its platform following the Capitol riot, the Facebook Oversight Board—the platform’s self-appointed quasi-court—has weighed in on whether or not it was permissible for Facebook to do so. And the answer is ... complicated. Mark Zuckerberg can still keep Trump off his platform for now, but the board says that Facebook must review its policies and make a final decision about the former president’s fate within six months. To discuss the decision, Lawfare Editor-in-Chief Benjamin Wittes hosted a special episode of Arbiters of Truth, our Lawfare Podcast miniseries on our online information ecosystem. He sat down with Evelyn Douek, Quinta Jurecic and Lawfare Deputy Managing Editor Jacob Schulz for a conversation about the Oversight Board’s ruling. Did the Oversight Board make the right call? What might the mood be like in Facebook headquarters right now? What about Twitter’s? And is this decision really the Oversight Board’s Marbury v. Madison moment? Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
Odds are, you probably haven’t heard of the Israeli government’s “Cyber Unit,” but it’s worth paying attention to whether or not you live in Israel or the Palestinian territories. It’s an entity that, among other things, reaches out to major online platforms like Facebook and Twitter with requests that the platforms remove content. It’s one of a number of such agencies around the globe, which are known as Internet Referral Units. Earlier in April, the Israeli Supreme Court gave a green light to the unit’s activities, rejecting a legal challenge that charged the unit with infringing on constitutional rights. This week on Arbiters of Truth, the Lawfare Podcast’s miniseries on our online information ecosystem, Evelyn Douek and Quinta Jurecic talked to Fady Khoury and Rabea Eghbariah, who were part of the legal team that challenged the Cyber Unit’s work on behalf of Adalah, the Legal Center for Arab and Minority Rights in Israel. Why do they—and many other human rights activists—find Internet Referral Units so troubling, and why do governments like the units so much? Why did the Israeli Supreme Court disagree with Fady and Rabea’s challenge to the unit’s activities? And what does the Court’s decision say about the developing relationship between countries’ legal systems and platform content moderation systems? Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Arbiters of Truth, the Lawfare Podcast’s miniseries on our online information ecosystem, Evelyn Douek and Quinta Jurecic talked to Sean Li, who until recently was the head of Trust and Safety at Discord. Discord is experiencing phenomenal growth and is an established player in a space that is the new hot thing: audio social media. And as the head of Trust and Safety, Sean was responsible for running the team that mitigates all the bad stuff that happens on the platform. Evelyn and Quinta asked Sean what it’s like to have that kind of power—to be the eponymous “arbiter of truth” of a slice of the internet. They also discussed what makes content moderation of live audio content different from the kind we normally talk about—namely, text-based platforms. As almost every social media platform is trying to get into audio, what should they be prepared for? Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Arbiters of Truth, the Lawfare Podcast’s miniseries on our online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Jameel Jaffer and Ramya Krishnan of the Knight First Amendment Institute. What do facial recognition software and President Trump’s erstwhile Twitter habits have in common? They both implicate the First Amendment—and hint at how old doctrines struggle to adapt to new technologies. Evelyn and Quinta talked to Jameel and Ramya about the long-running lawsuit by the Knight Institute over whether it violates the First Amendment for the president to block people on Twitter—a lawsuit that the Supreme Court just ended. They also asked Ramya and Jameel about the controversial facial recognition startup Clearview AI, in light of recent reporting showing just how much law enforcement uses that technology. Clearview is now confronting multiple lawsuits on the grounds that the company’s practices violate privacy laws, and its defense is that its activities are protected by the First Amendment. These cases don’t neatly fit into existing First Amendment categories, so Evelyn and Quinta asked Jameel and Ramya about the possible paths the law might take to adjust to the digital age. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
If you’re listening to this podcast, the odds are that you’ve heard a lot about QAnon recently—and you might even have read some alarming reporting about how belief in the conspiracy theory is on the rise. But is it really? This week on Arbiters of Truth, the Lawfare Podcast’s miniseries on our online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Joseph Uscinski, an associate professor of political science at the University of Miami who studies conspiracy theories. He explained why conspiracy theories in America aren’t actually at a new apex, what kinds of people are drawn to ideas like QAnon and what role—if any—social media platforms like Facebook and Twitter should have in limiting the spread of conspiracy theories. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Arbiters of Truth, the Lawfare Podcast’s miniseries on our online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Issie Lapowsky, a senior reporter at the tech journalism publication Protocol. They discussed last week’s hearing before the House Energy and Commerce Committee with the CEOs of Facebook, Google and Twitter—the first time the companies had been called to testify on the Hill after the Capitol riot, which focused public attention on the content moderation policies of tech platforms when it comes to domestic extremism. The hearing produced some interesting takeaways, but also a lot of moments when the CEOs were awkwardly forced to answer complicated questions with a simple “yes” or “no.” They also discussed Issie’s reporting on how tech companies have struggled to figure out how to address far-right extremism in the United States as opposed to Islamist extremism. And they talked about Section 230 reform and what it’s like reporting on the tech space. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Arbiters of Truth, Evelyn Douek and Quinta Jurecic sat down with Brendan Nyhan to discuss the crucial platform that often seems to slip under the radar in discussions of mis- and disinformation: YouTube. Brendan is a professor of government at Dartmouth College, who has just co-authored a report with the Anti-Defamation League on “Exposure to Alternative and Extremist Content on YouTube.” There’s a common conception that YouTube acts as a radicalization engine, pushing viewers from mainstream content to increasingly radical material. But Brendan and his coauthors found a somewhat different story: YouTube may not funnel all viewers toward extreme content, but it does reliably recommend that content to users who are already viewing it. They discussed his findings and how we should understand the role that YouTube plays in the information ecosystem. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Arbiters of Truth, the Lawfare Podcast’s miniseries on our online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Daphne Keller, the director of the Program on Platform Regulation at Stanford’s Cyber Policy Center and an expert on Section 230 of the Communications Decency Act, the statute that shields internet platforms from civil liability for third-party content on their websites. The statute has been criticized by both Democrats and Republicans, and both President Trump and President Biden separately called for its repeal. So what should we expect in terms of potential revision of 230 during the current Congress? What does Daphne think about the various proposals on the table? And how is it that so many proposals to reform 230 would be foiled by that pesky First Amendment? Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Arbiters of Truth, the Lawfare Podcast’s miniseries on disinformation and misinformation, Evelyn Douek and Quinta Jurecic spoke with Genevieve Lakier, an assistant professor at the University of Chicago Law School and First Amendment expert. It’s basically impossible to have a conversation about content moderation without someone crying “First Amendment!” at some point. But the cultural conception of the First Amendment doesn’t always match the legal conception. Evelyn and Quinta spoke with Genevieve about what First Amendment doctrine actually says, how its history might be quite different from what you think and what the dynamism of the doctrine over time—and the current composition of the Supreme Court—might suggest about the First Amendment’s possible futures for grappling with the internet. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Arbiters of Truth, the Lawfare Podcast’s miniseries on disinformation and misinformation, Evelyn Douek and Quinta Jurecic spoke with Emily Bell, the founding director of the Tow Center for Digital Journalism at Columbia Journalism School. Emily testified before Congress last week about the role of legacy media, and cable news in particular, in spreading disinformation, but she’s also one of the keenest observers of the online news ecosystem and knows a lot about it from her days as director of digital content for The Guardian. They talked about the relationship between online and offline media in spreading disinformation, the role different institutions need to play in fixing what’s broken and whether all the talk about “fighting misinformation” is a bit of a red herring. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Lawfare’s Arbiters of Truth miniseries on disinformation and misinformation, Evelyn Douek and Quinta Jurecic spoke with Rasmus Kleis Nielsen, the director of the Reuters Institute and professor of political communication at the University of Oxford, about the fight between Australia and Facebook. After Australia proposed a law that would force Facebook to pay for content linked on its platform from Australian news sites, Facebook responded by blocking any news posts in the country. The company and the Australian government have since resolved the spat—for now—but the dust-up raises bigger questions about the relationship between traditional media and social media platforms and the future of the media industry. They talked not only about Australia, but also about the role of social media in contributing to political polarization, the outlook for various business models funding journalism and what political solutions—other than Australia’s—might look like. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
Right now in India, there’s a legal battle that could portend the future of the internet. In this episode of Arbiters of Truth, Lawfare’s miniseries on disinformation and misinformation, Evelyn Douek and Quinta Jurecic spoke with Chinmayi Arun, a resident fellow at the Information Society Project at Yale Law School and an affiliate of the Berkman Klein Center for Internet & Society at Harvard University. She discussed one of the biggest stories about freedom of expression online today—the battle between Twitter and the Indian government, which has demanded that Twitter geoblock a large number of accounts, including the account of a prominent investigative magazine, in response to protests by tens of thousands of farmers across India. Chinmayi walked us through the political context of the farmers’ protests, how the clash between Twitter and the Indian government is part of an increasingly constrained environment for freedom of expression in India, and where this battle might end up. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Arbiters of Truth, Lawfare’s miniseries on disinformation and misinformation, Evelyn Douek and Quinta Jurecic spoke with Ben Smith, media columnist for the New York Times and former editor-in-chief of BuzzFeed News. Ben spends a lot of time thinking and writing about the gatekeepers who hold the power to shape our public sphere. At BuzzFeed, he capitalized on the way the rise of the internet allowed upstarts to work around the Old Gatekeepers, the legacy media organizations; now, at the Times, he’s one of them. But there are also the other New Gatekeepers: the Platforms, flailing around as much as the rest of us in trying to make sense of the role they’ve found themselves in. So what does Ben think about the current state of the media ecosystem and where it’s headed? And why, in his view, was February 26, 2015—almost exactly six years ago—the last good day on the internet? Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Arbiters of Truth, Lawfare’s miniseries on disinformation and misinformation, Quinta Jurecic sat down with Lawfare’s deputy managing editor Jacob Schulz, and Jordan Schneider, host of the ChinaTalk podcast, to talk about Substack. The newsletter service is the new cool thing in the journalism world—and, like any newly popular online service, it is already running into questions around content moderation. Jacob wrote about Substack’s content moderation policy earlier this month, and Jordan uses Substack to send out his ChinaTalk newsletter, so he filled us in on the platform’s nuts and bolts. Why is Substack so popular right now, anyway? Does it help writers step outside the unhealthy dynamics that help spread disinformation and discontent on social media, or does it just play into those dynamics further? And what might the platform’s content moderation policies leave to be desired? Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
For this episode of Arbiters of Truth, Lawfare’s miniseries on disinformation and misinformation, Kate Klonick and Quinta Jurecic spoke with Joan Donovan, the research director at the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School. Her work focuses on networked social movements, disinformation and media manipulation—so she’s the perfect person to help untangle the continued fallout not only from the January 6 Capitol riot, but from the last four years more broadly. They talked about Joan’s route from researching Occupy Wall Street to studying far-right disinformation, the importance of understanding networks of communication and coordination in studying social media, and the responses of big social platforms to the violence in the Capitol. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
During his inaugural address yesterday, President Biden spoke about the subject of this podcast: disinformation. “There is truth and there are lies,” Biden said, “lies told for power and for profit.” And he asked Americans to unify rather than “turn inward” against those “who don’t get their news from the same sources you do.” But in an era of QAnon and pandemic disinformation, how will that unification be possible? The day before the inauguration, Evelyn Douek and Quinta Jurecic spoke with Kate Starbird, an associate professor of Human Centered Design & Engineering at the University of Washington, for this first episode of Lawfare’s Arbiters of Truth miniseries under the Biden administration. Kate last came on the podcast in March 2020 to discuss disinformation and misinformation around the coronavirus, and she has had a long year since then researching online ecosystems around the pandemic and supposed voter fraud. And the Capitol riot on January 6 threw all this into sharp relief, as the things that Kate studies every day boiled over into mainstream consciousness with a vengeance. Evelyn and Quinta spoke with Kate about what led up to the riot, what the disinformation landscape looks like now and what kind of work will be required to move forward. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
Yesterday, January 13, the House of Representatives impeached President Trump a second time for encouraging the violent riot in the Capitol Building on January 6. And yet, the impeachment is probably less of a crushing blow to the president than something else that’s happened in recent days: the loss of his Twitter account. After a few very eventful weeks, Lawfare’s Arbiters of Truth series on disinformation is back. Evelyn Douek and Quinta Jurecic spoke with Jonathan Zittrain, the George Bemis Professor of International Law at Harvard Law School, about the decision by Twitter, Facebook and a whole host of other platforms to ban the president in the wake of the Capitol riot. Jonathan, Evelyn and Quinta took a step back and situated what’s happening within the broader story of internet governance. They talked about how to understand the bans in the context of the internet’s now not-so-brief history, how platforms make these decisions and, of course, Section 230 of the Communications Decency Act. Listeners might also be interested in Zittrain’s February 2020 Tanner Lecture, “Between Suffocation and Abdication: Three Eras of Governing Digital Platforms,” which touches on some of the same ideas discussed in the podcast. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Jaime Longoria, an investigative researcher at First Draft, who monitors information disorder in Latino or Latinx communities in the United States and in Latin America. In the run-up to the 2020 U.S. election, there was an explosion of press stories about mis- and disinformation in Spanish-speaking communities. But this is hardly a new phenomenon. They talked with Jaime about the long-standing and ongoing information disorder in these communities, how it is or isn’t distinctive, why it tends to go under the radar in public conversation and what can be done about it. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Claire Wardle, the co-founder and leader of the nonprofit organization First Draft and a research fellow at Harvard University’s Shorenstein Center. First Draft recently released a report on the information environment around the development of vaccines for COVID-19, and Claire talked about what she and her team found in terms of online discussion of the vaccine in English, Spanish and French. What kinds of misinformation should we be ready for as vaccines begin to be administered across the world? Why might fact-checking and labeling by platforms not be effective in countering that misinformation? And why is Claire still pessimistic about the progress that platforms and researchers have made in countering dis- and misinformation over the last four years? Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Lawfare’s Arbiters of Truth series on platforms and disinformation, Quinta Jurecic spoke with Alina Polyakova and Ambassador Daniel Fried, the former U.S. ambassador to Poland and the Weiser Family Distinguished Fellow at the Atlantic Council. The two have a new paper out on “Democratic Offense Against Disinformation,” published by the Atlantic Council and the Center for European Policy Analysis. They have written previously on how democracies can defend themselves against disinformation and misinformation from abroad, but this time, they turned their attention to what it would mean for democracies to take the initiative against foreign purveyors of disinformation, rather than just playing defense. So how effective are democracies at countering disinformation? What tools are available if they want to play offense? And is it even possible to do so without borrowing tactics from the same authoritarian regimes that democracies seek to counter? Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Lawfare’s Arbiters of Truth series on platforms and disinformation, Evelyn Douek and Quinta Jurecic spoke with Nick Rasmussen, the executive director of the Global Internet Forum to Counter Terrorism (also known as GIFCT). The GIFCT is an organization working to facilitate cross-industry efforts to counter the spread of terrorist and violent extremist content online. It was founded in 2017 by four platforms, but is now transitioning to a new life as an independent organization, which Nick is heading up. Online violent extremism is one of the most difficult problems of the internet age, and collaboration between companies and governments may be the only way to effectively tackle it. But how can the GIFCT balance this with the need to respect legitimate free speech concerns? How is Nick thinking about the transparency and accountability problems that such collaboration might exacerbate? And why might the GIFCT be one of the most important institutions for the future of online free speech? Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic are bringing you a conversation with Alex Stamos, the director of the Stanford Internet Observatory. Alex was last on the show in August to talk about the newly established Election Integrity Partnership, which he helped set up to focus on detecting and mitigating disinformation around the U.S. 2020 election. Well, the election is over! So Alex is back to talk about what the partnership saw, how well the information ecosystem held up and what the landscape looks like as the dust begins to settle. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke to Marietje Schaake about how Europe is not necessarily waiting for America to get its act together and is moving ahead with tech regulation. Marietje served as a member of the European Parliament for 10 years for the Dutch liberal democratic party and is now the international policy director at Stanford University’s Cyber Policy Center and international policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence. They spoke about what’s happening in Europe in the tech space, what distance there may be between European and American ideas about regulation of tech platforms, and whether that distance is bridgeable—especially under a Biden administration. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Casey Newton, veteran Silicon Valley editor for The Verge who recently went independent to start a newsletter on Substack called Platformer. Few people have followed the stories of platforms and content moderation in recent years as closely and carefully as Casey, so Evelyn and Quinta asked him about what’s changed in the last four years—especially in the lead-up to the election. They also spoke about the challenges of reporting on the tech industry and whether the increased willingness of platforms to moderate content means that the name of this podcast series will have to change. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Lawfare’s Arbiters of Truth series on disinformation, Alina Polyakova and Quinta Jurecic spoke with Janine Zacharia, the Carlos Kelly McClatchy Lecturer in Stanford’s Department of Communication, and Andrew Grotto, director of the Program on Geopolitics, Technology and Governance and the William J. Perry International Security Fellow at Stanford’s Cyber Policy Center. In 2016, a key part of the Russian influence campaign involved the hacking and leaking of emails belonging to the Democratic Party and Clinton campaign chairman John Podesta. Journalists at mainstream news outlets rushed to write up the emails without giving adequate context to how they had been obtained. So how can the press avoid a similar disaster in 2020? Zacharia and Grotto teamed up in recent months to write a playbook for reporters facing the dilemma of writing about hacked material or disinformation without participating in a disinformation campaign. (They’ve also written an article on the subject for Lawfare.) They spoke with Alina and Quinta about their recommendations for reporters, what the American press might be able to learn from colleagues abroad and how to assess the mainstream media’s response to the New York Post’s bizarre reporting on Hunter Biden. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek spoke with Maria Ressa, a Filipino-American journalist and co-founder of Rappler, an online news site based in Manila. Maria was included in Time’s Person of the Year in 2018 for her work combating fake news, and is currently fighting a conviction for “cyberlibel” in the Philippines for her role at Rappler. Maria and her fight are the subject of the film “A Thousand Cuts,” released in virtual cinemas this summer and set to be broadcast on PBS Frontline early next year. As a country where Facebook is the internet, the Philippines was in a lot of ways ground zero for many of the same dynamics and exploitations of social media that are currently playing out around the world. What is the warning we need to take from Maria’s experience and the experience of Philippine democracy? Why is the global south both the beta test and an afterthought for companies like Facebook? And how is it possible that Maria is still, somehow, optimistic? Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Yochai Benkler, a professor at Harvard Law School and co-director of the Berkman Klein Center for Internet and Society. With only weeks until Election Day in the United States, there’s a lot of mis- and disinformation flying around on the subject of mail-in ballots. Discussions about addressing that disinformation often focus on platforms like Facebook or Twitter. But a new study by the Berkman Klein Center suggests that social media isn’t the most important part of mail-in ballot disinformation campaigns—rather, traditional mass media like news outlets and cable news are the main vector by which the Republican Party and the president have spread these ideas. So what’s the research behind this counterintuitive finding? And what are the implications for how we think about disinformation and the media ecosystem? Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic talked about how everything is on fire—not metaphorically, but literally. In recent months, wildfires in the American West have caused unprecedented devastation and forced thousands of people to evacuate their homes. And along with the fires, the West has been grappling with a surge of false material circulating online about the flames. But this isn’t the first time wildfires and disinformation have gone together. This past December and January, Australia was hit with both a brutal bushfire season and a similar wave of disinformation and misinformation about what sparked the fires and the role of climate change. Evelyn and Quinta spoke about the offline and online conflagrations on both sides of the Pacific with Charlie Warzel of the New York Times and Cam Wilson, a reporter for Gizmodo Australia and Business Insider Australia. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Lawfare’s Arbiters of Truth miniseries on disinformation, Evelyn Douek and Quinta Jurecic spoke to Nina Jankowicz, a disinformation fellow at the Wilson Center, about her new book: “How to Lose the Information War: Russia, Fake News, and the Future of Conflict.” The book chronicles Nina’s journey around Europe, tracing how information operations spearheaded by Russia have played out in countries in the former Soviet bloc, from Georgia to the Czech Republic. What do these case studies reveal about disinformation and how best to counter it—and how many of these lessons can be extrapolated to the United States? How should we understand the role of locals who get swept up in information operations, like the Americans who attended rallies in 2016 that were organized by a Russian troll farm? And what is an information war, anyway? Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Lawfare’s Arbiters of Truth series on disinformation, Alina Polyakova and Quinta Jurecic spoke with Ben Nimmo, the director of investigations at Graphika. Ben has come on the podcast before to discuss how he researches and identifies information operations, but this time, he talked about one specific information operation: a campaign linked to the Internet Research Agency “troll farm.” Yes, that’s the same Russian organization that Special Counsel Robert Mueller pinpointed as responsible for Russian efforts to interfere in the 2016 election on social media. They’re still at it, and Graphika has just put out a report on an IRA-linked campaign that amplified content from a fake website designed to look like a left-wing news source. Ben, Alina and Quinta discussed what Graphika found, how the IRA’s tactics have changed since 2016 and whether the discovery of the network might represent the rarest of things on the disinformation beat—a good news story. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Alissa Starzak, the head of public policy at Cloudflare—a company that provides key components of the infrastructure that helps websites stay online. They talked about two high-profile incidents in which Cloudflare decided to pull its services from websites publishing or hosting extremist, violent content. In August 2017, after the white nationalist rally in Charlottesville, Cloudflare’s CEO Matthew Prince announced that he would no longer be providing service to the neo-Nazi website the Daily Stormer. Two years later, Cloudflare also pulled service from the forum 8chan after the forum was linked to a string of violent attacks. They talked about what Cloudflare actually does and why blocking a website from using its services has such a big effect. They also discussed how Cloudflare—which isn’t a social media platform like Facebook or Twitter—thinks about its role in deciding what content should and shouldn’t stay up. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Emma Llansó, the director of the Free Expression Project at the Center for Democracy and Technology (CDT). They discussed the Global Internet Forum to Counter Terrorism, or GIFCT, a consortium that houses a shared database of content hashes that platforms use to remove terrorism-related material. Emma makes the case for why it’s worth paying attention to—and why she finds it concerning. They also talked about CDT’s lawsuit against President Trump over his recent executive order aiming to constrain platforms’ leeway to moderate content, which the CDT is arguing violates the First Amendment. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Alex Stamos, the director of the Stanford Internet Observatory and former chief security officer of Yahoo and Facebook. Alex has appeared on the podcast before, but this time, they discussed a new coalition he helped set up called the Election Integrity Partnership—a coalition focused on detecting and mitigating attempts to limit voting or delegitimize election results. Disinformation and misinformation around the U.S. presidential election have already started popping up online, and they’re only going to increase as November draws closer. The coalition aims to counter this in real time. So how will it actually work? They also asked Alex for his hot takes on TikTok—the popular video sharing platform facing pressure over concern about influence from the Chinese government. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on our Arbiters of Truth series on disinformation, Alina Polyakova and Quinta Jurecic spoke with Shane Huntley, the director of Google’s Threat Analysis Group—a team that leads Google’s efforts to track threats from nation states and hacker groups. If you’ve ever received a notification from Google that a state-sponsored actor is trying to access your email account, you’ve heard from the Threat Analysis Group. The group examines everything from attempts to steal cryptocurrency to what Google calls “coordinated influence campaigns.” Recently, the Threat Analysis Group has begun putting out blog posts with updates on their work against coordinated influence campaigns. Alina and Quinta asked Shane about his “bulletin” for the first quarter of 2020, but since they spoke, Google has published another post for the second quarter—detailing actions against campaigns from Iran, Russia and China. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on our Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Ben Collins and Brandy Zadrozny, reporters at NBC News. Writing at NBCNews.com, they report on disinformation and misinformation in health and politics. Their work covers a lot of ground, but for this episode, they discussed one increasingly prominent issue on that beat: QAnon, a conspiracy theory built around anonymous posts on an internet forum claiming that Donald Trump is waging war against a deep state and a vast network of child sex traffickers. The conspiracy theory has inspired acts of violence and is becoming increasingly mainstream, with several QAnon believers running as candidates for U.S. Congress. They talked about how QAnon started, why we need to take it seriously and how the internet—and big technology platforms—have allowed the theory to spread. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Lawfare’s Arbiters of Truth series on disinformation, Kate Klonick and Quinta Jurecic spoke with Jillian C. York, the director for International Freedom of Expression at the Electronic Frontier Foundation. She’s been an activist working on issues of internet freedom and free expression for many years, which gives her a unique perspective on debates over disinformation and platform governance. Jillian and Kate discussed Facebook’s Oversight Board—the entity designed to provide accountability for the platform’s content moderation decisions—whose development they have watched closely, and about which Kate has written a recent article. They also discussed why Jillian thinks content moderation is broken, what technology companies could do better and how discussions of platform governance tend to focus on the United States to the exclusion of much of the rest of the world. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Hany Farid, a professor at the University of California, Berkeley, whose work focuses on analyzing and identifying altered photos and videos—what’s known as digital image forensics. Recently, he has done work on deep fakes—realistic synthetic media in which a person’s likeness is altered to show them doing or saying something they never did or said. He’s also helped develop technology used by platforms to identify and remove material related to child sexual abuse. They talked about how dangerous deep fakes really are, how much of that danger is the technology itself and how much of it has to do with how big platforms amplify incendiary content, and whether platforms should moderate disinformation and misinformation in the same aggressive way they take down sexually abusive material. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on our Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Jane Lytvynenko, a senior reporter at BuzzFeed News who focuses on disinformation. If you use Twitter regularly and have looked at the platform during any major media event—disasters, protests, you name it—you’ve likely seen her enormous tweet threads debunking hoaxes and misinformation. Recently, she’s turned her debunking skills toward misinformation and disinformation around the coronavirus pandemic, reporting on the various “fake experts” peddling misleading stories about the virus and the long half-life of the conspiratorial “Plandemic” video. She’s also written on the rise of “disinformation for hire”—PR firms that turn to disinformation as a marketing tool. So what is it like to report on disinformation and misinformation in real time? How can journalists help readers understand and spot bad information? And is there any cause for optimism? Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
In this episode of our Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Brandi Collins-Dexter, the senior campaign director at the advocacy organization Color of Change and a visiting fellow at the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School of Government. She recently published a report with the Shorenstein Center, “Canaries in the Coal Mine: COVID-19 Misinformation and Black Communities,” tracing how different false narratives about the pandemic surfaced among Black social media users in the United States. So what makes this misinformation unique and especially dangerous? And how should the responses of technology companies account for the ways the Black community is particularly vulnerable to this kind of misinformation? They also discussed Color of Change’s role in the #StopHateForProfit campaign, an ad boycott of Facebook in protest of the company’s handling of potentially harmful speech on its platform. The day after this podcast was recorded, Color of Change and other activists met with Facebook to discuss the campaign, but they walked away feeling that nothing much had changed. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Darius Kazemi, an internet artist and bot-maker extraordinaire. Recently, there have been a lot of ominous headlines about bots—including an NPR article stating that nearly 50 percent of all Twitter commentary about the pandemic has been driven by bots rather than human users. That sounds bad—but Darius thinks that we shouldn’t be so worried about bots. In fact, he argues, a great deal of reporting and research on bots is wrong and actually causes harm by drumming up needless worry and limiting online conversations. So, what is a bot, anyway? Do they unfairly take the blame for the state of things online? And if weeding out bot activity isn’t a simple way to cultivate healthier online spaces, what other options are there for building a less unpleasant internet? Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
In this episode of Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Whitney Phillips and Ryan Milner, authors of the new book, “You Are Here: A Field Guide for Navigating Polarized Speech, Conspiracy Theories, and Our Polluted Media Landscape.” Phillips is an assistant professor in Communications and Rhetorical Studies at Syracuse University, and Milner is an associate professor of Communication at the College of Charleston. In “You Are Here,” they look at the uniquely disorienting aspects of the current online information environment and how that disorientation is exacerbated by aspects of “internet culture” that don’t make sense from the outside. They discussed the challenges for journalists in understanding and reporting on that culture, how reporting that misses that context can fuel information pollution, how the internet got to this point where everything is so polluted, and, of course, what QAnon has to do with it. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Lawfare’s Arbiters of Truth series on disinformation, Alina Polyakova and Quinta Jurecic spoke with Laura Rosenberger, the director of the Alliance for Securing Democracy and a senior fellow at The German Marshall Fund of the United States. When it comes to information operations, most Americans probably think of Russia as the primary culprit. After all, the memory of Russia’s interference in the 2016 presidential election is still fresh. But over the past year, Chinese information operations have gained prominence, with the Chinese Communist Party waging aggressive online campaigns around the unrest in Hong Kong and the ongoing pandemic. They talked about how the Chinese government wields information online, how Chinese tactics differ from Russian tactics in the information space and how democracies should respond. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Lawfare’s Arbiters of Truth series on disinformation, Alina Polyakova and Quinta Jurecic spoke with Eileen Donahoe, the executive director of the Global Digital Policy Incubator at Stanford University. There is no shortage of controversies roiling right now over free expression and the future of the internet—from platforms aggressively removing misinformation about the ongoing pandemic, to President Trump’s executive order targeting Section 230 of the Communications Decency Act. Eileen, Quinta and Alina took a step back and reviewed the landscape of online speech as a whole to get a more holistic sense of what things look like right now and where platforms and governments might be headed when it comes to regulating speech. They talked about the various debates over content moderation taking place within the United States and around the world, and Eileen made the case for why international human rights law should be used as the framework for both protecting and moderating online speech. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
In this episode of Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Ryan Merkley, the chief of staff to the executive director of the Wikimedia Foundation. We’ve spent a lot of time on this podcast discussing how social media platforms have handled issues of disinformation and misinformation. But what about Wikipedia? It’s a massive online encyclopedia written and edited entirely by volunteers—so, not a platform, but still an online service grappling with a wave of untruths in an uncertain time. Ryan, Evelyn and Quinta talked about Wikipedia’s unique structure, how the site has managed to become a reliable resource on an often untrustworthy internet, and how readers, writers and editors of Wikipedia are navigating the need for information amidst both the pandemic and ongoing protests over police abuse of Black Americans. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
In this episode of Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Gabrielle Lim, a researcher with the Technology and Social Change Research Project at Harvard Kennedy School’s Shorenstein Center and a fellow with Citizen Lab. Lim just released a new report with Data & Society on the fascinating story of a Malaysian law ostensibly aimed at stamping out disinformation. The Anti-Fake News Act, passed in 2018, criminalized the creation and dissemination of what the Malaysian government referred to as “fake news.” After a new government came to power following the country’s 2018 elections, the law was quickly repealed. But the story of how Malaysia’s ruling party passed the act, and how Malaysian civil society pushed back against it, is a useful case study in how illiberal governments can use the language of countering disinformation to clamp down on free expression, and how the way democratic governments talk about disinformation has global effects. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Deen Freelon, an associate professor at the University of North Carolina Hussman School of Journalism and Media. Deen’s work focuses on data science and political expression on social media, and they discussed research he conducted on tweets from the Internet Research Agency troll farm and its attempts to influence U.S. politics, including around the 2016 election. In a recent article, Deen and his coauthors found that IRA tweets from accounts presenting themselves as Black Americans received particularly high engagement from other users on Twitter—which raises interesting questions about the interaction of race and disinformation. They also talked about what the data show on whether the IRA actually succeeded in changing political beliefs and just how many reporters quoted IRA trolls in their news reports without realizing it. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this week’s episode of Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek spoke with Craig Silverman, the media editor for BuzzFeed News and one of the leading journalists covering the disinformation beat. Craig is credited with coining the phrase “fake news.” Evelyn spoke with him about how he feels about that, especially now that the phrase has taken on a life of its own. They also talked about a book Craig edited, the second edition of the “Verification Handbook,” available online now, which equips journalists with the tools they need to verify the things they see online. Journalism and reporting on disinformation have never been so important—but the internet has never been so chaotic, and journalists are not only observers of disinformation, but also targets of it. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
For this week’s episode of our Arbiters of Truth series on disinformation, Evelyn Douek and Alina Polyakova talked to Aric Toler of Bellingcat, a collective that has quickly become the gold standard for open source and social media investigations. Aric recently published a blog post in response to a New York Times article on Russian influence campaigns—one retweeted by former President Barack Obama, no less—that Aric called “How Not to Report on Disinformation.” Evelyn and Alina asked him about the article and what exactly Aric thought was wrong with it, as a case study in the challenges for reporters writing about disinformation operations. When are reporters helping to uncover threats to democracy, and when are they giving oxygen to fringe actors? Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
In this episode of Lawfare’s Arbiters of Truth miniseries on disinformation, Quinta Jurecic and Alina Polyakova spoke with Thomas Rid about his new book, “Active Measures: The Secret History of Disinformation and Political Warfare.” Yesterday’s episode of the Lawfare Podcast featured a conversation between Thomas and Jack Goldsmith about the book, focusing on the early history of disinformation through the 1980s. In this episode, Alina and Quinta follow up with a discussion with Thomas on disinformation in the digital age, along with some questions about what it’s like to interview former KGB and Stasi officials about their influence campaigns. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Lawfare’s Arbiters of Truth series on disinformation, Kate Klonick and Quinta Jurecic spoke with Charlie Warzel, an opinion writer at large at the New York Times. He’s written about the internet, disinformation, privacy and platform governance—and recently he’s been focusing on how these collide with COVID-19 and the uncertainty and anxiety of living through a pandemic. They talked about what the pandemic shows us about the role of big tech companies and how the spread of a deadly disease in the midst of a polarized information environment may be a worst-case scenario for disinformation. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Camille François, the chief innovation officer at Graphika, where she works to identify and mitigate disinformation and misinformation online. On April 15, Graphika released a report on an Iranian influence operation focused on COVID-19, an operation blaming the United States for supposedly creating the virus and praising China’s response to the pandemic. Camille discussed what Graphika found and how this campaign compares to similar operations in the past—like another campaign from Ghana that Graphika helped uncover, which was linked to Russia and posted content aimed at Black Americans. And they discussed the “ABC framework” that Camille has developed to understand disinformation campaigns. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Lawfare’s Arbiters of Truth series on disinformation, Quinta Jurecic spoke with Alina Polyakova and Kate Klonick, who both have expertise that can clarify our confusing current moment. Alina has been running a great series of virtual events at the Center for European Policy Analysis on disinformation and geopolitics during COVID-19. And Kate’s research on platform governance helps shed light on the aggressive role some tech platforms have been playing in moderating content online during the pandemic. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of the Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Nate Persily, the James B. McClatchy Professor of Law at Stanford Law School. Nate is a member of the Kofi Annan Commission on Democracy and Elections in the Digital Age, which recently released a report on election integrity and the internet, for which he provided a framing paper. Alongside his work on internet governance, he is also an expert on election law and administration. They spoke about the commission report and the challenges the internet may pose for democracy, the extent to which the pandemic has flipped those challenges on their head, and, of course, the 2020 presidential election. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Baybars Örsek, the director of the International Fact-Checking Network at the Poynter Institute. Fact-checking has become newly prominent in recent years, as fact-checkers work to counter surges of online disinformation and misinformation. And it’s more important than ever right now, in the middle of a pandemic, when incorrect information circulating online has immediate consequences for people’s health. Baybars has been on the front lines of that effort. Quinta and Evelyn spoke with him about the IFCN’s “Fact-Checkers’ Code of Principles,” Facebook’s partnership with fact-checkers for content shared on its platforms, and why fact-checking is so important right now. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Kate Starbird, an associate professor of Human Centered Design & Engineering at the University of Washington. She’s long done fascinating research on online disinformation and misinformation—and she’s an expert in what’s called crisis informatics, the study of how information flows during crisis events. For this conversation, they focused on one crisis in particular: COVID-19. They talked about the possibilities and dangers of social media and the internet in times of crisis, how communities make sense of disaster, and the anxiety of living in the world right now. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Joshua R. Fattal about a fascinating law review article he’s written: “FARA on Facebook: Modernizing the Foreign Agents Registration Act to Address Propagandists on Social Media.” The Foreign Agents Registration Act, known as FARA, is an American law that requires lobbyists for foreign entities to register with the Justice Department. It made headlines when Special Counsel Robert Mueller claimed that Russians spreading social media disinformation around the 2016 election had failed to register under the law. Josh argues that Mueller’s indictments represent an innovative use of FARA—and he suggests that the law could offer a mechanism for the U.S. government to address disinformation campaigns. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Bridget Barrett and Daniel Kreiss of the UNC Hussman School of Journalism and Media and UNC’s Center for Information, Technology, and Public Life. In all the controversy around social media platforms at the moment, perhaps nothing is taking up as much oxygen as their policies around political ads. But it’s difficult to discuss this topic without a detailed understanding of what the platforms are actually doing. That’s where Bridget and Daniel come in. They’ve worked to provide a comprehensive account of the different policies in this space, how those policies interact, and how they’re changing—or not—the way we interact with politics. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Lawfare’s Arbiters of Truth series on disinformation, Alina Polyakova and Quinta Jurecic spoke with Lisa Kaplan and Sophie Lawton of Alethea Group, an organization that works to detect and mitigate disinformation on social media. Lisa recently published a piece on Lawfare about a massive network of companies run by TheSoul Publishing—founded in Russia by a company called AdMe. The companies publish bizarre craft videos on YouTube and Facebook, along with a handful of videos about history and politics with an overtly pro-Russian slant. So what is actually going on here? They talked about what red flags Lisa and Sophie look for in hunting down disinformation, their experiences tackling disinformation while working for Senator Angus King’s reelection campaign in 2018, and how political campaigns need to tackle online influence efforts in 2020. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
This week on Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Brendan Nyhan, a professor of political science at Dartmouth College. We talk a lot about the crisis of falsehoods circulating online, but Nyhan’s work focuses on empirical research into what the effects of disinformation and misinformation actually are. And he’s found that those effects may matter less in political discourse than you’d think—or at least not quite in the way you might think. They talked about the fake news about fake news and the echo chamber about echo chambers. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
In this episode of Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Kate Klonick spoke with Alex Stamos, the director of the Stanford Internet Observatory. Prior to joining Stanford, Alex served as the chief security officer at Facebook, and before that, as the chief information security officer at Yahoo. They talked about Alex’s experience at Facebook handling 2016 election interference, as well as his work on cybersecurity, disinformation, and end-to-end encryption. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
For the past several months, Australia has been struck by massive bushfires unlike anything in recent memory. As the country has grappled with the spread of these unprecedented blazes, it has also grappled with the spread of falsehoods about what caused them. This week on Lawfare’s Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Elise Thomas, a journalist and researcher at the Australian Strategic Policy Institute’s International Cyber Policy Center. Elise has been tracking misinformation and disinformation around the blazes—from the suggestion by the right-wing Australian press that arson, not climate change, is to blame for the fires, to online conspiracy theories imported from the United States. They talked not only about the fires, but also about the global nature of the fight against mis- and disinformation online and why we need to be cautious about focusing too much on bots in waging that fight. Elise was calling in from Canberra, and unfortunately we had some audio glitches, but it’s too great a conversation to miss. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
For this episode of Lawfare’s Arbiters of Truth series on disinformation, Alina Polyakova and Quinta Jurecic spoke with Renee DiResta, the technical research manager at the Stanford Internet Observatory. Renee has done fascinating work on how technology platforms and algorithms interact with false and misleading narratives, ranging from misleading information on health issues to propaganda pushed by the Islamic State and the Russian government. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
On this episode of the Arbiters of Truth series, Evelyn Douek and Quinta Jurecic spoke with law professors Bobby Chesney and Danielle Citron about deep fakes—that is, artificial audio and video that can be used to depict a person doing or saying something that they never did or said. They talked about the paper that Bobby and Danielle wrote in 2018 about how deep fakes pose a looming challenge for privacy, democracy, and national security. And with doctored videos of Speaker of the House Nancy Pelosi and presidential candidate Joe Biden recently in circulation, they talked about how the issue hasn't gone away, as well as the distinction between deep fakes and other, less sophisticated forms of editing. Hosted on Acast. See acast.com/privacy for more information.
Feb 3, 2022
We talk a lot on this show about the responsibility of major tech platforms when it comes to content moderation. But what about problems the platforms can’t—or won’t—fix? Tracy Chou’s solution involves going around platforms entirely and creating tools that give users back the power to control their own experience. She’s the engineer behind Block Party, an app that allows Twitter users to protect themselves against online harassment and abuse. It’s a fine-tuned solution to a problem that a lot of Twitter users struggle with, especially women and particularly women of color. This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Tracy about her work developing Block Party and how the persistent lack of diversity in Silicon Valley contributes to an environment where users have little protection against harassment. They also talked about what it’s like working with the platforms that Block Party and other apps like it are seeking to improve. And they discussed which content moderation problems these kinds of user-driven tools might help solve—and which they won’t. Hosted on Acast. See acast.com/privacy for more information.
Jan 27, 2022
As we’ve discussed on the show, online advertisements are the shifting, unstable sand on which the contemporary internet is built. And one of the many, many ways in which the online ad ecosystem is confusing and opaque involves how advertisers can find their ads popping up alongside content they’d rather not be associated with—and, all too often, have no idea how that happened. This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke to Nandini Jammi and Claire Atkin of the Check My Ads Institute. Their goal is to serve as a watchdog for the ad industry, and they’ve just started a campaign to let companies know—and call them out—when their ads are showing up next to content published by far-right figures, like Steve Bannon, who supported the Jan. 6 insurrection. So what is it about the ads industry that makes things so opaque, even for the companies paying to have their ads appear online? What techniques do Claire and Nandini use to trace ad distribution? And how do advertisers usually respond when Check My Ads alerts them that they’re funding “brand unsafe” content? Hosted on Acast. See acast.com/privacy for more information.
Jan 20, 2022
In December 2020, ten state attorneys general sued Google, alleging that the tech giant had created an illegal monopoly over online advertising. The lawsuit is ongoing, and just this January, new allegations in the states’ complaint were unsealed: the states have accused Google of tinkering with its ad auctions to mislead publishers and advertisers and expand its own power in the marketplace. (Google told the Wall Street Journal that the complaint was “full of inaccuracies and lacks legal merit.”) The complaint touches on a crucial debate about the online advertising industry: does it, well, work? This week on Arbiters of Truth, our series on the online information ecosystem, Evelyn Douek and Quinta Jurecic spoke with Tim Hwang, Substack’s general counsel and the author of the book “Subprime Attention Crisis: Advertising and the Time Bomb at the Heart of the Internet.” Tim argues that online advertising, which underpins the structure of the internet as we know it today, is a house of cards—that advertisers aren’t nearly as good as they claim at monetizing our attention, even as they keep marketing it anyway. So how worried should we be about this structure collapsing? If ads can’t convince us to buy things, what does that mean for our understanding of the internet? And what other possibilities are there for designing a better online space? Hosted on Acast. See acast.com/privacy for more information.