Player FM - Internet Radio Done Right
Checked 5d ago
Added sixteen weeks ago
Content is provided by Alix Dunn. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Alix Dunn or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://hu.player.fm/legal.
Computer Says Maybe
Technology is changing fast. And it's changing our world even faster. Host Alix Dunn interviews visionaries, researchers, and technologists working in the public interest to help you keep up. Step outside the hype and explore the possibilities, problems, and politics of technology. We publish weekly.
38 episodes
All episodes
Are you tired of hearing the phrase ‘AI Safety’ and rolling your eyes? Do you also sometimes think… okay but what is technically wrong with advocating for ‘safer’ AI systems? Do you also wish we could have more nuanced conversations about China and AI? In this episode Shazeda Ahmed goes deep on the field of AI Safety, explaining that it is a community that is propped up by its own spiral of reproduced urgency; and that so much of it is rooted in American anti-China sentiment. Read: the fear that the big scary authoritarian country will build AGI before the US does, and destroy us all. Further reading & resources: Emotional Entanglement — Article 19 Bodily Harms by Xiaowei Wang and Shazeda Ahmed for Access Now Field-building and the epistemic culture of AI safety — First Monday Made in China journal Pause AI **Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!** Shazeda Ahmed is a Chancellor’s Postdoctoral fellow at the University of California, Los Angeles. Shazeda completed her Ph.D. at UC Berkeley’s School of Information in 2022, and was previously a postdoctoral research fellow at Princeton University’s Center for Information Technology Policy. She has been a research fellow at Upturn, the Mercator Institute for China Studies, the University of Toronto's Citizen Lab, Stanford University’s Human-Centered Artificial Intelligence (HAI) Institute, and NYU's AI Now Institute. Shazeda’s research investigates relationships between the state, the firm, and society in the US-China geopolitical rivalry over AI, with implications for information technology policy and human rights. Her work draws from science and technology studies, ranging from her dissertation on the state-firm co-production of China’s social credit system, to her research on the epistemic culture of the emerging field of AI safety.…
Kapow! We just did our first ever LIVE SHOW. We barely had time to let the mics cool down before a bunch of you requested to have the recording on our pod feed so here we are. ICYMI : this is a recording from the live show that we did in Paris, right after the AI Action Summit. Alix sat down to have a candid conversation about the summit, and pontificate on what people might have meant when they kept saying ‘public interest AI’ over and over. She was joined by four of the best women in AI politics: Astha Kapoor , Co-Founder for the Aapti Institute Amba Kak , Executive Director of the AI Now Institute Abeba Birhane , Founder & Principal Investigator of the Artificial Intelligence Accountability Lab (AIAL) Nabiha Syed , Executive Director of Mozilla If audio is not enough for you, go ahead and watch the show on YouTube **Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!** *Astha Kapoor is the Co-founder of Aapti Institute, a Bangalore based research firm that works on the intersection of technology and society. She has 15 years of public policy and strategy consulting experience, with a focus on use of technology for welfare. Astha works on participative governance of data, and digital public infrastructure. She’s a member of World Economic Forum Global Future Council on data equity (2023-24), visiting fellow at the Ostrom Workshop (Indiana University). She was also a member of the Think20 taskforce on digital public infrastructure during India and Brazil's G20 presidency and is currently on the board of Global Partnership for Sustainable Data.* *Amba Kak has spent the last fifteen years designing and advocating for technology policy in the public interest, across government, industry, and civil society roles – and in many parts of the world. 
Amba brings this experience to her current role co-directing AI Now, a New York-based research institute where she leads on advancing diagnosis and actionable policy to tackle concerns with artificial intelligence and concentrated power. She has served as Senior Advisor on AI to the Federal Trade Commission and was recognized as one of TIME’s 100 Most Influential People in AI in 2024.* *Dr. Abeba Birhane founded and leads the TCD AI Accountability Lab (AIAL). Dr Birhane is currently a Research Fellow at the School of Computer Science and Statistics in Trinity College Dublin. Her research focuses on AI accountability, with a particular focus on audits of AI models and training datasets – work for which she was featured in Wired UK and TIME on the TIME100 Most Influential People in AI list in 2023. Dr. Birhane also served on the United Nations Secretary-General’s AI Advisory Body and currently serves at the AI Advisory Council in Ireland.* *Nabiha Syed is the Executive Director of the Mozilla Foundation, the global nonprofit that does everything from championing trustworthy AI to advocating for a more open, equitable internet. Prior to joining Mozilla, she was CEO of The Markup, an award-winning journalism non-profit that challenges technology to serve the public good. Before launching The Markup in 2020, Nabiha spent a decade as an acclaimed media lawyer focused on the intersection of frontier technology and newsgathering, including advising on publication issues with the Snowden revelations and the Steele Dossier, access litigation around police disciplinary records, and privacy and free speech issues globally. In 2023, Nabiha was awarded the NAACP/Archewell Digital Civil Rights Award for her work.*…
Defying Datafication w/ Dr Abeba Birhane (PLUS: Paris AI Action Summit) 1:03:46
The Paris AI Action Summit is just around the corner! If you’re not going to be there, and you wish you were — we got you. We are streaming next week’s podcast LIVE from Paris on YouTube — register here 🎙️ On Tuesday, February 11th , at 6:30pm Paris time / 12:30pm EST , we’ll be recording our first-ever LIVE podcast episode . After two days at the French AI Action Summit, Alix will sit down with four of the best women in AI politics to break down the power and politics of the Summit. It’s our Paris Post-Mortem — and we’re live-streaming the whole conversation. We’ll hear from: Astha Kapoor , Co-Founder for the Aapti Institute Amba Kak , Executive Director of the AI Now Institute Abeba Birhane , Founder & Principal Investigator of the Artificial Intelligence Accountability Lab (AIAL) Nabiha Syed , Executive Director of Mozilla This is our first-ever live-streamed podcast , and we’d love a great community turnout. Join the stream on Tuesday and share it with anyone else who wants a hot-off-the-press review of what happens in Paris. And, today’s episode is abundant with treats to prime you for the summit: Alix checks in with Martin Tisne who is the special envoy to the Public Interest AI track to ask him about how he feels about the upcoming summit, and what he hopes it will achieve. We also hear from Michelle Thorne , of Green Web Foundation about a joint statement on the environmental impacts of AI she’s hoping can focus the energy of the summit towards planetary limits and decarbonisation of AI. Learn about why and how she put this together and how she’s hoping to start reasonable conversations about how AI is a complete and utter energy vampire. Then we have Dr. Abeba Birhane — who will also be at our live show next week — to share her experiences launching the AI Accountability Lab at Trinity College in Dublin. Abeba’s work pushes to actually research AI systems before we make claims about them. In a world of industry marketing spin, Abeba is a voice of reason.
As a cognitive scientist who studies people she also cautions against the impossible and tantalising idea that we can somehow datafy human complexity. Further Reading & Resources: **AI auditing: The Broken Bus on the Road to AI Accountability ** by Abeba Birhane , Ryan Steed , Victor Ojewale , Briana Vecchione , Inioluwa Deborah Raji AI Accountability Lab Press release outlining the Lab’s launch last year — Trinity College The Artificial Intelligence Action Summit Within Bounds: Limiting AI’s Environmental Impact — led by Michelle Thorne from the Green Web Foundation Our Youtube Channel Dr Abeba Birhane founded and leads the TCD AI Accountability Lab (AIAL) . Dr Birhane is currently a Research Fellow at the School of Computer Science and Statistics in Trinity College Dublin. Her research focuses on AI accountability, with a particular focus on audits of AI models and training datasets – work for which she was featured in Wired UK and TIME on the TIME100 Most Influential People in AI list in 2023. Dr. Birhane also served on the United Nations Secretary-General’s AI Advisory Body and currently serves at the AI Advisory Council in Ireland. Martin Tisné is Thematic Envoy to the AI Action Summit, in charge of all deliverables related to Public Interest AI. He also leads the AI Collaborative, an initiative of The Omidyar Group created to help regulate artificial intelligence based on democratic values and principles and ensure the public has a voice in that regulation. He founded the Open Government Partnership (OGP) alongside the Obama White House and helped OGP grow to a 70+ country initiative. He also initiated the International Open Data Charter, the G7 Open Data Charter, and the G20’s commitment to open data principles. Michelle Thorne (@thornet) is working towards a fossil-free internet as the Director of Strategy at the Green Web Foundation . 
She’s a co-initiator of the Green Screen Coalition for digital rights and climate justice and a visiting professor at Northumbria University. Michelle publishes Branch , an online magazine written by and for people who dream about a sustainable internet, which received the Ars Electronica Award for Digital Humanities in 2021.…
This week Alix continues her conversation with Hanna McCloskey and Rubie Clarke from Fearless Futures and we take a whistle-stop tour of the past 5 years. We start in 2020 with the disingenuous but huge embrace of DEI work by tech companies, to 2025 when those same companies are part of massive movements actively campaigning against it. The pair share what it was like running a DEI consultancy in the months and years following the murder of George Floyd — when DEI was suddenly on the agenda for a lot of organisations. The performative and ineffective methods that DEI is famous for (endless canape receptions!) have also given the inevitable backlash easy pickings for mockery and vilification. The news is happening so fast, but these DEI episodes can hopefully help listeners better understand the backlash, not just to DEI, but to any attempts to correct systemic inequity in society. Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in! Further reading & resources: Fearless Futures DEI Disrupted: The Blueprint for DEI Worth Doing Combahee River Collective Rubie Eílis Clarke (she/her) is Senior Director of Consultancy, Fearless Futures. Rubie is of Jewish and Irish heritage and is based in her home town of London. As Senior Director of Consultancy at Fearless Futures, Rubie supports ambitious organisations to diagnose inequity in their ecosystems and design, implement and evaluate innovative anti-oppression solutions. Her expertise lies in critical social theory and research, policy analysis and organisational change strategy. She holds a B.A. in Sociology and Anthropology from Goldsmiths University, London and a M.A. in Global Political Economy from the University of Sussex, with a focus on social and economic policy, Race critical theory, decoloniality and intersectional feminism.
Rubie is also an expert facilitator who is skilled at leaning into nuance, complexity and discomfort with curiosity and compassion. She is passionate about facilitating collaborative learning journeys that build deep understanding of the root causes of oppression and unlock innovative and meaningful ways to disrupt and divest in service, ultimately, of collective liberation. Hanna Naima McCloskey (she/her) is Founder and CEO, Fearless Futures. Hanna is Algerian British and the Founder & CEO of Fearless Futures. Before founding Fearless Futures she worked for the UN, NGOs and the Royal Bank of Scotland, across communications, research and finance roles; and has lived, studied and worked in Israel-Palestine, Italy, USA, Sudan, Syria and the UK. She has a BA in English from the University of Cambridge and an MA in International Relations from the Johns Hopkins School of Advanced International Studies, with a specialism in Conflict Management. Hanna is passionate, compassionate and challenging as an educator and combines this with rigour and creativity in consultancy. She brings nuanced and complex ideas in incisive and engaging ways to all she supports, always with a commitment for equitable transformation. Hanna is also a qualified ABM bodyfeeding peer supporter, committed to enabling all parents to meet their body feeding goals.…
DEI is a nebulous field — if you’re not in it, it can be hard to know which tactics and methods are reasonable and effective… and which are a total waste of time. Or worse: which are actively harmful. In this two-parter Alix is joined by Hanna McCloskey and Rubie Clarke from Fearless Futures. In this episode they share what DEI is and crucially, what it isn’t. Listen to understand why unconscious bias training is a waste of time, and what meaningful anti-oppression work actually looks like — especially when attempting to embed these principles into digital products that are deployed globally. **Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!** Further reading & resources: Fearless Futures DEI Disrupted: The Blueprint for DEI Worth Doing Combahee River Collective Rubie Eílis Clarke (she/her) is Senior Director of Consultancy, Fearless Futures. Rubie is of Jewish and Irish heritage and is based in her home town of London. As Senior Director of Consultancy at Fearless Futures, Rubie supports ambitious organisations to diagnose inequity in their ecosystems and design, implement and evaluate innovative anti-oppression solutions. Her expertise lies in critical social theory and research, policy analysis and organisational change strategy. She holds a B.A. in Sociology and Anthropology from Goldsmiths University, London and a M.A. in Global Political Economy from the University of Sussex, with a focus on social and economic policy, Race critical theory, decoloniality and intersectional feminism. Rubie is also an expert facilitator who is skilled at leaning into nuance, complexity and discomfort with curiosity and compassion. She is passionate about facilitating collaborative learning journeys that build deep understanding of the root causes of oppression and unlock innovative and meaningful ways to disrupt and divest in service, ultimately, of collective liberation. 
Hanna Naima McCloskey (she/her) is Founder and CEO, Fearless Futures. Hanna is Algerian British and the Founder & CEO of Fearless Futures. Before founding Fearless Futures she worked for the UN, NGOs and the Royal Bank of Scotland, across communications, research and finance roles; and has lived, studied and worked in Israel-Palestine, Italy, USA, Sudan, Syria and the UK. She has a BA in English from the University of Cambridge and an MA in International Relations from the Johns Hopkins School of Advanced International Studies, with a specialism in Conflict Management. Hanna is passionate, compassionate and challenging as an educator and combines this with rigour and creativity in consultancy. She brings nuanced and complex ideas in incisive and engaging ways to all she supports, always with a commitment for equitable transformation. Hanna is also a qualified ABM bodyfeeding peer supporter, committed to enabling all parents to meet their body feeding goals.…
We have a special episode for you this week: we brought in Hanna McCloskey and Rubie Clarke from Fearless Futures to talk about the recent announcement from Mark Zuckerberg which signalled, very strongly, that he doesn’t care about marginalised groups on his platforms — or within the company itself. We hear from Rubie and Hanna in the first half of the episode — and they will be back with us over the next couple of weeks for a two-parter on DEI! The rest of the episode will feature Alex Kotran discussing the future of Education. What does the term ‘AI literacy’ evoke for you? A proficiency in AI tooling? For Alex Kotran, founder of The AI Education Project, it’s about preparing students to enter a rapidly changing workforce. It’s not about just learning how to use AI, but understanding how to build durable skills around it, and get on a career path that won’t disappear in five years. Alex has some great perspectives on how AI tools will significantly narrow career paths for young people. This is an urgent issue that spans beyond basic AI literacy. It's about preparing students for a workforce that might look very different in five years to what it does today, and thinking holistically about how issues of tech procurement and efficiency intersect with times of economic downturn, such as a recession. Further Reading: The AI Education Project The AIEDU’s AI Readiness Framework Alex Kotran, CEO of The AI Education Project (aiEDU), has nearly a decade of AI expertise and more than a decade of political experience, as a community organizer. He founded aiEDU in 2019 after he discovered that the Akron Public Schools, where his mom has taught for 30+ years, did not offer courses in AI use. Previously, as Director of AI Ethics at H5, Alex partnered with NYU Law School and the National Judicial College to create a judicial training program that is now used around the world.
He also established H5's first CSR function, incubating nonprofits like The Future Society, a leading AI governance institute.…
Welcome back! Let us know what you think of the show and what you want to see more of in 2025 by writing in here , or rambling into a microphone here . In this episode Alix is joined by Tawana Petty, who shares her experiences coming up as a political community activist in Detroit. Tawana studied the history of radical black movements under Grace Lee Boggs, and has taken these learnings into her work today. Listen to learn about how places like Detroit are used as testing grounds for new ‘innovations’ — especially within marginalised neighbourhoods. Tawana explains in detail how surveillance and safety are often mistakenly conflated, and how we have to work to unlearn this conflation. Further reading: Our Data Bodies project: https://www.odbproject.org/ James and Grace Lee Boggs Center: https://www.boggscenter.org/ The Detroit Community and Technology Project: https://detroitcommunitytech.org/ who ran the digital stewards program Detroit Digital Justice Coalition: https://alliedmedia.org/projects/detroit-digital-justice-coalition We The People of Detroit: https://www.wethepeopleofdetroit.com/ Tawana Petty is a mother, social justice organizer, poet, author, and facilitator. She is the founding Executive Director of Petty Propolis, Inc., an artist incubator which teaches poetry, policy literacy and advocacy, and interrogates negative pervasive narratives, in pursuit of racial and environmental justice. Petty is a 2023-2025 Just Tech Fellow with the Social Science Research Council, a 2024 Rockwood National LIO Alum, and she currently serves on the CS (computer science) for Detroit Steering Committee. In 2021, Petty was named one of 100 Brilliant Women in AI Ethics. In 2023, she was honored with the AI Policy Leader in Civil Society Award by the Center for AI and Digital Policy, the Ava Jo Silent Shero Award by the Michigan Roundtable for Diversity and Inclusion, and with a Racial Justice Leadership Award by the Detroit People's Platform. 
In 2024, Petty was listed on Business Insider’s AI Power List for Policy and Ethics.…
We’re wrapped for the year, and will be back on the 10th of Jan. In the meantime, listen to Alix, Prathm, and Georgia discuss their biggest learnings from the pod this year, drawn from some of their favourite episodes. **We want to hear from YOU about the podcast — what do you want to hear more of in 2025? Share your ideas with us here: https://tally.so/r/3E860B ** Or if you’d rather ramble into a microphone (just like we do…) use this link instead! We pull out clips from the following episodes: The Age of Noise w/ Eryk Salvaggio The Happy Few: Open Source AI pt1 Big Dirty Data Centres w/ Boxi Wu and Jenna Ruddock US Election Special w/ Spencer Overton Chasing Away Sidewalk Labs w/ Bianca Wylie The Human in the Loop The Stories we Tell Ourselves About AI Further reading: Learn more about what ex TikTok moderator Mojez has been up to this year via this BBC TikTok…
Google has finally been judged to be a monopoly by a federal court — while this was strikingly obvious already, what does this judgement mean? Is this too little too late? This week Alix and Prathm were joined by Michelle Meagher, an antitrust lawyer who shared a brief history of how antitrust started as a tool for governments to stop the consolidation of corporate power, and over time has morphed to focus on issues of competition and consumer protection — which has allowed monopolies to thrive. Michelle discusses the details and her thinking on the ongoing cases against Google, and more generally on how monopolies are basically like a big octopus arm-wrestling itself. Further reading: US Said to Consider a Breakup of Google to Address Search Monopoly — NY Times Google’s second antitrust suit brought by US begins, over online ads — Guardian Big Tech on Trial — Matt Stoller How the EU’s DMA is changing Big Tech — The Verge UK set to clear Microsoft’s deal to buy Call of Duty maker Activision Blizzard — Guardian Sign up to the Computer Says Maybe newsletter to get invites to our events and receive other juicy resources straight to your inbox Michelle is a competition lawyer and co-founder of the Balanced Economy Project, Europe’s first anti-monopoly organisation. She is author of Competition is Killing Us: How Big Business is Harming Our Society and Planet - and What to Do About It (Penguin, 2020), a Financial Times Best Economics Book of the Year. She is a Senior Policy Fellow at the University College London Centre for Law, Economics and Society. She is a Senior Fellow working on Monopoly and Corporate Governance at the Centre for Research on Multinational Corporations (SOMO).…
What happens if you ask a generative AI image model to show you what Picasso’s work would have looked like if he lived in Japan in the 16th century? Would it produce something totally new, or just mash together stereotypical aesthetics from Picasso’s work, and 16th century Japan? This week, Alix interviewed Eryk Salvaggio, who shares his ideas around how we are moving away from ‘the age of information’ and into an age of noise, where we’ve progressed so far into a paradigm of easy and frictionless information sharing, that information has transformed into an overwhelming wall of noise. So if everything is just noise, what do we filter out and keep in — and what systems do we use to do that? Further reading: Visit Eryk’s Website Cybernetic Forests — Eryk’s newsletter on tech and culture Our upcoming event: Insight Session: The politics, power, and responsibility of AI procurement with Bianca Wylie Our newsletter , which shares invites to events like the above, and other interesting bits Eryk Salvaggio has been making tech-critical art since the dawn of the Internet. Now he’s a blend of artist, tech policy researcher, and writer focused on a critical approach to AI. He is the Emerging Technologies Research Advisor at the Siegel Family Endowment, an instructor in Responsible AI at Elisava Barcelona School of Design, a researcher at the metaLab (at) Harvard University’s AI Pedagogy Project, one of the top contributors to Tech Policy Press, and an artist whose work has been shown at festivals including SXSW, DEFCON, and Unsound.…
In part two of our episode on open source AI, we delve deeper into how we can use openness and participation for sustainable AI governance. It’s clear that everyone agrees that things like the proliferation of harmful content is a huge risk — but what we cannot seem to agree on is how to eliminate this risk. Alix is joined again by Mark Surman , and this time they both take a closer look at the work Audrey Tang did as Taiwan’s first digital minister, where she successfully built and implemented a participatory framework that allowed the people of Taiwan to directly inform AI policy. We also hear more from Merouane Debbah, who built the first LLM trained in Arabic, and highlights the importance of developing AI systems that don’t follow rigid western benchmarks. Mark Surman has spent three decades building a better internet, from the advent of the web to the rise of artificial intelligence. As President of Mozilla, a global nonprofit backed technology company that does everything from making Firefox to advocating for a more open, equitable internet, Mark’s current focus is ensuring the various Mozilla organizations work in concert to make trustworthy AI a reality. Mark led the creation of Mozilla.ai (a commercial AI R+D lab) and Mozilla Ventures (an impact venture fund with a strong focus on AI). Before joining Mozilla, Mark spent 15 years leading organizations and projects that promoted the use of the internet and open source as tools for social and economic development. More about our guests: Audrey Tang, Cyber Ambassador of Taiwan, served as Taiwan’s 1st digital minister (2016-2024) and the world’s 1st nonbinary cabinet minister. Tang played a crucial role in shaping g0v (gov-zero), one of the most prominent civic tech movements worldwide. In 2014, Tang helped broadcast the demands of Sunflower Movement activists, and worked to resolve conflicts during a three-week occupation of Taiwan’s legislature.
Tang became a reverse mentor to the minister in charge of digital participation, before assuming the role in 2016 after the government changed hands. Tang helped develop participatory democracy platforms such as vTaiwan and Join, bringing civic innovation into the public sector through initiatives like the Presidential Hackathon and Ideathon. Sayash Kapoor is a Laurance S. Rockefeller Graduate Prize Fellow in the University Center for Human Values and a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. He is a coauthor of AI Snake Oil, a book that provides a critical analysis of artificial intelligence, separating the hype from the true advances. His research examines the societal impacts of AI, with a focus on reproducibility, transparency, and accountability in AI systems. He was included in TIME Magazine’s inaugural list of the 100 most influential people in AI. Mérouane Debbah is a researcher, educator and technology entrepreneur. He has founded several public and industrial research centers, start-ups and held executive positions in ICT companies. He is professor at Khalifa University in Abu Dhabi, and founding director of the Khalifa University 6G Research Center. He has been working at the interface of AI and telecommunication and pioneered in 2021 the development of NOOR, the first Arabic LLM. Further reading & resources Polis — a real-time participation platform Recursive Public by vTaiwan Noor — the first LLM trained on the Arabic language Falcon Foundation Buy AI Snake Oil by Sayash Kapoor and Arvind Narayanan…
In the context of AI, what do we mean when we say ‘open source’? An AI model is not something you can straightforwardly open up like a piece of software; there are huge technical and social considerations to weigh. Is it risky to open-source highly capable foundation models? What guardrails do we need to think about when it comes to the proliferation of harmful content? And can you really call it ‘open’ if the barrier to accessing compute is so high? Is model alignment really the only thing we have to protect us? In this two-parter, Alix is joined by Mozilla president Mark Surman to discuss the benefits and drawbacks of open and closed models. Our guests are Alondra Nelson, Mérouane Debbah, Audrey Tang, and Sayash Kapoor. Listen to learn about the early years of the free software movement, the ecosystem lock-in of the closed-source environment, and what kinds of things are possible with a more open approach to AI. Mark Surman has spent three decades building a better internet, from the advent of the web to the rise of artificial intelligence. As President of Mozilla, a global nonprofit-backed technology company that does everything from making Firefox to advocating for a more open, equitable internet, Mark’s current focus is ensuring the various Mozilla organizations work in concert to make trustworthy AI a reality. Mark led the creation of Mozilla.ai (a commercial AI R&D lab) and Mozilla Ventures (an impact venture fund with a strong focus on AI). Before joining Mozilla, Mark spent 15 years leading organizations and projects that promoted the use of the internet and open source as tools for social and economic development. More about our guests: Audrey Tang, Cyber Ambassador of Taiwan, served as Taiwan’s first digital minister (2016–2024) and the world’s first nonbinary cabinet minister. Tang played a crucial role in shaping g0v (gov-zero), one of the most prominent civic tech movements worldwide.
In 2014, Tang helped broadcast the demands of Sunflower Movement activists and worked to resolve conflicts during a three-week occupation of Taiwan’s legislature. Tang became a reverse mentor to the minister in charge of digital participation, before assuming the role in 2016 after the government changed hands. Tang helped develop participatory democracy platforms such as vTaiwan and Join, bringing civic innovation into the public sector through initiatives like the Presidential Hackathon and Ideathon. Alondra Nelson is a scholar of the intersections of science, technology, policy, and society, and the Harold F. Linder Professor at the Institute for Advanced Study, an independent research center in Princeton, New Jersey. Dr. Nelson was formerly deputy assistant to President Joe Biden and acting director of the White House Office of Science and Technology Policy (OSTP). In this role, she spearheaded the development of the Blueprint for an AI Bill of Rights, and was the first African American and first woman of color to lead US science and technology policy. Sayash Kapoor is a Laurance S. Rockefeller Graduate Prize Fellow in the University Center for Human Values and a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. He is a coauthor of AI Snake Oil, a book that provides a critical analysis of artificial intelligence, separating the hype from the true advances. His research examines the societal impacts of AI, with a focus on reproducibility, transparency, and accountability in AI systems. He was included in TIME Magazine’s inaugural list of the 100 most influential people in AI. Mérouane Debbah is a researcher, educator, and technology entrepreneur. He has founded several public and industrial research centers and start-ups, and has held executive positions in ICT companies. He is a professor at Khalifa University in Abu Dhabi and founding director of the Khalifa University 6G Research Center.
He has been working at the interface of AI and telecommunications, and in 2021 pioneered the development of NOOR, the first Arabic LLM. Further reading & resources: Polis — a real-time participation platform; Recursive Public by vTaiwan; Noor — the first LLM trained on the Arabic language; Falcon Foundation; Buy AI Snake Oil by Sayash Kapoor and Arvind Narayanan…
This week Alix was joined by Kevin De Liban, who just launched Techtonic Justice, an organisation designed to support and fight for those harmed by AI systems. In this episode Kevin describes his experiences litigating on behalf of people in Arkansas who found their in-home care hours cut aggressively by an algorithm administered by the state. This is a story about taking care away from individuals in the name of ‘efficiency’, and the particular levers for justice that Kevin and his team managed to take advantage of to eventually ban the use of this algorithm in Arkansas. CW: This episode contains descriptions of people being denied care and left in undignified situations at around 08:17–08:40 and 27:12–28:07. Further reading & resources: Techtonic Justice. Kevin De Liban is the founder of Techtonic Justice, and the Director of Advocacy at Legal Aid of Arkansas, nurturing multi-dimensional efforts to improve the lives of low-income Arkansans in matters of health, workers' rights, safety net benefits, housing, consumer rights, and domestic violence. With Legal Aid, he has led a successful litigation campaign in federal and state courts challenging Arkansas's use of an algorithm to cut vital Medicaid home-care benefits to individuals who have disabilities or are elderly.…
Computer Says Maybe

This week we’re wallowing in post-election catharsis: Alix and Prathm process the result together, and discuss the implications this administration has for technology politics. How much of a role will people like Elon Musk and Peter Thiel play during Trump’s presidency? What kind of tactics should the left adopt going forward to stop this from happening again? And what does this mean for the technology politics community? This episode was recorded on Wednesday the 6th of November; we don’t have all the answers but we know we want to move forward and have never been more motivated to make change happen.…
For this pre-election special, Prathm spoke with law professor Spencer Overton about how this election has — and hasn’t — been impacted by AI systems. Misinformation and deepfakes appear to be top of the agenda for a lot of politicians and commentators, but there’s a lot more to think about… Spencer discusses the USA’s transition into a multiracial democracy and describes the ongoing cultural anxiety that comes with it — and how that filters down into the politicisation of AI tools, both as fuel for moral panics and as instruments used to suppress voters of colour. Further reading: Artificial Intelligence for Electoral Management | International IDEA; Overcoming Racial Harms to Democracy from Artificial Intelligence by Spencer Overton :: SSRN; AI’s impact on elections is being overblown | MIT Technology Review; Effects of Shelby County v. Holder on the Voting Rights Act | Brennan Center for Justice. Spencer Overton is the Patricia Roberts Harris Research Professor at GW Law School. As the Director of the Multiracial Democracy Project at the GW Equity Institute, he focuses on producing and supporting research that grapples with challenges to a well-functioning multiracial democracy. He is currently working on research projects related to the regulation of AI to facilitate a well-functioning multiracial democracy and the implications of alternative voting systems for multiracial democracy.…