Artificial Intelligence has too much hype. In this podcast, linguist Emily M. Bender and sociologist Alex Hanna break down the AI hype, separate fact from fiction, and science from bloviation. They're joined by special guests and talk about everything, from machine consciousness to science fiction, to political economy to art made by machines.
Episode 46: AGI Funny Business (Model), with Brian Merchant, December 2, 2024

Once upon a time, artificial general intelligence was the only business plan OpenAI seemed to have. Tech journalist Brian Merchant joins Emily and Alex for a time warp to the beginning of the current wave of AI hype, nearly a decade ago. And it sure seemed like Elon Musk, Sam Altman, and company were luring investor dollars to their newly formed venture solely on the hand-wavy promise that someday, LLMs themselves would figure out how to turn a profit.

Brian Merchant is an author, journalist in residence at the AI Now Institute, and co-host of the tech news podcast System Crash.

References:
Elon Musk and partners form nonprofit to stop AI from ruining the world
How Elon Musk and Y Combinator Plan to Stop Computers From Taking Over
Elon Musk's Billion-Dollar AI Plan Is About Far More Than Saving the World
Brian's recent report on the business model of AGI, for the AI Now Institute: AI Generated Business: The rise of AGI and the rush to find a working revenue model

Previously on MAIHT3K: Episode 21: The True Meaning of 'Open Source' (feat. Sarah West and Andreas Liesenfeld)

Fresh AI Hell:
OpenAI explores advertising as it steps up revenue drive
If an AI company ran Campbell's Soup with the same practices they use to handle data
Humans are the new 'luxury item'
Itching to write a book? AI publisher Spines wants to make a deal
A company pitched Emily her own 'verified avatar'
Don't upload your medical images to chatbots
A look at a pilot program in Georgia that uses 'jailbots' to track inmates

You can check out future livestreams on Twitch.
Our book, 'The AI Con,' comes out in May! Pre-order your copy now.
Subscribe to our newsletter via Buttondown.

Follow us!
Emily
Bluesky: emilymbender.bsky.social
Mastodon: dair-community.social/@EmilyMBender
Alex
Bluesky: alexhanna.bsky.social
Mastodon: dair-community.social/@alex
Twitter: @alexhanna

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
--------
1:02:35
Episode 45: Billionaires, Influencers, and Ed Tech (feat. Adrienne Williams), November 18, 2024

From Bill Gates to Mark Zuckerberg, billionaires with no education expertise keep using their big names and big dollars to hype LLMs for classrooms. Promising 'comprehensive AI tutors' or just 'educator-informed' tools to address understaffed classrooms, this hype is just another round of Silicon Valley pointing to real problems -- under-supported school systems -- but then directing attention and resources to their favorite toys. Former educator and DAIR research fellow Adrienne Williams joins to explain the problems this tech-solutionist redirection fails to solve, and the new ones it creates.

Adrienne Williams started organizing in 2018 while working as a junior high teacher for a tech-owned charter school. She expanded her organizing in 2020 after her work as an Amazon delivery driver, where many of the same issues she saw in charter schools were also in evidence. Adrienne is a Public Voices Fellow on Technology in the Public Interest with The OpEd Project in partnership with the MacArthur Foundation, as well as a Research Fellow at both DAIR and Just Tech.

References:
Funding Helps Teachers Build AI Tools
Sal Khan's 2023 TED Talk: AI in the classroom can transform education
Bill Gates: My trip to the frontier of AI education
Background: Cory Booker Hates Public Schools
Background: Cory Booker's track record on education
Book: Access is Capture: How Edtech Reproduces Racial Inequality
Book: Disruptive Fixation: School Reform and the Pitfalls of Techno-Idealism

Previously on MAIHT3K:
Episode 26: Universities Anxiously Buy Into the Hype (feat. Chris Gilliard)
Episode 17: Back to School with AI Hype in Education (feat. Haley Lepp)

Fresh AI Hell:
"Streamlining" teaching
Google, Microsoft and Perplexity are promoting scientific racism in 'AI overviews'
'Whisper' medical transcription tool used in hospitals is making things up
X's AI bot can't tell the difference between a bad game and vandalism
Prompting is not a substitute for probability measurements in large language models
Yet another 'priestbot'
Self-driving wheelchairs at Seattle-Tacoma International Airport

You can check out future livestreams on Twitch.
Our book, 'The AI Con,' comes out in May! Pre-order your copy now.
Subscribe to our newsletter via Buttondown.

Follow us!
Emily
Bluesky: emilymbender.bsky.social
Mastodon: dair-community.social/@EmilyMBender
Alex
Bluesky: alexhanna.bsky.social
Mastodon: dair-community.social/@alex
Twitter: @alexhanna

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
--------
1:00:33
Episode 44: OpenAI's Ridiculous 'Reasoning', October 28, 2024

The company behind ChatGPT is back with the bombastic claim that their new o1 model is capable of so-called "complex reasoning." Ever faithful, Alex and Emily tear it apart. Plus the flaws in a tech publication's new 'AI hype index,' and some palate-cleansing new regulation against data-scraping worker surveillance.

References:
OpenAI: Learning to reason with LLMs
How reasoning works
GPQA, a 'graduate-level' Q&A benchmark system

Fresh AI Hell:
MIT Technology Review's 'AI hype index'
CFPB Takes Action to Curb Unchecked Worker Surveillance

You can check out future livestreams on Twitch.
Our book, 'The AI Con,' comes out in May! Pre-order your copy now.
Subscribe to our newsletter via Buttondown.

Follow us!
Emily
Bluesky: emilymbender.bsky.social
Mastodon: dair-community.social/@EmilyMBender
Alex
Bluesky: alexhanna.bsky.social
Mastodon: dair-community.social/@alex
Twitter: @alexhanna

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
--------
1:00:11
Episode 43: AI Companies Gamble with Everyone's Planet (feat. Paris Marx), October 21, 2024

Technology journalist Paris Marx joins Alex and Emily for a conversation about the environmental harms of the giant data centers and other water- and energy-hungry infrastructure at the heart of LLMs and other generative tools like ChatGPT -- and why the hand-wavy assurances of CEOs that 'AI will fix global warming' are just magical thinking, ignoring a genuine climate cost and imperiling the clean energy transition in the US.

Paris Marx is a tech journalist and host of the podcast Tech Won't Save Us. He also recently launched a four-part series, Data Vampires (which features Alex), about the promises and pitfalls of data centers like the ones AI boosters rely on.

References:
Eric Schmidt says AI more important than climate goals
Microsoft's sustainability report
Sam Altman's "The Intelligence Age" promises AI will fix the climate crisis

Previously on MAIHT3K: Episode 19: The Murky Climate and Environmental Impact of Large Language Models, November 6, 2023

Fresh AI Hell:
Rosetta to linguists: "Embrace AI or risk extinction" of endangered languages
A talking collar that you can use to pretend to talk with your pets
Google offers synthetic podcasts through NotebookLM
An AI 'artist' claims he's losing millions of dollars from people stealing his work
University hiring English professor to teach... prompt engineering

You can check out future livestreams on Twitch.
Our book, 'The AI Con,' comes out in May! Pre-order your copy now.
Subscribe to our newsletter via Buttondown.

Follow us!
Emily
Bluesky: emilymbender.bsky.social
Mastodon: dair-community.social/@EmilyMBender
Alex
Bluesky: alexhanna.bsky.social
Mastodon: dair-community.social/@alex
Twitter: @alexhanna

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
--------
1:01:22
Episode 42: Stop Trying to Make 'AI Scientist' Happen, September 30, 2024

Can "AI" do your science for you? Should it be your co-author? Or, as one company asks, boldly and breathlessly, "Can we automate the entire process of research itself?"

Major scientific journals have banned the use of tools like ChatGPT in the writing of research papers. But people keep trying to make "AI Scientists" a thing. Just ask your chatbot for some research questions, or have it synthesize some human subjects to save you time on surveys.

Alex and Emily explain why so-called "fully automated, open-ended scientific discovery" can't live up to the grandiose promises of tech companies. Plus, an update on their forthcoming book!

References:
Sakana.AI keeps trying to make 'AI Scientist' happen
The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery
Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers
How should the advent of large language models affect the practice of science?

Relevant research ethics policies:
ACL Policy on Publication Ethics
Committee on Publication Ethics (COPE)
The Vancouver Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work

Fresh AI Hell:
Should journals allow LLMs as co-authors?
Business Insider "asks ChatGPT"
Otter.ai sends transcript of private after-meeting discussion to everyone
"Could AI End Grief?"
AI generated crime scene footage
"The first college of nursing to offer an MSN in AI"
FTC cracks down on "AI" claims

You can check out future livestreams on Twitch.
Our book, 'The AI Con,' comes out in May! Pre-order your copy now.
Subscribe to our newsletter via Buttondown.

Follow us!
Emily
Bluesky: emilymbender.bsky.social
Mastodon: dair-community.social/@EmilyMBender
Alex
Bluesky: alexhanna.bsky.social
Mastodon: dair-community.social/@alex
Twitter: @alexhanna

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.