E82: The Rhetoric of AI Hype (w/ Dr. Emily M. Bender)

Duration: 53:02

Are you a writing instructor or student who’s prepared to turn over all present and future communication practices to the magic of ChatGPT? Not so fast! On today’s show, we are joined by Dr. Emily M. Bender, Professor in the Department of Linguistics at the University of Washington and a pre-eminent academic critic of so-called “generative AI” technologies. Dr. Bender’s expertise spans not only how these technologies work computationally, but also how language is used in popular media to hype, normalize, and even obfuscate AI and its potential effects on our lives.

Dr. Bender’s best-known scholarly work on this topic is a co-authored 2021 conference paper entitled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In our conversation, Emily explains why she and her co-authors chose the “stochastic parrot” metaphor, and how it helps us understand large language models and related technologies more accurately than many competing metaphors. We go on to discuss several genuinely high-stakes issues these technologies raise, before Dr. Bender provides a helpful index of some of the most troublesome ways they are talked about in the media: synthetic text “gotcha”s, infancy metaphors, linear models of progress, inevitability framings, and many other misleading tropes.
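
For listeners new to the metaphor, the intuition is that a language model produces text by sampling likely next words from patterns in its training data, with no access to meaning. The sketch below is our own toy illustration, not code from the episode or the paper, and every name in it is hypothetical; it shows the idea with a bare-bones bigram “parrot”:

```python
# Toy "stochastic parrot": a bigram sampler that regurgitates word
# sequences from its training text purely via form-based statistics.
# Illustrative only -- real LLMs are vastly more sophisticated -- but
# the principle (sampling a likely next token, no meaning involved)
# is the same.
import random
from collections import defaultdict

corpus = (
    "language models predict the next word from probabilistic "
    "information about how words co-occur in their training data"
).split()

# For each word, record every word observed to follow it.
successors = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev].append(nxt)

def parrot(start, length=12):
    """Generate text by repeatedly sampling an observed next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in successors:  # dead end: no observed successor
            break
        word = random.choice(successors[word])  # the "stochastic" step
        output.append(word)
    return " ".join(output)

print(parrot("language"))
```

However much larger the statistical machinery gets, the sketch captures the point the metaphor makes: the system stitches together linguistic forms; it does not understand them.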

We conclude with a close reading of a recent piece in the Chronicle of Higher Education about using synthetic text generators in writing classrooms: “Why I’m Excited About ChatGPT” by Jenny Young. Young’s article exemplifies many of the tropes Emily discussed earlier, while also capturing many of the strange prevailing ideas about writing pedagogy, genre, and rhetoric in general. We hope that you enjoy this podcast tour through the world of AI hype media, and we ask that you please remain non-synthetic ‘til next time – no shade to parrots! 🦜

Check out the podcast Mystery AI Hype Theater 3000, hosted by Dr. Emily M. Bender and Alex Hanna:

Audio Podcast

Video Podcast

Works & Concepts Cited in this episode:

Baria, A. T., & Cross, K. (2021). The brain is a computer is a brain: Neuroscience’s internal debate and the social significance of the Computational Metaphor. arXiv preprint arXiv:2107.14042.

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big? 🦜 In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).

Bender, E. M., & Koller, A. (2020, July). Climbing towards NLU: On meaning, form, and understanding in the age of data. In Proceedings of the 58th annual meeting of the association for computational linguistics (pp. 5185-5198).

Birhane, A., Kalluri, P., Card, D., Agnew, W., Dotan, R., & Bao, M. (2022, June). The values encoded in machine learning research. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 173-184).

Birhane, A. (2021). Algorithmic injustice: A relational ethics approach. Patterns, 2(2).

Doyle-Burke, D., & Smith, J. (Hosts). (2020, July 8). Episode 17: Science fiction, science fact, and AI consciousness with Beth Singler [Audio podcast episode]. The Radical AI Podcast. Radical AI.

Dubal, V. (2023). On algorithmic wage discrimination. Washington Center for Equitable Growth Working Papers Series.

Fahnestock, J. (1986). Accommodating science: The rhetorical life of scientific facts. Written Communication, 3(3), 275-296.

Greene, D., Hoffmann, A. L., & Stark, L. (2019). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. Proceedings of the 52nd Hawaii International Conference on System Sciences.

Li, H. (2022). Language models: Past, present, and future. Communications of the ACM, 65(7), 56-63. [Provides an overview and context for Claude Shannon’s and Andrey Markov’s work on language modeling]

McDermott, D. (1976). Artificial intelligence meets natural stupidity. ACM SIGART Bulletin, (57), 4-9. [On the concept of AI as a “wishful mnemonic”]

Mitchell, M. (2021). Why AI is harder than we think. arXiv preprint arXiv:2104.12871. [More on AI as a “wishful mnemonic”]

Quintarelli, S. (2019, November 4). Let’s forget the term AI. Let’s call them Systematic Approaches to Learning Algorithms and Machine Inferences (SALAMI). Quinta’s weblog.

Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. arXiv preprint arXiv:1906.02243.

Torres, É. P. (2023, June 15). The acronym behind our wildest AI dreams and nightmares. TruthDig. [Émile Torres on the background behind the TESCREAL ideologies]

An accessible transcript of this episode is available here.
