
Content provided by re:verb, Calvin Pollak, and Alex Helberg. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by re:verb, Calvin Pollak, and Alex Helberg or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://fr.player.fm/legal.

E75: A.I. Writing and Academic Integrity

Duration: 32:08
 

“A.I. Is Making It Easier Than Ever for Students to Cheat,” proclaims Slate. The Chronicle of Higher Education asks, rhetorically: “Will Artificial Intelligence Kill College Writing?” And the New York Times warns: “A.I. is Mastering Language. Should We Trust What It Says?” Judging by media coverage of A.I. writing algorithms, you would think they’re on the verge of becoming the easiest and most effective cheating option for writing students. But is the situation really that simple?

To help us answer this question, we’re joined on the show this week by Dr. S. Scott Graham, associate professor in the Department of Rhetoric & Writing at the University of Texas at Austin. Scott uses artificial intelligence and machine learning to study communication in bioscience and health policy, with special attention to bioethics, conflicts of interest, and health AI. Most recently, he published an Inside Higher Ed piece about A.I. and college student writing entitled “AI-Generated Essays Are Nothing to Worry About.”

In our wide-ranging conversation, Scott explains why he describes himself as “cautiously optimistic” about A.I. despite the many well-documented problems with A.I. in areas such as policing and healthcare. He also helps us understand the limits of A.I. as a cheating tool as well as its affordances for teaching genre in college writing. We go on to discuss some potentially concerning non-academic applications of algorithmic writing, before concluding that, as with A.I. policing and healthcare, the problems we might attribute to A.I. tend to be better understood as problems endemic to our structurally unequal society. Scott also points out that writing algorithms are certain to create conflicts between different people’s (and disciplines’) ideas about language and “authentic” subjectivity, and that rhetorical scholars have an important role to play in helping people outside our discipline understand why we tend to express skepticism toward notions of “authenticity” in writing.

Works and Concepts Referenced in this Episode:

Scott’s great article in Inside Higher Ed: “AI-Generated Essays Are Nothing to Worry About”

Graham, S. S. (2015). The Politics of Pain Medicine. University of Chicago Press.

Graham, S. S. (2020). Where's the Rhetoric? Imagining a Unified Field. The Ohio State University Press.

Graham, S. S. (2022). The Doctor and the Algorithm: Promise, Peril, and the Future of Health AI. Oxford University Press.


