#107 - Dr. RAPHAËL MILLIÈRE - Linguistics, Theory of Mind, Grounding

1:43:54
 

Support us! https://www.patreon.com/mlst

MLST Discord: https://discord.gg/aNPkGUQtc5

Dr. Raphaël Millière is the 2020 Robert A. Burt Presidential Scholar in Society and Neuroscience in the Center for Science and Society, and a Lecturer in the Philosophy Department at Columbia University. His research draws on his expertise in philosophy and cognitive science to explore the implications of recent progress in deep learning for models of human cognition, as well as various issues in ethics and aesthetics. He is also investigating what underlies the capacity to represent oneself as oneself at a fundamental level, in humans and non-human animals, and the role that self-representation plays in perception, action, and memory. In a world where technology is rapidly advancing, Dr. Millière strives to gain a better understanding of how artificial neural networks work, and to establish fair and meaningful comparisons between humans and machines across various domains in order to shed light on the implications of artificial intelligence for our lives.

https://www.raphaelmilliere.com/

https://twitter.com/raphaelmilliere

If you prefer, here is a version with hesitation sounds like "um" removed (I didn't notice them personally): https://share.descript.com/view/aGelyTl2xpN

YT: https://www.youtube.com/watch?v=fhn6ZtD6XeE

TOC:

Intro to Raphael [00:00:00]

Intro: Moving Beyond Mimicry in Artificial Intelligence (Raphael Millière) [00:01:18]

Show Kick-off [00:07:10]

LLMs [00:08:37]

Semantic Competence/Understanding [00:18:28]

Forming Analogies/JPEG Compression Article [00:30:17]

Compositional Generalisation [00:37:28]

Systematicity [00:47:08]

Language of Thought [00:51:28]

BIG-bench (Conceptual Combinations) [00:57:37]

Symbol Grounding [01:11:13]

World Models [01:26:43]

Theory of Mind [01:30:57]

References (truncated; the full list is in the YouTube video description):

Moving Beyond Mimicry in Artificial Intelligence (Raphael Millière)

https://nautil.us/moving-beyond-mimicry-in-artificial-intelligence-238504/

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 (Bender et al.)

https://dl.acm.org/doi/10.1145/3442188.3445922

ChatGPT Is a Blurry JPEG of the Web (Ted Chiang)

https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web

The Debate Over Understanding in AI's Large Language Models (Melanie Mitchell)

https://arxiv.org/abs/2210.13966

Talking About Large Language Models (Murray Shanahan)

https://arxiv.org/abs/2212.03551

Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data (Bender & Koller)

https://aclanthology.org/2020.acl-main.463/

The symbol grounding problem (Stevan Harnad)

https://arxiv.org/html/cs/9906002

Why the Abstraction and Reasoning Corpus is interesting and important for AI (Mitchell)

https://aiguide.substack.com/p/why-the-abstraction-and-reasoning

Linguistic relativity (Sapir–Whorf hypothesis)

https://en.wikipedia.org/wiki/Linguistic_relativity

Cooperative principle (Grice's four maxims of conversation - quantity, quality, relation, and manner)

https://en.wikipedia.org/wiki/Cooperative_principle
