
17 - Training for Very High Reliability with Daniel Ziegler

1:00:59
 
Content provided by Daniel Filan. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Daniel Filan or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://fr.player.fm/legal.

Sometimes, people talk about making AI systems safe by taking examples where they fail and training them to do well on those. But how can we actually do this well, especially when we can't use a computer program to say what a 'failure' is? In this episode, I speak with Daniel Ziegler about his research group's efforts to try doing this with present-day language models, and what they learned.
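
For a concrete picture of the kind of loop described above, here is a minimal illustrative sketch in Python. It is not the paper's code: the names and types (train_model, find_candidate_failures, human_label) are placeholder assumptions standing in for a fine-tuned language-model classifier, tool-assisted human adversaries, and human labellers.

```python
from typing import Callable, List, Tuple

# Placeholder types for the sketch: an input snippet and a human judgement.
Example = str
Label = bool  # True = "this output would be a catastrophe", as judged by a human

Classifier = Callable[[Example], Label]

def adversarial_training_loop(
    train_model: Callable[[List[Tuple[Example, Label]]], Classifier],
    find_candidate_failures: Callable[[Classifier], List[Example]],
    human_label: Callable[[Example], Label],
    seed_data: List[Tuple[Example, Label]],
    num_rounds: int = 3,
) -> Classifier:
    """Repeatedly retrain a classifier on failures confirmed by human labellers."""
    data = list(seed_data)
    classifier = train_model(data)
    for _ in range(num_rounds):
        # Adversaries search for inputs the current classifier seems to get wrong.
        # Because there is no programmatic definition of 'failure', each candidate
        # is judged by a human before it is added to the training set.
        for example in find_candidate_failures(classifier):
            data.append((example, human_label(example)))
        # Retrain on the seed data plus every failure confirmed so far.
        classifier = train_model(data)
    return classifier
```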

Listeners beware: this episode contains a spoiler for the Animorphs franchise around minute 41 (in the 'Fanfiction' section of the transcript).

Topics we discuss, and timestamps:

- 00:00:40 - Summary of the paper

- 00:02:23 - Alignment as scalable oversight and catastrophe minimization

- 00:08:06 - Novel contributions

- 00:14:20 - Evaluating adversarial robustness

- 00:20:26 - Adversary construction

- 00:35:14 - The task

- 00:38:23 - Fanfiction

- 00:42:15 - Estimators to reduce labelling burden

- 00:45:39 - Future work

- 00:50:12 - About Redwood Research

The transcript: axrp.net/episode/2022/08/21/episode-17-training-for-very-high-reliability-daniel-ziegler.html

Daniel Ziegler on Google Scholar: scholar.google.com/citations?user=YzfbfDgAAAAJ

Research we discuss:

- Daniel's paper, Adversarial Training for High-Stakes Reliability: arxiv.org/abs/2205.01663

- Low-stakes alignment: alignmentforum.org/posts/TPan9sQFuPP6jgEJo/low-stakes-alignment

- Red Teaming Language Models with Language Models: arxiv.org/abs/2202.03286

- Uncertainty Estimation for Language Reward Models: arxiv.org/abs/2203.07472

- Eliciting Latent Knowledge: docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit
