Content provided by Machine Learning Street Talk (MLST). All podcast content, including episodes, artwork, and podcast descriptions, is uploaded and provided directly by Machine Learning Street Talk (MLST) or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://fr.player.fm/legal.

#99 - CARLA CREMER & IGOR KRAWCZUK - X-Risk, Governance, Effective Altruism

Duration: 1:39:45

YT version (with references): https://www.youtube.com/watch?v=lxaTinmKxs0

Support us! https://www.patreon.com/mlst

MLST Discord: https://discord.gg/aNPkGUQtc5

Carla Cremer and Igor Krawczuk argue that AI risk should be understood as an old problem of politics, power and control, one with known solutions, and that threat models should be driven by empirical work. The relationship between FTX and the Effective Altruism community has sparked a great deal of discussion about the dangers of optimization, and Carla's Vox article highlights the need for an institutional turn when taking on a responsibility like risk management for humanity.

Carla's “Democratising Risk” paper found that certain types of risk fall through the cracks when they are simply sorted into categories such as climate change or biological risk. Deliberative democracy has been shown to be a better way of making such decisions, and AI tools could be used to scale it for good, but the transparency of those algorithms to the citizens using the platform must be taken into account.

Aggregating people's diverse ways of thinking about a problem into a risk-averse procedure makes it far more likely that the group converges on the best policy. There needs to be a very good reason to trust any single organization with the risk management of humanity, and all the different ways of thinking about risk must be taken into account.

The ambition of the EA community and Altruism Inc. is to protect and manage risk on behalf of the whole of humanity, and doing that effectively requires an institutional turn. The dangers of optimization are real, and it is essential that humanity's risk management is done properly and ethically. It is the aggregation of people's diverse ways of thinking about a problem into a risk-averse procedure that makes convergence on the best policy likely.

Carla Zoe Cremer

https://carlacremer.github.io/

Igor Krawczuk

https://krawczuk.eu/

Interviewer: Dr. Tim Scarfe

TOC:

[00:00:00] Introduction: Vox article and effective altruism / FTX

[00:11:12] Luciano Floridi on Governance and Risk

[00:15:50] Connor Leahy on alignment

[00:21:08] Ethan Caballero on scaling

[00:23:23] Alignment, Values and politics

[00:30:50] Singularitarians vs AI-theists

[00:41:56] Consequentialism

[00:46:44] Does scale make a difference?

[00:51:53] Carla's Democratising Risk paper

[01:04:03] Vox article - How effective altruists ignored risk

[01:20:18] Does diversity breed complexity?

[01:29:50] Collective rationality

[01:35:16] Closing statements
