Content provided by Itzik Ben-Shabat. All podcast content, including episodes, artwork, and podcast descriptions, is uploaded and provided directly by Itzik Ben-Shabat or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://fr.player.fm/legal.

Reverse Engineering SSL - Ravid Shwartz-Ziv

1:08:59
 

Manage episode 385086437 series 3300270
Welcome to another exciting episode of the Talking Papers Podcast! In this episode, we delve into the fascinating world of self-supervised learning with our special guest, Ravid Shwartz-Ziv. Together, we explore and dissect their research paper "Reverse Engineering Self-Supervised Learning," published at NeurIPS 2023.
Self-supervised learning (SSL) has emerged as a game-changing technique in the field of machine learning. However, understanding the learned representations and their underlying mechanisms has remained a challenge - until now. Ravid Shwartz-Ziv's paper provides an in-depth empirical analysis of SSL-trained representations, encompassing various models, architectures, and hyperparameters.
The study uncovers a captivating aspect of the SSL training process - its inherent ability to facilitate the clustering of samples based on semantic labels. Surprisingly, this clustering is driven by the regularization term in the SSL objective. Not only does this process enhance downstream classification performance, but it also exhibits a remarkable power of data compression. The paper further establishes that SSL-trained representations align more closely with semantic classes than random classes, even across different hierarchical levels. What's more, this alignment strengthens during training and as we venture deeper into the network.
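To make the clustering-alignment idea concrete, here is a minimal, self-contained sketch (my own toy illustration, not the paper's actual metrics, datasets, or code): given learned representations, we measure the fraction of samples whose nearest neighbour in representation space shares their label, and compare that score under true semantic labels versus randomly shuffled labels. Well-clustered representations score high on the former and near chance on the latter.

```python
import random

def neighbor_label_agreement(reps, labels):
    """Fraction of samples whose nearest neighbour (squared Euclidean) shares their label."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    hits = 0
    for i, r in enumerate(reps):
        # Index of the nearest other sample in representation space.
        j = min((k for k in range(len(reps)) if k != i),
                key=lambda k: dist2(r, reps[k]))
        hits += labels[i] == labels[j]
    return hits / len(reps)

random.seed(0)
# Toy "representations": two well-separated 2D clusters standing in for two semantic classes.
reps = [(random.gauss(c * 5, 1.0), random.gauss(c * 5, 1.0))
        for c in (0, 1) for _ in range(20)]
semantic = [c for c in (0, 1) for _ in range(20)]
shuffled = semantic[:]
random.shuffle(shuffled)

# High agreement: the clusters line up with the semantic labels.
print(neighbor_label_agreement(reps, semantic))
# Near-chance agreement: the same geometry does not align with random labels.
print(neighbor_label_agreement(reps, shuffled))
```

The gap between the two scores is a crude proxy for the semantic alignment the paper studies far more rigorously across models, layers, and label hierarchies.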
Join us as we discuss the insights gained from this exceptional research. One remarkable aspect of the paper is its departure from the trend of focusing solely on outperforming competitors. Instead, it dives deep into understanding the semantic clustering effect of SSL techniques, shedding light on the underlying capabilities of the tools we commonly use. It is truly a genre of research that holds immense value.
During our conversation, Ravid Shwartz-Ziv - a CDS Faculty Fellow at NYU Center for Data Science - shares their perspectives and insights, providing an enriching layer to our exploration. Interestingly, despite both of us being in Israel at the time of recording, we had never met in person, highlighting the interconnectedness and collaborative nature of the academic world.
Don't miss this thought-provoking episode that promises to expand your understanding of self-supervised learning and its impact on representation learning mechanisms. Subscribe to our channel now, join the discussion, and let us know your thoughts in the comments below!
All links and resources are available in the blogpost: https://www.itzikbs.com/revenge_ssl

🎧Subscribe on your favourite podcast app: https://talking.papers.podcast.itzikbs.com

📧Subscribe to our mailing list: http://eepurl.com/hRznqb

🐦Follow us on Twitter: https://twitter.com/talking_papers

🎥YouTube Channel: https://bit.ly/3eQOgwP


35 episodes

