Beyond Periodicity - Sameera Ramasinghe

32:46

In this episode of the Talking Papers Podcast, I hosted Sameera Ramasinghe. We had a great chat about his paper "Beyond Periodicity: Towards a Unifying Framework for Activations in Coordinate-MLPs", published as an oral presentation at ECCV 2022.
In this paper, the authors propose a new family of activation functions for coordinate MLPs and provide a theoretical analysis of their effectiveness. Their main proposition is that the stable rank is a good measure and design tool for such activation functions. They show that the proposed activations outperform the traditional ReLU and sine activations for image parametrization and novel view synthesis. They further show that, while the proposed activations do not require positional encoding, they can still benefit from it, significantly reducing the number of layers needed.
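
To make the idea concrete, here is a minimal, hypothetical sketch (in PyTorch, not the authors' code) of a coordinate MLP that replaces a periodic sine activation with a Gaussian activation, exp(-z²/(2σ²)), one simple non-periodic choice in the spirit of the paper. The network width, depth, σ, and the random target image below are arbitrary placeholders for illustration only.

```python
# Minimal sketch (not the authors' implementation): a coordinate MLP mapping
# (x, y) pixel coordinates to RGB values, using a Gaussian activation
# exp(-z^2 / (2*sigma^2)) instead of ReLU or sine. Width, depth and sigma are
# illustrative choices, not values taken from the paper.
import torch
import torch.nn as nn


class GaussianActivation(nn.Module):
    def __init__(self, sigma: float = 0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, z):
        return torch.exp(-z ** 2 / (2 * self.sigma ** 2))


class CoordinateMLP(nn.Module):
    def __init__(self, in_dim=2, hidden=256, depth=4, out_dim=3, sigma=0.1):
        super().__init__()
        layers = []
        dim = in_dim
        for _ in range(depth):
            layers += [nn.Linear(dim, hidden), GaussianActivation(sigma)]
            dim = hidden
        layers.append(nn.Linear(dim, out_dim))  # linear output head (RGB)
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)


# Usage: fit a single image by regressing RGB from normalized pixel coordinates.
if __name__ == "__main__":
    H = W = 64
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
    )
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)  # (H*W, 2) inputs
    target = torch.rand(H * W, 3)                          # placeholder image

    model = CoordinateMLP()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(200):
        pred = model(coords)
        loss = ((pred - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Note that swapping the activation is the only change relative to a SIREN-style network, and no positional encoding of the input coordinates is used in this sketch.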

Sameera is currently an applied scientist at Amazon and the CTO and co-founder of ConscientAI. His research focuses on theoretical machine learning and computer vision. This work was done while he was a postdoc at the Australian Institute for Machine Learning (AIML). He completed his PhD at the Australian National University (ANU). We first met back in 2019, when I was a research fellow at ANU and he was still doing his PhD. I immediately noticed that we share research interests, and before long I flagged him as a rising star in the field. It was a pleasure to chat with Sameera, and I am looking forward to reading his future papers.
AUTHORS

Sameera Ramasinghe, Simon Lucey
RELATED PAPERS

📚NeRF

📚SIREN

📚"Fourier Features Let Networks Learn High-Frequency Functions in Low Dimensional Domains"

📚On the Spectral Bias of Neural Networks
LINKS AND RESOURCES

📚 Paper

💻Code

To stay up to date with Sameera's latest research, follow him on:

🐦Twitter

👨🏻‍🎓Google Scholar

👨🏻‍🎓LinkedIn
Recorded on November 14th, 2022.

🎧Subscribe on your favourite podcast app: https://talking.papers.podcast.itzikbs.com

📧Subscribe to our mailing list: http://eepurl.com/hRznqb

🐦Follow us on Twitter: https://twitter.com/talking_papers

🎥YouTube Channel: https://bit.ly/3eQOgwP
