Content provided by Alexandre Andorra. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Alexandre Andorra or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://fr.player.fm/legal.
Learning Bayesian Statistics
Are you a researcher or data scientist / analyst / ninja? Do you want to learn Bayesian inference, stay up to date, or simply understand what Bayesian inference is? Then this podcast is for you! You'll hear from researchers and practitioners of all fields about how they use Bayesian statistics, and how in turn YOU can apply these methods in your modeling workflow. When I started learning Bayesian methods, I really wished there were a podcast out there that could introduce me to the methods, the projects and the people who make all that possible. So I created "Learning Bayesian Statistics", where you'll get to hear how Bayesian statistics are used to detect dark matter in outer space, forecast elections, or understand how diseases spread and can ultimately be stopped. But this show is not only about successes -- it's also about failures, because that's how we learn best. So you'll often hear the guests talking about what *didn't* work in their projects, why, and how they overcame these challenges. Because, in the end, we're all lifelong learners! My name is Alex Andorra, by the way, and I live in Estonia. By day, I'm a data scientist and modeler at the PyMC Labs consultancy (https://www.pymc-labs.io/). By night, I don't (yet) fight crime, but I'm an open-source enthusiast and core contributor to the Python packages PyMC (https://docs.pymc.io/) and ArviZ (https://arviz-devs.github.io/arviz/). I also love election forecasting (https://www.pollsposition.com/) and, most importantly, Nutella. But I don't like talking about it – I prefer eating it. So, whether you want to learn Bayesian statistics or hear about the latest libraries, books and applications, this podcast is for you -- just subscribe! You can also support the show and unlock exclusive Bayesian swag on Patreon (https://www.patreon.com/learnbayesstats)!
176 episodes
All episodes
BITESIZE | Why is Bayesian Deep Learning so Powerful? 19:00
Today's clip is from episode 144 of the podcast, with Maurizio Filippone. In this conversation, Alex and Maurizio delve into the intricacies of Gaussian processes and their deep learning counterparts. They explain the foundational concepts of Gaussian processes, the transition to deep Gaussian processes, and the advantages they offer in modeling complex data. The discussion also touches on practical applications, model selection, and the evolving landscape of machine learning, particularly in relation to transfer learning and the integration of deep learning techniques with Gaussian processes. Get the full discussion here. Intro to Bayes Course (first 2 lessons free). Advanced Regression Course (first 2 lessons free). Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;) Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
#144 Why is Bayesian Deep Learning so Powerful, with Maurizio Filippone 1:28:22
Sign up for Alex's first live cohort, about Hierarchical Model building! Get 25% off "Building AI Applications for Data Scientists and Software Engineers". Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
Why GPs still matter: Gaussian Processes remain a go-to for function estimation, active learning, and experimental design – especially when calibrated uncertainty is non-negotiable.
Scaling GP inference: Variational methods with inducing points (as in GPflow) make GPs practical on larger datasets without throwing away principled Bayes.
MCMC in practice: Clever parameterizations and gradient-based samplers tighten mixing and efficiency; use MCMC when you need gold-standard posteriors.
Bayesian deep learning, pragmatically: Stochastic-gradient training and approximate posteriors bring Bayesian ideas to neural networks at scale.
Uncertainty that ships: Monte Carlo dropout and related tricks provide fast, usable uncertainty – even if they're approximations.
Model complexity ≠ model quality: Understanding capacity, priors, and inductive bias is key to getting trustworthy predictions.
Deep Gaussian Processes: Layered GPs offer flexibility for complex functions, with clear trade-offs in interpretability and compute.
Generative models through a Bayesian lens: GANs and friends benefit from explicit priors and uncertainty – useful for safety and downstream decisions.
Tooling that matters: Frameworks like GPflow lower the friction from idea to implementation, encouraging reproducible, well-tested modeling.
Where we're headed: The future of ML is uncertainty-aware by default – integrating UQ tightly into optimization, design, and deployment.
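The takeaways above all refer back to the exact GP posterior as the reference point for calibrated uncertainty. As a hedged illustration (not code from the episode, and plain NumPy rather than GPflow; the function names, kernel parameters, and noise level are arbitrary choices for this sketch), a minimal GP regression with a squared-exponential kernel looks like this:

```python
# Minimal exact-GP regression sketch (illustrative only).
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential kernel k(a, b) for 1-D inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

def gp_posterior(X, y, Xs, noise=0.1):
    """Exact GP posterior mean and variance at test points Xs."""
    K = rbf(X, X) + noise**2 * np.eye(len(X))   # noisy train covariance
    Ks = rbf(X, Xs)                             # train/test cross-covariance
    Kss = rbf(Xs, Xs)                           # test prior covariance
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha                         # posterior mean
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v                         # posterior covariance
    return mean, np.diag(cov)

X = np.array([-1.0, 0.0, 1.0])
y = np.sin(X)
mean, var = gp_posterior(X, y, np.array([0.0]))
# At an observed point the posterior mean tracks the data and the
# posterior variance shrinks well below the prior variance of 1.
```

The cubic cost of the Cholesky factorization here is exactly what the inducing-point variational methods mentioned in the takeaways are designed to avoid.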
Chapters:
08:44 Function Estimation and Bayesian Deep Learning
10:41 Understanding Deep Gaussian Processes
25:17 Choosing Between Deep GPs and Neural Networks
32:01 Interpretability and Practical Tools for GPs
43:52 Variational Methods in Gaussian Processes
54:44 Deep Neural Networks and Bayesian Inference
01:06:13 The Future of Bayesian Deep Learning
01:12:28 Advice for Aspiring Researchers
01:22:09 Tackling Global Issues with AI

Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Joshua Meehl, Javier Sabio, Kristian Higgins, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık, Suyog Chandramouli and Adam Tilmar Jakobsen.

Links from the show:
Maurizio's website: https://mauriziofilippone.github.io
Maurizio on Google Scholar: https://scholar.google.com/citations?user=ILUeAloAAAAJ&hl=en
GANs Secretly Perform Approximate Bayesian Model Selection: https://www.youtube.com/watch?v=pnfQ2_6jGl4
Videos of a couple of presentations on Bayesian Deep Learning:
Aalto University 2023: https://www.youtube.com/watch?v=R2T3Z-Y3LXM
AI Center in Prague 2023: https://www.youtube.com/watch?v=xE7TaQeLAXE
UC Irvine 2022: https://www.youtube.com/watch?v=oZAuh686ipw
Video of the discussion of the paper "Deep Gaussian Processes for Calibration of Computer Models", published in Bayesian Analysis as a discussion paper: https://www.youtube.com/watch?v=K_hPbvoo0_M
Lecture on Deep Gaussian Processes at DeepBayes 2019: https://www.youtube.com/watch?v=750fRY9-uq8
Lecture on Gaussian Processes at Deep Bayes 2018: https://www.youtube.com/watch?v=zBEV5ezyYmI
A tutorial on GPs with E. V. Bonilla at IJCAI in 2021: https://ebonilla.github.io/gaussianprocesses/
PyData Tutorial, Mastering Gaussian Processes with PyMC: https://github.com/AlexAndorra/advanced-gp-pydata#
LBS #136 Bayesian Inference at Scale: Unveiling INLA, with Haavard Rue & Janet van Niekerk: https://learnbayesstats.com/episode/136-bayesian-inference-at-scale-unveiling-inla-haavard-rue-janet-van-niekerk
LBS #129 Bayesian Deep Learning & AI for Science with Vincent Fortuin: https://learnbayesstats.com/episode/129-bayesian-deep-learning-ai-for-science-vincent-fortuin
LBS #107 Amortized Bayesian Inference with Deep Neural Networks, with Marvin Schmitt: https://learnbayesstats.com/episode/107-amortized-bayesian-inference-deep-neural-networks-marvin-schmitt
GPflow documentation: https://www.gpflow.org/
PyTorch docs: https://pytorch.org/
Pyro docs: https://pyro.ai/

Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
BITESIZE | Are Bayesian Models the Missing Ingredient in Nutrition Research? 23:14
Sign up for Alex's first live cohort, about Hierarchical Model building. Soccer Factor Model Dashboard. Today's clip is from episode 143 of the podcast, with Christoph Bamberg. Christoph shares his journey into Bayesian statistics and computational modeling, the challenges faced in academia, and the technical tools used in research. Alex and Christoph delve into a specific study on appetite regulation and cognitive performance, exploring the implications of framing in psychological research and the importance of careful communication in health-related contexts. Get the full discussion here. Intro to Bayes Course (first 2 lessons free). Advanced Regression Course (first 2 lessons free). Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;) Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
#143 Transforming Nutrition Science with Bayesian Methods, with Christoph Bamberg 1:12:56
Sign up for Alex's first live cohort, about Hierarchical Model building! Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! Intro to Bayes Course (first 2 lessons free). Advanced Regression Course (first 2 lessons free). Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
Bayesian mindset in psychology: Why priors, model checking, and full uncertainty reporting make findings more honest and useful.
Intermittent fasting & cognition: A Bayesian meta-analysis suggests effects are context- and age-dependent – and often small but meaningful.
Framing matters: The way we frame dietary advice (focus, flexibility, timing) can shape adherence and perceived cognitive benefits.
From cravings to choices: Appetite, craving, stress, and mood interact to influence eating and cognitive performance throughout the day.
Define before you measure: Clear definitions (and DAGs to encode assumptions) reduce ambiguity and guide better study design.
DAGs for causal thinking: Directed acyclic graphs help separate hypotheses from data pipelines and make causal claims auditable.
Small effects, big implications: Well-estimated "small" effects can scale to public-health relevance when decisions repeat daily.
Teaching by modeling: Helping students write models (not just run them) builds statistical thinking and scientific literacy.
Bridging lab and life: Balancing careful experiments with real-world measurement is key to actionable health-psychology insights.
Trust through transparency: Openly communicating assumptions, uncertainty, and limitations strengthens scientific credibility.
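Several takeaways above center on the Bayesian meta-analysis of intermittent-fasting effects. The core mechanism of such a meta-analysis is partial pooling: each study's estimate is shrunk toward the grand mean in proportion to its uncertainty. A minimal conjugate sketch of that idea (purely illustrative: the effects, standard errors, and between-study SD below are made up, and a real analysis would estimate all of these jointly with MCMC rather than plugging them in):

```python
# Conjugate partial-pooling sketch for a toy random-effects meta-analysis.
import numpy as np

y = np.array([0.30, -0.10, 0.20, 0.05])   # hypothetical per-study effects
s = np.array([0.15, 0.20, 0.10, 0.25])    # their standard errors
tau = 0.1                                  # assumed between-study SD

mu = np.average(y, weights=1 / s**2)       # plug-in grand mean

# Normal-normal conjugate update for each study's true effect:
prec = 1 / s**2 + 1 / tau**2               # posterior precision
shrunk = (y / s**2 + mu / tau**2) / prec   # posterior mean, shrunk toward mu
post_sd = np.sqrt(1 / prec)                # always tighter than the raw SE
```

Noisy studies (large `s`) get pulled hardest toward `mu`, which is exactly why well-estimated "small" effects survive pooling while noisy outliers get tempered.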
Chapters:
10:35 The Struggles of Bayesian Statistics in Psychology
22:30 Exploring Appetite and Cognitive Performance
29:45 Research Methodology and Causal Inference
36:36 Understanding Cravings and Definitions
39:02 Intermittent Fasting and Cognitive Performance
42:57 Practical Recommendations for Intermittent Fasting
49:40 Balancing Experimental Psychology and Statistical Modeling
55:00 Pressing Questions in Health Psychology
01:04:50 Future Directions in Research

Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Joshua Meehl, Javier Sabio, Kristian Higgins, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık, Suyog Chandramouli and Adam Tilmar Jakobsen.
Links from the show:
Sign up for Alex's first live cohort, about Hierarchical Model building: https://athlyticz.com/cohorts/alex-andorra/hierarchical
Christoph's website: https://christophbg.github.io/
Christoph on LinkedIn: https://www.linkedin.com/in/christoph-bamberg-2249a9a9/
The impact of dietary claims on behaviour – Expectations qualify how actual satiety affects cognitive performance: https://www.sciencedirect.com/science/article/pii/S0195666324006275
Bayesian meta-analysis on the effects of intermittent fasting on cognitive performance: https://doi.org/10.31234/osf.io/tgwep_v1
Randomised controlled trial on intermittent fasting, cognitive performance and mood: https://doi.org/10.1177/13591053251351204
LBS #35 The Past, Present & Future of BRMS, with Paul Bürkner: https://learnbayesstats.com/episode/35-past-present-future-brms-paul-burkner
LBS #112 Advanced Bayesian Regression, with Tomi Capretto: https://learnbayesstats.com/episode/112-advanced-bayesian-regression-tomi-capretto
LBS #89 Unlocking the Science of Exercise, Nutrition & Weight Management, with Eric Trexler: https://learnbayesstats.com/episode/89-unlocking-science-exercise-nutrition-weight-management-eric-trexler
LBS #137 Causal AI & Generative Models, with Robert Ness: https://learnbayesstats.com/episode/137-causal-ai-generative-models-robert-ness
Decentralised Construct Taxonomy: https://psycore.one/ – and an article about it: https://doi.org/10.15626/MP.2022.3638

Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
BITESIZE | How Bayesian Additive Regression Trees Work in Practice 22:49
Soccer Factor Model Dashboard. Unveiling True Talent: The Soccer Factor Model for Skill Evaluation. LBS #91, Exploring European Football Analytics, with Max Göbel. Get early access to Alex's next live-cohort courses! Today's clip is from episode 142 of the podcast, with Gabriel Stechschulte. Alex and Gabriel explore the re-implementation of BART (Bayesian Additive Regression Trees) in Rust, detailing the technical challenges and performance improvements achieved. They also share insights into the benefits of BART, such as uncertainty quantification, and its application in various data-intensive fields. Get the full discussion here. Intro to Bayes Course (first 2 lessons free). Advanced Regression Course (first 2 lessons free). Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;) Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
#142 Bayesian Trees & Deep Learning for Optimization & Big Data, with Gabriel Stechschulte 1:10:28
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! Get early access to Alex's next live-cohort courses! Intro to Bayes Course (first 2 lessons free). Advanced Regression Course (first 2 lessons free). Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
BART as a core tool: Gabriel explains how Bayesian Additive Regression Trees provide robust uncertainty quantification and serve as a reliable baseline model in many domains.
Rust for performance: His Rust re-implementation of BART dramatically improves speed and scalability, making it feasible for larger datasets and real-world IoT applications.
Strengths and trade-offs: BART avoids overfitting and handles missing data gracefully, though it is slower than other tree-based approaches.
Big data meets Bayes: Gabriel shares strategies for applying Bayesian methods with big data, including when variational inference helps balance scale with rigor.
Optimization and decision-making: He highlights how BART models can be embedded into optimization frameworks, opening doors for sequential decision-making.
Open source matters: Gabriel emphasizes the importance of communities like PyMC and Bambi, encouraging newcomers to start with small contributions.
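The "variational inference helps balance scale with rigor" point above can be made concrete with a toy example. This sketch (my illustration, not Gabriel's code) fits a Gaussian approximation q(z) = N(m, σ²) to a known Gaussian target by stochastic gradient ascent on the ELBO, using the reparameterization trick; the target parameters, learning rate, and sample counts are arbitrary choices for the demo:

```python
# Toy variational inference via reparameterized stochastic gradients.
import numpy as np

rng = np.random.default_rng(42)

mu_p, sig_p = 2.0, 0.5        # "posterior" we pretend is intractable: N(2, 0.5^2)
m, ls = 0.0, 0.0              # variational params: q(z) = N(m, exp(ls)^2)
lr, n_samples = 0.05, 64

for step in range(2000):
    eps = rng.normal(size=n_samples)
    sigma = np.exp(ls)
    z = m + sigma * eps                        # reparameterized samples z ~ q
    dlogp_dz = (mu_p - z) / sig_p**2           # d/dz log p(z), up to a constant
    grad_m = dlogp_dz.mean()                   # pathwise gradient w.r.t. m
    grad_ls = (dlogp_dz * sigma * eps).mean() + 1.0   # + 1 from the entropy term
    m += lr * grad_m                           # gradient *ascent* on the ELBO
    ls += lr * grad_ls

# m and exp(ls) drift toward the target's mean and SD (2.0 and 0.5).
```

Because the target is Gaussian, the variational optimum is exact here; with a non-Gaussian posterior the same machinery returns the closest Gaussian in KL, which is the scale-for-rigor trade the takeaway describes.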
Chapters:
05:10 – From economics to IoT and Bayesian statistics
18:55 – Introduction to BART (Bayesian Additive Regression Trees)
24:40 – Re-implementing BART in Rust for speed and scalability
32:05 – Comparing BART with Gaussian Processes and other tree methods
39:50 – Strengths and limitations of BART
47:15 – Handling missing data and different likelihoods
54:30 – Variational inference and big data challenges
01:01:10 – Embedding BART into optimization and decision-making frameworks
01:08:45 – Open source, PyMC, and community support
01:15:20 – Advice for newcomers
01:20:55 – Future of BART, Rust, and probabilistic programming

Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Joshua Meehl, Javier Sabio, Kristian Higgins, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık, Suyog Chandramouli and Adam Tilmar Jakobsen.
Links from the show:
Gabriel's website: https://gstechschulte.github.io/
Gabriel on LinkedIn: https://www.linkedin.com/in/gabrielstechschulte/
Gabriel on GitHub: https://github.com/GStechschulte
Gabriel on Bluesky: https://bsky.app/profile/gstechschulte.bsky.social
Gabriel on Google Scholar: https://scholar.google.com/citations?user=ood-6GIAAAAJ&hl=en
Rust implementation of PyMC-BART: https://github.com/GStechschulte/bart-rs
PyMC-BART: https://www.pymc.io/projects/bart/en/latest/
Reproducing Uber's Marketplace Optimization: https://gstechschulte.github.io/posts/2025-09-15-marketplace-optimization/
Alternating Direction Method of Multipliers (ADMM) for distributed budget allocation: https://github.com/GStechschulte/uber-admm
A Beginner's Guide to Variational Inference | PyData Virginia 2025: https://youtu.be/XECLmgnS6Ng?feature=shared
Associated GitHub repo: https://github.com/fonnesbeck/vi_pydata_virginia_2025
Bambi: https://bambinos.github.io/bambi/

Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
BITESIZE | How Probability Becomes Causality? 22:03
Get early access to Alex's next live-cohort courses! Today's clip is from episode 141 of the podcast, with Sam Witty. Alex and Sam discuss the ChiRho project, delving into the intricacies of causal inference, particularly focusing on do-calculus, regression discontinuity designs, and Bayesian structural causal inference. They explain ChiRho's design philosophy, emphasizing its modular and extensible nature, and highlight the importance of efficient estimation in causal inference, making complex statistical methods accessible to users without extensive expertise. Get the full discussion here. Intro to Bayes Course (first 2 lessons free). Advanced Regression Course (first 2 lessons free). Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;) Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
#141 AI Assisted Causal Inference, with Sam Witty 1:37:47
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! Get early access to Alex's next live-cohort courses! Enroll in the Causal AI workshop, to learn live with Alex (15% off if you're a Patron of the show). Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
Causal inference is crucial for understanding the impact of interventions in various fields.
ChiRho is a causal probabilistic programming language that bridges mechanistic and data-driven models.
ChiRho allows for easy manipulation of causal models and counterfactual reasoning.
The design of ChiRho emphasizes modularity and extensibility for diverse applications.
Causal inference requires careful consideration of assumptions and model structures.
Real-world applications of causal inference can lead to significant insights in science and engineering.
Collaboration and communication are key in translating causal questions into actionable models.
The future of causal inference lies in integrating probabilistic programming with scientific discovery.
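The observing-versus-intervening distinction at the heart of the do-operator can be sketched in a few lines of plain Python. This toy simulation illustrates the general idea only; it is not ChiRho code (ChiRho builds on Pyro and automates exactly this kind of mechanism surgery), and the structural equations and coefficients are invented for the example:

```python
# Toy structural causal model: Z confounds X and Y.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def simulate(do_x=None):
    """Z -> X, Z -> Y, X -> Y. Passing do_x overrides X's mechanism,
    which is what the do-operator means."""
    z = rng.normal(size=n)
    x = 0.8 * z + rng.normal(size=n) if do_x is None else np.full(n, do_x)
    y = 2.0 * x + 1.5 * z + rng.normal(size=n)
    return x, y

# Observational regression slope is biased upward by the confounder Z...
x, y = simulate()
obs_slope = np.cov(x, y)[0, 1] / np.var(x)

# ...while the interventional contrast recovers the true causal effect of 2.0.
_, y1 = simulate(do_x=1.0)
_, y0 = simulate(do_x=0.0)
ate = y1.mean() - y0.mean()
```

Replacing X's assignment rather than conditioning on its observed value is the entire content of do(X = x): the arrow from Z into X is cut, so Z no longer contaminates the X-Y comparison.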
Chapters:
05:53 Bridging Mechanistic and Data-Driven Models
09:13 Understanding Causal Probabilistic Programming
12:10 ChiRho and Its Design Principles
15:03 ChiRho's Functionality and Use Cases
17:55 Counterfactual Worlds and Mediation Analysis
20:47 Efficient Estimation in ChiRho
24:08 Future Directions for Causal AI
50:21 Understanding the Do-Operator in Causal Inference
56:45 ChiRho's Role in Causal Inference and Bayesian Modeling
01:01:36 Roadmap and Future Developments for ChiRho
01:05:29 Real-World Applications of Causal Probabilistic Programming
01:10:51 Challenges in Causal Inference Adoption
01:11:50 The Importance of Causal Claims in Research
01:18:11 Bayesian Approaches to Causal Inference
01:22:08 Combining Gaussian Processes with Causal Inference
01:28:27 Future Directions in Probabilistic Programming and Causal Inference

Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Joshua Meehl, Javier Sabio, Kristian Higgins, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık, Suyog Chandramouli and Adam Tilmar Jakobsen.
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)

Links from the show:
Sam's website: https://samwitty.github.io/
Sam on LinkedIn: https://www.linkedin.com/in/sam-witty-46708572/
Sam on GitHub: https://github.com/SamWitty
ChiRho docs: https://basisresearch.github.io/chirho/getting_started.html
Causal Inference using Gaussian Processes with Structured Latent Confounders: https://proceedings.mlr.press/v119/witty20a/witty20a.pdf
Automated Efficient Estimation using Monte Carlo Efficient Influence Functions: https://proceedings.neurips.cc/paper_files/paper/2024/file/1d10fe211f5139de49f94c6f0c7cecbe-Paper-Conference.pdf
PhD Thesis: https://samwitty.github.io/papers/Witty_Dissertation.pdf
LBS #137 Causal AI & Generative Models, with Robert Ness: https://learnbayesstats.com/episode/137-causal-ai-generative-models-robert-ness

Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
BITESIZE | How to Think Causally About Your Models? (24:01)
Get early access to Alex's next live-cohort courses ! Today’s clip is from episode 140 of the podcast, with Ron Yurko. Alex and Ron discuss the challenges of model deployment, and the complexities of modeling player contributions in team sports like soccer and football. They emphasize the importance of understanding replacement levels, the Going Deep framework in football analytics, and the need for proper modeling of expected points. Additionally, they share insights on teaching Bayesian modeling to students and the difficulties they face in grasping the concepts of model writing and application. Get the full discussion here . Intro to Bayes Course (first 2 lessons free) Advanced Regression Course (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work ! Visit our Patreon page to unlock exclusive Bayesian swag ;) Transcript This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.…
#140 NFL Analytics & Teaching Bayesian Stats, with Ron Yurko (1:33:01)
Get early access to Alex's next live-cohort courses ! Proudly sponsored by PyMC Labs , the Bayesian Consultancy. Book a call , or get in touch ! Intro to Bayes Course (first 2 lessons free) Advanced Regression Course (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work ! Visit our Patreon page to unlock exclusive Bayesian swag ;) Takeaways: Teaching students to write out their own models is crucial. Developing a sports analytics portfolio is essential for aspiring analysts. Modeling expectations in sports analytics can be misleading. Tracking data can significantly improve player performance models. Ron encourages students to engage in active learning through projects. The importance of understanding the dependency structure in data is vital. Ron aims to integrate more diverse sports analytics topics into his teaching. Chapters : 03:51 The Journey into Sports Analytics 15:20 The Evolution of Bayesian Statistics in Sports 26:01 Innovations in NFL WAR Modeling 39:23 Causal Modeling in Sports Analytics 46:29 Defining Replacement Levels in Sports 48:26 The Going Deep Framework and Big Data in Football 52:47 Modeling Expectations in Football Data 55:40 Teaching Statistical Concepts in Sports Analytics 01:01:54 The Importance of Model Building in Education 01:04:46 Statistical Thinking in Sports Analytics 01:10:55 Innovative Research in Player Movement 01:15:47 Exploring Data Needs in American Football 01:18:43 Building a Sports Analytics Portfolio Thank you to my Patrons for making this episode possible! 
Links from the show: Ron’s website: https://www.stat.cmu.edu/~ryurko/ Ron on LinkedIn: https://www.linkedin.com/in/ron-yurko-stats/ Ron on GitHub: https://github.com/ryurko Ron’s Substack: https://substack.com/@ronyurko Ron on Google Scholar: https://scholar.google.com/citations?user=CBT7NWQAAAAJ&hl=en nflWAR – A Reproducible Method for Offensive Player Evaluation in Football: https://arxiv.org/abs/1802.00998 Going Deep – Models for Continuous-Time Within-Play Valuation of Game Outcomes in American Football with Tracking Data: https://arxiv.org/abs/1906.01760 A Bayesian circular mixed-effects model for explaining variability in directional movement in American football: https://arxiv.org/abs/2507.06122 LBS #42 – How to Teach and Learn Bayesian Stats, with Mine Dogucu: https://learnbayesstats.com/episode/42-teach-bayesian-stats-mine-dogucu Unveiling True Talent – The Soccer Factor Model for Skill: https://github.com/AlexAndorra/football-modeling/tree/main/01_SFM LBS #108 – Modeling Sports & Extracting Player Values, with Paul Sabin: https://learnbayesstats.com/episode/108-modeling-sports-extracting-player-values-paul-sabin CMU Sport Analytics Conference: https://www.stat.cmu.edu/cmsac/conference Transcript This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
BITESIZE | Is Bayesian Optimization the Answer? (25:13)
Today’s clip is from episode 139 of the podcast, with Max Balandat. Alex and Max discuss the integration of BoTorch with PyTorch, exploring its applications in Bayesian optimization and Gaussian processes. They highlight the advantages of using GPyTorch for structured matrices and the flexibility it offers for research. The discussion also covers the motivations behind building BoTorch, the importance of open-source culture at Meta, and the role of PyTorch in modern machine learning. Get the full discussion here . Attend Alex's tutorial at PyData Berlin: A Beginner's Guide to State Space Modeling Intro to Bayes Course (first 2 lessons free) Advanced Regression Course (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work ! Visit our Patreon page to unlock exclusive Bayesian swag ;) Transcript This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
#139 Efficient Bayesian Optimization in PyTorch, with Max Balandat (1:25:23)
Proudly sponsored by PyMC Labs , the Bayesian Consultancy. Book a call , or get in touch ! Intro to Bayes Course (first 2 lessons free) Advanced Regression Course (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work ! Visit our Patreon page to unlock exclusive Bayesian swag ;) Takeaways: BoTorch is designed for researchers who want flexibility in Bayesian optimization. The integration of BoTorch with PyTorch allows for differentiable programming. Scalability at Meta involves careful software engineering practices and testing. Open-source contributions enhance the development and community engagement of BoTorch. LLMs can help incorporate human knowledge into optimization processes. Max emphasizes the importance of clear communication of uncertainty to stakeholders. The role of a researcher in industry is often more application-focused than in academia. Max's team at Meta works on adaptive experimentation and Bayesian optimization. Chapters : 08:51 Understanding BoTorch 12:12 Use Cases and Flexibility of BoTorch 15:02 Integration with PyTorch and GPyTorch 17:57 Practical Applications of BoTorch 20:50 Open Source Culture at Meta and BoTorch's Development 43:10 The Power of Open Source Collaboration 47:49 Scalability Challenges at Meta 51:02 Balancing Depth and Breadth in Problem Solving 55:08 Communicating Uncertainty to Stakeholders 01:00:53 Learning from Missteps in Research 01:05:06 Integrating External Contributions into BoTorch 01:08:00 The Future of Optimization with LLMs Thank you to my Patrons for making this episode possible! 
Links from the show: Max on Linkedin: https://www.linkedin.com/in/maximilian-balandat-b5843946/ Max on GitHub: https://github.com/Balandat BoTorch – Bayesian Optimization in PyTorch: https://botorch.org/ BoTorch – A Framework for Efficient Monte-Carlo Bayesian Optimization: https://arxiv.org/pdf/1910.06403 Ax – A higher level, user-friendly black-box optimization tool that heavily leverages BoTorch: https://ax.dev/ Ax – A Platform for Adaptive Experimentation: https://openreview.net/forum?id=U1f6wHtG1g#discussion PyTorch: https://docs.pytorch.org/docs/stable/index.html GPyTorch: https://gpytorch.ai/ Transcript This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.…
BITESIZE | What's Missing in Bayesian Deep Learning? (20:34)
Today’s clip is from episode 138 of the podcast, with Mélodie Monod, François-Xavier Briol and Yingzhen Li. During this live show at Imperial College London, Alex and his guests delve into the complexities and advancements in Bayesian deep learning, focusing on uncertainty quantification, the integration of machine learning tools, and the challenges faced in simulation-based inference. The speakers discuss their current projects, the evolution of Bayesian models, and the need for better computational tools in the field. Get the full discussion here . Attend Alex's tutorial at PyData Berlin: A Beginner's Guide to State Space Modeling Intro to Bayes Course (first 2 lessons free) Advanced Regression Course (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work ! Visit our Patreon page to unlock exclusive Bayesian swag ;) Transcript This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.…
#138 Quantifying Uncertainty in Bayesian Deep Learning, Live from Imperial College London (1:23:10)
Proudly sponsored by PyMC Labs , the Bayesian Consultancy. Book a call , or get in touch ! Intro to Bayes Course (first 2 lessons free) Advanced Regression Course (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work ! Visit our Patreon page to unlock exclusive Bayesian swag ;) Takeaways: Bayesian deep learning is a growing field with many challenges. Current research focuses on applying Bayesian methods to neural networks. Diffusion methods are emerging as a new approach for uncertainty quantification. The integration of machine learning tools into Bayesian models is a key area of research. The complexity of Bayesian neural networks poses significant computational challenges. Future research will focus on improving methods for uncertainty quantification. Generalized Bayesian inference offers a more robust approach to uncertainty. Uncertainty quantification is crucial in fields like medicine and epidemiology. Detecting out-of-distribution examples is essential for model reliability. Exploration-exploitation trade-off is vital in reinforcement learning. Marginal likelihood can be misleading for model selection. The integration of Bayesian methods in LLMs presents unique challenges. Chapters: 00:00 Introduction to Bayesian Deep Learning 03:12 Panelist Introductions and Backgrounds 10:37 Current Research and Challenges in Bayesian Deep Learning 18:04 Contrasting Approaches: Bayesian vs. 
Machine Learning 26:09 Tools and Techniques for Bayesian Deep Learning 31:18 Innovative Methods in Uncertainty Quantification 36:23 Generalized Bayesian Inference and Its Implications 41:38 Robust Bayesian Inference and Gaussian Processes 44:24 Software Development in Bayesian Statistics 46:51 Understanding Uncertainty in Language Models 50:03 Hallucinations in Language Models 53:48 Bayesian Neural Networks vs Traditional Neural Networks 58:00 Challenges with Likelihood Assumptions 01:01:22 Practical Applications of Uncertainty Quantification 01:04:33 Meta Decision-Making with Uncertainty 01:06:50 Exploring Bayesian Priors in Neural Networks 01:09:17 Model Complexity and Data Signal 01:12:10 Marginal Likelihood and Model Selection 01:15:03 Implementing Bayesian Methods in LLMs 01:19:21 Out-of-Distribution Detection in LLMs Thank you to my Patrons for making this episode possible!
Dr. Mélodie Monod (Imperial College London, School of Public Health) Mélodie completed her PhD as part of the EPSRC Modern Statistics and Statistical Machine Learning program at Imperial College London, transitioned to Novartis as Principal Biostatistician, and is currently a Postdoctoral Researcher in Machine Learning at Imperial. Her research includes diffusion models, Bayesian deep learning, non-parametric Bayesian statistics and pandemic modelling. For more details, see her Google Scholar Publications page. Dr. François-Xavier Briol (University College London, Department of Statistical Science) F-X is Associate Professor in the Department of Statistical Science at University College London , where he leads the Fundamentals of Statistical Machine Learning research group and is co-director of the UCL ELLIS unit . 
His research focuses on developing statistical and machine learning methods for the sciences and engineering, with his recent work focusing on Bayesian computation and robustness to model misspecification. For more details, see his Google Scholar page. Dr. Yingzhen Li (Imperial College London, Department of Computing) Yingzhen is Associate Professor in Machine Learning at the Department of Computing at Imperial College London , following several years at Microsoft Research Cambridge as senior researcher. Her research focuses on building reliable machine learning systems which can generalise to unseen environments, including topics such as (deep) probabilistic graphical model design, fast and accurate (Bayesian) inference/computation techniques, uncertainty quantification for computation and downstream tasks, and robust and adaptive machine learning systems. For more details, see her Google Scholar Publications page. Transcript This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.…
BITESIZE | Practical Applications of Causal AI with LLMs, with Robert Ness (25:28)
Today’s clip is from episode 137 of the podcast, with Robert Ness. Alex and Robert discuss the intersection of causal inference and deep learning, emphasizing the importance of understanding causal concepts in statistical modeling. The discussion also covers the evolution of probabilistic machine learning, the role of inductive biases, and the potential of large language models in causal analysis, highlighting their ability to translate natural language into formal causal queries. Get the full conversation here . Attend Alex's tutorial at PyData Berlin: A Beginner's Guide to State Space Modeling Intro to Bayes Course (first 2 lessons free) Advanced Regression Course (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work ! Visit our Patreon page to unlock exclusive Bayesian swag ;) Transcript This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.…