4.3 Data-driven penalties for optimal calibration of learning algorithms (Sylvain Arlot)


By Université Paris 1 Panthéon-Sorbonne, discovered by Player FM and our community. Copyright is held by the publisher, not Player FM, and audio is streamed directly from their servers.
Learning algorithms usually depend on one or several parameters that need to be chosen carefully. This talk addresses the design of penalties for an optimal choice of such regularization parameters in non-parametric regression.

First, we consider the problem of selecting among several linear estimators, which includes model selection for linear regression, the choice of a regularization parameter in kernel ridge regression or spline smoothing, and the choice of a kernel in multiple kernel learning. We propose a new penalization procedure that first consistently estimates the variance of the noise, building on the concept of minimal penalty previously introduced in the context of model selection. Plugging this variance estimate into Mallows' C_L penalty is then proved to yield an algorithm satisfying an oracle inequality.

Second, when the data are heteroscedastic, dimensionality-based penalties can be shown to be suboptimal for model selection in least-squares regression, so the shape of the penalty itself has to be estimated. Resampling is used to build penalties that are robust to heteroscedasticity, without requiring prior knowledge of the noise level. For instance, V-fold penalization is shown to improve on V-fold cross-validation at a fixed computational cost.
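As a rough illustration of the first procedure, the sketch below is a toy example under stated assumptions, not the talk's actual algorithm: it estimates the noise variance from the slope of the residual sum of squares over large model dimensions (the minimal-penalty/slope idea) and plugs the estimate into a Mallows-type penalty. The cosine basis, the dimension grid, and the cutoff d >= 15 are all illustrative choices.

```python
import numpy as np

# Toy setup: noisy observations of a smooth signal (illustrative choices).
rng = np.random.default_rng(0)
n, sigma = 200, 0.5
x = np.linspace(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + sigma * rng.standard_normal(n)

# Nested family of least-squares fits in cosine bases of dimension d.
dims = np.arange(1, 31)
rss = np.empty(len(dims))
for i, d in enumerate(dims):
    X = np.cos(np.pi * np.outer(x, np.arange(d)))  # columns cos(k*pi*x)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss[i] = np.sum((y - X @ coef) ** 2)

# Minimal-penalty idea: once the bias is negligible, E[RSS(d)] decreases
# by roughly sigma^2 per extra dimension, so the slope of RSS versus d
# over large dimensions gives a variance estimate.
large = dims >= 15  # illustrative cutoff for the "large dimension" regime
slope = np.polyfit(dims[large], rss[large], 1)[0]
sigma2_hat = -slope

# Plug the estimate into a Mallows-type penalty: crit(d) = RSS(d) + 2*sigma^2*d.
crit = rss + 2.0 * sigma2_hat * dims
d_star = int(dims[np.argmin(crit)])
print(sigma2_hat, d_star)
```

With true variance sigma^2 = 0.25 here, sigma2_hat typically lands nearby, and the penalized criterion picks a moderate dimension instead of the largest model, which is the overfitting that an unpenalized RSS criterion would produce.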