2020/2021

***Instructor: Prof. Nicolas VAYATIS

TAs: Marie Garin, Batiste Le Bars

***Emails:

<firstname.name/-at-/ens-paris-saclay.fr>

***Syllabus 2020-2021

***Course schedule and location:

Amphi Lagrange 1Z14, ENS Paris-Saclay (how to get there)

!!Special time and location for Exercise session #2!!

| Date | Time | Instructor | Session | Topics | Files/Readings |
| --- | --- | --- | --- | --- | --- |
| Tuesday October 6 | 11:00-13:00 | N. Vayatis | Lecture #1 | Course introduction and main setup; Chapter 1 - Optimality in statistical learning | Slides |
| Tuesday October 13 | 11:00-13:00 | N. Vayatis | Lecture #2 | Chapter 1 - Optimality in statistical learning | Slides |
| Tuesday October 20 | 11:00-13:00 | TA | Exercise session #1 | Optimal elements and excess risk bounds | Sheet |
| Tuesday November 3 | 11:00-13:00 | N. Vayatis | Lecture #3 | Chapter 2 - Mathematical foundations (I): Probabilistic inequalities, complexity measures | Slides |
| Friday November 6 | 14:00-16:00 | TA | Exercise session #2 | Inequalities, Rademacher complexity, VC dimension | Sheet |
| Tuesday November 10 | 11:00-13:00 | | Partial exam (mandatory) | No documents allowed | |
| Tuesday November 17 | 11:00-13:00 | N. Vayatis | Lecture #4 | Chapter 2 - Mathematical foundations (II): Regularization and stability | Slides |
| Tuesday November 24 | 11:00-13:00 | N. Vayatis | Lecture #5 | Chapter 3 - Consistency of Machine Learning methods (I): Boosting, SVM | Slides |
| Tuesday December 1 | 11:00-13:00 | TA | Exercise session #3 | Consistency and convergence bounds | Sheet |
| Tuesday December 8 | 11:00-13:00 | N. Vayatis | Lecture #6 | Chapter 3 - Consistency of Machine Learning methods (II): Neural networks, Bagging, Random Forests | Slides |
| Tuesday December 15 | 11:00-13:00 | TA | Exercise session #4 | Course wrap-up | Sheet |
| Tuesday January 5 | 11:00-13:00 | | Final exam (mandatory) | Documents allowed | |

***Office hours:

Tuesdays, 1pm-2pm

***Evaluation:

Partial exam on November 10 (11am-1pm) - MANDATORY

Final exam on January 5 (11am-1pm) - MANDATORY

***Grading:

Course grade = max(final, (partial + final)/2)
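In other words, the partial exam can only improve the course grade, never lower it. As a minimal sketch of this rule (the function name and the 0-20 grading scale are assumptions, not part of the official policy):

```python
def course_grade(partial: float, final: float) -> float:
    """Course grade: the better of the final exam alone
    and the average of the partial and final exams."""
    return max(final, (partial + final) / 2)

# A weak partial exam does not pull the grade below the final:
print(course_grade(partial=8, final=14))   # 14.0
# A strong partial exam raises it above the final:
print(course_grade(partial=16, final=14))  # 15.0
```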

*************

Chapter 1 - Optimality in statistical learning:

From information theory to statistical learning

The probabilistic view on numerical data

Optimality and the bias-variance dilemma

Chapter 2 - The mathematical foundations of statistical learning:

Risk minimization

Concentration inequalities

Complexity measures

(Explicit) Regularization

(Stability)

Chapter 3 - Theory: Consistency theorems and error bounds of learning algorithms

Part 1 - SVM, Boosting

Part 2 - Neural Networks, Bagging, Random Forests

(Chapter 4 - Advanced topics)

Disclaimer: these documents were provided by a former student and may contain mistakes.

Final exam 2018 / Partial exam 2018 / Final exam 2017 / Partial exam 2017 / Final exam 2016 / Partial exam 2016

References

- S. Boucheron, O. Bousquet, and G. Lugosi. Theory of Classification: a Survey of Recent Advances. ESAIM: Probability and Statistics, 9:323-375, 2005.
- L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, New York, 1996.
- G. Lugosi. Principles of Nonparametric Learning. Springer, Wien, New York, pp. 1-56, 2002.
- M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. The MIT Press, 2012.
- S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.