Foundations for Machine Learning

Course with 4/2/0 SWS (lecture/exercise/practical) in SS 2018

Lecturer

  • Yohanes Stefanus

Hours per week (SWS)

  • 4/2/0

Modules

Examination

  • Presentation


The lecture will take place from 11 June until 20 July 2018.

2 credit points can be earned.


Content

The topic of this course is the mathematical foundations of machine learning. We use the term "machine learning" to mean the automated detection of meaningful patterns in data.

Nowadays, machine-learning-based technologies are ubiquitous: digital economic systems, web search engines, anti-spam software, credit and insurance fraud detection software, accident prevention systems, bioinformatics, and more.

This course provides a theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms, such as algorithms appropriate for big data learning. We will start with Valiant's PAC (Probably Approximately Correct) learning model, the ERM (Empirical Risk Minimization) learning rule, the No-Free-Lunch Theorem, and the VC (Vapnik-Chervonenkis) dimension. The course will end with deep learning.
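
As an illustration of the kind of formal statement covered, the ERM rule can be sketched as follows (notation as in Shalev-Shwartz and Ben-David, given here only as a brief sketch): for a training sample S = ((x_1, y_1), \ldots, (x_m, y_m)) and a hypothesis class \mathcal{H}, ERM returns any hypothesis

    \mathrm{ERM}_{\mathcal{H}}(S) \in \operatorname{argmin}_{h \in \mathcal{H}} L_S(h), \qquad L_S(h) = \frac{1}{m}\,\bigl|\{\, i : h(x_i) \neq y_i \,\}\bigr|,

i.e. a hypothesis in \mathcal{H} with minimal empirical (training) error. PAC learnability then asks how large the sample size m must be so that, with high probability over the draw of S, the true error of the returned hypothesis is close to the best achievable within \mathcal{H}.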


Prerequisites

  • Probability Theory
  • Linear Algebra
  • Algorithm Design & Analysis

Literature

  • Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
  • Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.