Selected Topics on Machine Learning (STASC)

Lecturer
Guest lecturer

Details
Lecture, ECTS programme, ECTS credits: 5
Advanced study period (Fachstudium) only; language: English. Lecturer: Sergios Theodoridis (Professor of Signal Processing and Machine Learning in the Department of Informatics and Telecommunications of the University of Athens)
Time and place: Mon 8:15 - 9:45, room 05.025; Tue 14:15 - 15:45, room 05.025; Wed 16:15 - 17:45, room 05.025; Thu 10:15 - 11:45, room 05.025; Fri 12:15 - 13:45, room 05.025
From 24.6.2019 to 26.7.2019

Degree programmes / fields of study
Elective (WPF) CME-MA, from semester 2 (ECTS credits: 5)
Compulsory (PF) ASC-MA, semester 2 (ECTS credits: 5)

Contents
Introduction: what machine learning is, with some typical examples

Learning in Parametric Modelling - Basic Concepts: parametric vs. non-parametric modelling, regression, least squares, classification, supervised vs. unsupervised and semi-supervised learning, biased and unbiased estimation, MSE-optimal estimation, the bias-variance trade-off, inverse problems and overfitting, regularization, the maximum likelihood method, the curse of dimensionality, cross-validation.
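
A minimal sketch of regularized least squares (ridge regression) may help make the regression and regularization items concrete. Everything here (toy data, polynomial degree, ridge parameter) is an illustrative assumption, not material from the course:

# Ridge regression: least squares with an L2 penalty, in plain NumPy.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=30)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(30)  # noisy targets

degree = 9                      # deliberately flexible model, prone to overfitting
Phi = np.vander(x, degree + 1)  # polynomial design matrix
lam = 1e-3                      # ridge parameter; lam = 0 gives plain least squares

# Closed-form ridge solution: w = (Phi^T Phi + lam*I)^(-1) Phi^T y
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(degree + 1), Phi.T @ y)

x_test = np.linspace(0.0, 1.0, 5)
print(np.vander(x_test, degree + 1) @ w)  # predictions at a few test points

Increasing lam shrinks the fitted weights and trades variance for bias, which is the bias-variance trade-off named above; in practice a suitable lam is typically chosen by cross-validation.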

Classification - A Tour of the Classics: Bayes-optimal classification, minimum-distance classifiers, the naïve Bayes classifier, risk-optimal classification, nearest-neighbor classifiers, logistic regression, decision trees.
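
Of these, the nearest-neighbor classifier is the simplest to state in code: assign each new point the label of its closest training point. A minimal sketch; the toy points and labels are made up for illustration:

# 1-nearest-neighbor classification with Euclidean distance.
import numpy as np

X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])

def predict_1nn(x):
    # Label of the training point closest to x.
    d = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(d)]

print(predict_1nn(np.array([0.05, 0.1])))  # -> 0
print(predict_1nn(np.array([0.95, 1.0])))  # -> 1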

Learning in reproducing kernel spaces: the need for nonlinear models, Cover's theorem and the capacity of linear dichotomies, reproducing kernel Hilbert spaces, kernels and the kernel trick, the representer theorem, kernel ridge regression, support vector regression, margin classifiers and support vector machines.
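
The representer theorem gives kernel ridge regression a particularly compact form: the solution is f(x) = sum_i alpha_i k(x, x_i) with alpha = (K + lam*I)^(-1) y, where K is the kernel matrix on the training data. A minimal sketch with a Gaussian (RBF) kernel; the kernel width, regularization value, and toy data are illustrative assumptions:

# Kernel ridge regression with an RBF kernel.
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-3.0, 3.0, 40))
y = np.sinc(x) + 0.05 * rng.standard_normal(40)

def rbf(a, b, gamma=2.0):
    # Pairwise Gaussian kernel k(a, b) = exp(-gamma * (a - b)^2).
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

lam = 1e-2
alpha = np.linalg.solve(rbf(x, x) + lam * np.eye(len(x)), y)  # dual weights

x_test = np.linspace(-3.0, 3.0, 7)
print(rbf(x_test, x) @ alpha)  # predictions at the test points

Note that the nonlinearity lives entirely in the kernel function: the code never constructs an explicit feature map, which is the kernel trick in action.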

Bayesian Learning: maximum likelihood, maximum a posteriori, the Bayesian approach, the evidence function and Occam's razor, the Laplace approximation, the exponential family of probability distributions, latent variables and the EM algorithm, linear regression via the EM algorithm, Gaussian mixture models, variational approximation to Bayesian learning, variational Bayes for linear regression, the variational approach to mixture modelling, the relevance vector machine framework, Gaussian processes, nonparametric Bayesian learning: the Chinese restaurant process and the Indian buffet process.
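
The EM algorithm for Gaussian mixture models fits naturally in a few lines: the E-step computes each component's responsibility for each sample, and the M-step re-estimates weights, means, and variances from those responsibilities. A compact sketch for a two-component 1-D mixture; the synthetic data, initial values, and iteration count are illustrative assumptions:

# EM for a two-component 1-D Gaussian mixture.
import numpy as np

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(2.0, 0.8, 300)])

pi = np.array([0.5, 0.5])    # mixing weights
mu = np.array([-1.0, 1.0])   # component means
var = np.array([1.0, 1.0])   # component variances

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate the parameters from the responsibilities.
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print(pi, mu, var)  # should approach the generating parameters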

Neural Networks and deep learning: the perceptron and the perceptron rule, feed-forward neural networks, the backpropagation algorithm, selecting the cost function and the output nonlinearity, vanishing and exploding gradients, the ReLU activation function, pruning networks and the dropout method, universal approximation properties of neural networks, the need for deep architectures (representation, optimization, and generalization properties), convolutional networks, convolution over volumes, 1x1 convolution, Inception and residual networks, recurrent neural networks, adversarial training, transfer learning, generative adversarial networks, capsule modules, deep belief networks, autoencoders.
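
The perceptron rule that opens this block is short enough to state directly in code: whenever a sample is misclassified, nudge the weights toward it. A minimal sketch on linearly separable toy data; the learning rate, epoch count, and data are illustrative assumptions:

# The classical perceptron rule; labels are in {-1, +1}.
import numpy as np

rng = np.random.default_rng(3)
X = np.vstack([rng.normal([-2.0, -2.0], 1.0, (50, 2)),
               rng.normal([2.0, 2.0], 1.0, (50, 2))])
y = np.concatenate([-np.ones(50), np.ones(50)])

w, b, eta = np.zeros(2), 0.0, 0.1
for _ in range(100):                    # epochs
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:      # misclassified -> update
            w += eta * yi * xi
            b += eta * yi

print(np.mean(np.sign(X @ w + b) == y))  # training accuracy, ideally 1.0

For linearly separable data, the perceptron convergence theorem guarantees that this loop stops making updates after finitely many mistakes.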

Additional information
Keywords: Machine Learning
Expected number of participants: 15; maximum number of participants: 20

Used in the following UnivIS modules
Starting semester SS 2019:
Selected Topics in ASC (STASC)

Institution: Lehrstuhl für Multimediakommunikation und Signalverarbeitung (Chair of Multimedia Communications and Signal Processing)