HCI: Heidelberg Collaboratory for Image Processing
Ruprecht-Karls-Universität Heidelberg

Master-Pflichtseminar
Deep Learning in Artificial Neural Networks

Fakultät für Physik und Astronomie


Neural network, A.D. 1970

Neural network reading digits, A.D. 1998
(http://yann.lecun.com/exdb/lenet/)

Neural network "dreaming", A.D. 2015
(Google inceptionism)

Given a training set of examples and their true classes, artificial neural networks can automatically learn rules that, given a query, make a (hopefully) correct prediction. Artificial neural networks were among the first machine learning algorithms and, after repeated cycles of hype and bust, have seen enormous development and success in the last three years. They achieve the best results on a number of hard benchmarks and outperform humans on nontrivial tasks. It is unsurprising that the likes of Google, Facebook and Amazon are pouring heaps of money into research departments focusing on neural networks.
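To make the learning-from-examples idea concrete: below is a minimal sketch of a perceptron (one of the seminar topics) adjusting its weights on a tiny labeled training set. The dataset (logical AND) and hyperparameters are illustrative choices, not part of the seminar material.

```python
# Sketch: the perceptron learning rule on a toy training set.
# Dataset, learning rate and epoch count are illustrative assumptions.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights w and bias b with the perceptron update rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred          # 0 if correct, +1 or -1 otherwise
            w[0] += lr * err * x[0]      # nudge weights toward the target
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Training set: inputs and their true class (here: logical AND)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop finds a perfect separator; for non-separable problems one needs the multi-layer networks discussed in the seminar.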

We will first study the basics of machine learning and artificial neural networks, and will then try to build some intuition for how and why the recent "deep" architectures work. To get a feeling for the kind of work we will study, please browse the papers below before opting for this pretty technical seminar.

This seminar is eligible as part of the Specialization in "Computational Physics".

Format

You will give a 40-minute talk and prepare a written summary of your topic.

The seminar is going to be challenging, but will bring you in contact with the forefront of an important research area. The seminar is on Tuesdays from 11:30 to 13:00, starting on October 13th, 2015 in the HCI, Speyerer Strasse 6. To register, send email to Fred Hamprecht. Welcome to the ride :-)

Topics (will be updated later this summer, along with links)

  • Logistic regression and the perceptron
  • Feed-forward neural networks
  • Training and issues: gradient descent / back-propagation, Hessian-free optimization, gradient diffusion, rectified linear units
  • Function counting theorem
  • Regularization strategies: convolutional neural networks, weight decay, etc.
  • Unsupervised pretraining, auto-encoders, denoising auto-encoders
  • Model averaging: DropConnect
  • Tweaks and their rationale: distortions, foveation, rectified linear units
  • Deconvolutional Neural Networks
  • Sum-product networks (1,2,3)
  • Restricted Boltzmann machines
  • Recursive neural networks
  • Long short-term memory
  • Interpreting deep neural networks
  • Dreaming neural networks
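
Several of the topics above revolve around training feed-forward networks by gradient descent with back-propagation. As a rough taste of what that involves, here is a hand-coded training loop for a tiny two-layer network; the architecture, learning rate and XOR data are illustrative assumptions, not seminar material.

```python
import numpy as np

# Sketch: gradient descent with back-propagation on a tiny
# two-layer sigmoid network. All sizes and the learning rate
# are arbitrary illustrative choices.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR data: not linearly separable, so a hidden layer is required
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer
lr = 1.0

losses = []
for step in range(2000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # back-propagation: the chain rule applied layer by layer
    d_out = (out - y) * out * (1 - out)      # error at output pre-activations
    d_h = (d_out @ W2.T) * h * (1 - h)       # error propagated to hidden layer

    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)
```

Constant factors from the squared-error derivative are folded into the learning rate; the point is only to show the chain rule propagating error signals backwards through the layers, which the "Training and issues" topic treats properly.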


Last update: 17.08.2015, 12:37