Deep Learning Summer School 2015

The Deep Learning Summer School was held in Montreal, Canada, in August 2015. Over the ten-day event, experts in the field gave talks on application areas of deep learning, and demos of autonomous systems were presented. You can download and review the presentations below, organized by daily program.

We have already begun preparations to organize a similar event in our country next summer. We welcome your valuable contributions.


Day 1 – 03 August 2015
Pascal Vincent: Intro to ML
Yoshua Bengio: Theoretical motivations for Representation Learning & Deep Learning
Leon Bottou: Intro to multi-layer nets

Day 2 – 04 August 2015
Hugo Larochelle: Neural nets and backprop
Leon Bottou: Numerical optimization and SGD, Structured problems & reasoning
Hugo Larochelle: Directed Graphical Models and NADE
Intro to Theano

Day 3 – 05 August 2015
Aaron Courville: Intro to undirected graphical models
Honglak Lee: Stacks of RBMs
Pascal Vincent: Denoising and contractive auto-encoders, manifold view

Day 4 – 06 August 2015
Roland Memisevic: Visual features
Honglak Lee: Convolutional networks
Graham Taylor: Learning similarity

Day 5 – 07 August 2015
Chris Manning: NLP 101
Graham Taylor: Modeling human motion, pose estimation and tracking
Chris Manning: NLP / Deep Learning

Day 6 – 08 August 2015
Ruslan Salakhutdinov: Deep Boltzmann Machines
Adam Coates: Speech recognition with deep learning
Ruslan Salakhutdinov: Multi-modal models

Day 7 – 09 August 2015
Ian Goodfellow: Structure of optimization problems
Adam Coates: Systems issues and distributed training
Ian Goodfellow: Adversarial examples

Day 8 – 10 August 2015
Phil Blunsom: From language modeling to machine translation
Richard Socher: Recurrent neural networks
Phil Blunsom: Memory, Reading, and Comprehension

Day 9 – 11 August 2015
Richard Socher: DMN for NLP
Mark Schmidt: Smooth, Finite, and Convex Optimization
Roland Memisevic: Visual Features II

Day 10 – 12 August 2015
Mark Schmidt: Non-Smooth, Non-Finite, and Non-Convex Optimization
Aaron Courville: VAEs and deep generative models for vision
Yoshua Bengio: Generative models from auto-encoders

Click here to download all of the presentations.

Sources:

https://stanfordsailors.wordpress.com

https://sites.google.com/site/deeplearningsummerschool

Paper: Dropout: A Simple Way to Prevent Neural Networks from Overfitting

Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different “thinned” networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
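The mechanism described in the abstract can be illustrated in a few lines. Below is a minimal NumPy sketch (the function names and values are ours, purely illustrative, not from the paper): during training each unit is kept with probability p_keep and zeroed otherwise; at test time the single unthinned network is used, with activations scaled by p_keep to approximate averaging the predictions of all the thinned networks.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the example is reproducible

def dropout_train(h, p_keep):
    # Training: sample a Bernoulli(p_keep) mask and "drop" units by zeroing
    # them, which also removes their outgoing connections for this pass.
    mask = rng.random(h.shape) < p_keep
    return h * mask

def dropout_test(h, p_keep):
    # Test: run the single unthinned network and scale activations by p_keep,
    # approximating the average prediction over all thinned networks.
    return h * p_keep

h = np.array([1.0, 2.0, 3.0, 4.0])
print(dropout_train(h, p_keep=0.5))  # some units randomly zeroed
print(dropout_test(h, p_keep=0.5))   # [0.5 1.  1.5 2. ]
```

Note that the paper describes scaling the weights by p at test time; scaling the layer's activations, as above, has the same effect on the layer's output. In practice, many implementations instead use the "inverted" variant, scaling by 1/p_keep during training so that no change is needed at test time.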