
Deep Learning Summer School 2015

The Deep Learning Summer School took place in Montreal, Canada, in August 2015. Over the ten-day event, domain experts gave talks on application areas of deep learning, and autonomous-system demos were shown. You can download and review the presentations below, organized by daily program.

We have already started preparations to hold a similar event in our country next summer. We look forward to your valuable contributions.

Day 1 – August 3, 2015
Pascal Vincent: Intro to ML
Yoshua Bengio: Theoretical motivations for Representation Learning & Deep Learning
Leon Bottou: Intro to multi-layer nets

Day 2 – August 4, 2015
Hugo Larochelle: Neural nets and backprop
Leon Bottou: Numerical optimization and SGD, Structured problems & reasoning
Hugo Larochelle: Directed Graphical Models and NADE
Intro to Theano

Day 3 – August 5, 2015
Aaron Courville: Intro to undirected graphical models
Honglak Lee: Stacks of RBMs
Pascal Vincent: Denoising and contractive auto-encoders, manifold view

Day 4 – August 6, 2015
Roland Memisevic: Visual features
Honglak Lee: Convolutional networks
Graham Taylor: Learning similarity

Day 5 – August 7, 2015
Chris Manning: NLP 101
Graham Taylor: Modeling human motion, pose estimation and tracking
Chris Manning: NLP / Deep Learning

Day 6 – August 8, 2015
Ruslan Salakhutdinov: Deep Boltzmann Machines
Adam Coates: Speech recognition with deep learning
Ruslan Salakhutdinov: Multi-modal models

Day 7 – August 9, 2015
Ian Goodfellow: Structure of optimization problems
Adam Coates: Systems issues and distributed training
Ian Goodfellow: Adversarial examples

Day 8 – August 10, 2015
Phil Blunsom: From language modeling to machine translation
Richard Socher: Recurrent neural networks
Phil Blunsom: Memory, Reading, and Comprehension

Day 9 – August 11, 2015
Richard Socher: DMN for NLP
Mark Schmidt: Smooth, Finite, and Convex Optimization
Roland Memisevic: Visual Features II

Day 10 – August 12, 2015
Mark Schmidt: Non-Smooth, Non-Finite, and Non-Convex Optimization
Aaron Courville: VAEs and deep generative models for vision
Yoshua Bengio: Generative models from auto-encoders

Click here to download all presentations.


Thesis: Recursive Deep Learning for Natural Language Processing and Computer Vision

Richard Socher
Ph.D. Thesis
Stanford University

As the amount of unstructured text data that humanity produces, overall and on the Internet, grows, so does the need to intelligently process it and extract different types of knowledge from it. My research goal in this thesis is to develop learning models that can automatically induce representations of human language, in particular its structure and meaning, in order to solve multiple higher-level language tasks.
There has been great progress in delivering natural language processing technologies such as information extraction, sentiment analysis, and grammatical analysis. However, solutions are often based on different machine learning models. My goal is the development of general and scalable algorithms that can jointly solve such tasks and learn the necessary intermediate representations of the linguistic units involved. Furthermore, most standard approaches make strong simplifying assumptions about language and require well-designed feature representations. The models in this thesis address these two shortcomings. They provide effective and general representations for sentences without assuming word order independence, and they deliver state-of-the-art performance with no or few manually designed features.
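The core recursive idea can be illustrated with a minimal sketch (my own toy example, not the thesis's actual models; all names, dimensions, and the random word vectors are illustrative): two child vectors are merged by a shared weight matrix into a parent vector of the same size, applied bottom-up along a parse tree, so phrases and sentences get vectors in the same space as words.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # illustrative embedding dimension

# Shared composition parameters: parent = tanh(W @ [left; right] + b)
W = rng.standard_normal((d, 2 * d)) * 0.1
b = np.zeros(d)

def compose(left, right):
    """Merge two child vectors into one parent vector of the same size."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

# Toy parse tree for "very good movie": ((very good) movie)
vectors = {w: rng.standard_normal(d) for w in ["very", "good", "movie"]}
phrase = compose(vectors["very"], vectors["good"])
sentence = compose(phrase, vectors["movie"])

print(sentence.shape)  # the sentence vector has the same dimension as a word vector
```

Because every node's vector has the same dimension, the same composition function can be reused at every level of the tree, which is what makes the approach recursive.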


Deep Learning for Natural Language Processing

Machine learning is everywhere in today’s NLP, but by and large machine learning amounts to numerical optimization of weights for human-designed representations and features. The goal of deep learning is to explore how computers can take advantage of data to develop features and representations appropriate for complex interpretation tasks. This tutorial aims to cover the basic motivation, ideas, models, and learning algorithms in deep learning for natural language processing. Recently, these methods have been shown to perform very well on various NLP tasks such as language modeling, POS tagging, named entity recognition, sentiment analysis, and paraphrase detection, among others. The most attractive quality of these techniques is that they can perform well without any external hand-designed resources or time-intensive feature engineering. Despite these advantages, many researchers in NLP are not familiar with these methods. Our focus is on insight and understanding, using graphical illustrations and simple, intuitive derivations. The goal of the tutorial is to make the inner workings of these techniques transparent and intuitive, and their results interpretable, rather than black boxes labeled “magic here”.

The first part of the tutorial presents the basics of neural networks, neural word vectors, several simple models based on local windows, and the math and algorithms of training via backpropagation. In this section, applications include language modeling and POS tagging.

In the second section we present recursive neural networks, which can learn structured tree outputs as well as vector representations for phrases and sentences. We cover both the equations and the applications, and show how training can be achieved by a modified version of the backpropagation algorithm introduced before. These modifications allow the algorithm to work on tree structures. Applications include sentiment analysis and paraphrase detection. We also draw connections to recent work in semantic compositionality in vector spaces. The principal goal, again, is to make these methods appear intuitive and interpretable rather than mathematically confusing. By this point in the tutorial, the audience members should have a clear understanding of how to build a deep learning system for word-, sentence-, and document-level tasks.

The last part of the tutorial gives a general overview of the different applications of deep learning in NLP, including bag-of-words models. We will provide a discussion of NLP-oriented issues in modeling, interpretation, representational power, and optimization.
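The window-based setup from the first part of the tutorial can be sketched roughly as follows (a toy illustration with made-up vocabulary, tags, and dimensions, not the tutorial's code): word vectors for a context window are concatenated, passed through one tanh hidden layer and a softmax output, and all parameters, including the word vectors themselves, are updated by backpropagation.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = ["the", "cat", "sat", "on", "mat"]
tags = ["DET", "NOUN", "VERB", "ADP"]
d, h, win = 5, 8, 3  # embedding dim, hidden units, window size (illustrative)

# Parameters: word embeddings, hidden layer, softmax layer
E = rng.standard_normal((len(vocab), d)) * 0.1
W1 = rng.standard_normal((h, win * d)) * 0.1
b1 = np.zeros(h)
W2 = rng.standard_normal((len(tags), h)) * 0.1
b2 = np.zeros(len(tags))

# One toy training example: tag the center word of a window
window = [vocab.index(w) for w in ["the", "cat", "sat"]]
target = tags.index("NOUN")  # label for the center word "cat"

lr = 0.1
for step in range(200):
    # Forward pass
    x = E[window].reshape(-1)              # concatenated window vectors
    z = np.tanh(W1 @ x + b1)               # hidden layer
    scores = W2 @ z + b2
    p = np.exp(scores - scores.max())
    p /= p.sum()                           # softmax probabilities

    # Backward pass: cross-entropy gradient, then plain SGD updates
    dscores = p.copy(); dscores[target] -= 1.0
    dW2 = np.outer(dscores, z); db2 = dscores
    dz = (W2.T @ dscores) * (1 - z ** 2)   # tanh derivative
    dW1 = np.outer(dz, x); db1 = dz
    dx = W1.T @ dz
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
    E[window] -= lr * dx.reshape(win, d)   # the word vectors are trained too

print(tags[int(np.argmax(p))])
```

The point of the sketch is the last update line: the gradient flows all the way back into the embedding matrix, which is how these models learn task-appropriate word representations instead of relying on hand-designed features.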

Part One

Part Two:

Presentation Materials: