Thesis: Deep Learning Approaches to Problems in Speech Recognition, Computational Chemistry, and Natural Language Text Processing


Info
George E. Dahl
Ph.D. Thesis
2015
University of Toronto

The deep learning approach to machine learning emphasizes high-capacity, scalable models that learn distributed representations of their input. This dissertation demonstrates the efficacy and generality of this approach in a series of diverse case studies in speech recognition, computational chemistry, and natural language processing. Throughout these studies, I extend and modify the neural network models as needed to be more effective for each task.
In the area of speech recognition, I develop a more accurate acoustic model using a deep neural network. This model, which uses rectified linear units and dropout, improves word error rates on a 50-hour broadcast news task. A similar neural network results in a model for molecular activity prediction substantially more effective than production systems used in the pharmaceutical industry. Even though training assays in drug discovery are not typically very large, it is still possible to train very large models by leveraging data from multiple assays in the same model and by using effective regularization schemes. In the area of natural language processing, I first describe a new restricted Boltzmann machine training algorithm suitable for text data. Then, I introduce a new neural network generative model of parsed sentences capable of generating reasonable samples and demonstrate a performance advantage for deeper variants of the model.
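The ingredients the abstract names — rectified linear units, dropout, and sharing one network across multiple assays — can be sketched in a few lines. This is a minimal illustration, not the thesis's actual architecture; the layer sizes, the inverted-dropout variant, and the multi-head output layout are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Rectified linear unit: max(0, x), applied elementwise.
    return np.maximum(0.0, x)

def dropout(x, p, train):
    # Inverted dropout (an assumed variant): zero units with probability p
    # at train time and rescale, so no change is needed at test time.
    if not train or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def forward(x, trunk, heads, p=0.5, train=False):
    # A shared ReLU trunk followed by one linear output head per assay --
    # the multi-task pattern the abstract alludes to.
    h = x
    for W in trunk:
        h = dropout(relu(h @ W), p, train)
    return [h @ W_head for W_head in heads]

# Toy shapes: 32 input features, two hidden layers, three assay heads.
trunk = [0.1 * rng.standard_normal(s) for s in [(32, 64), (64, 64)]]
heads = [0.1 * rng.standard_normal((64, 1)) for _ in range(3)]

x = rng.standard_normal((5, 32))    # 5 hypothetical molecules
outputs = forward(x, trunk, heads)  # one prediction per assay
print([o.shape for o in outputs])   # [(5, 1), (5, 1), (5, 1)]
```

Because the trunk is shared, gradients from every assay shape the same hidden features, which is how small individual assays can still support a large model.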


Paper: Anticipating the Future by Watching Unlabeled Video

In many computer vision applications, machines will need to reason beyond the present and predict the future. This task is challenging because it requires leveraging extensive commonsense knowledge of the world that is difficult to write down. We believe that a promising resource for efficiently obtaining this knowledge is the massive amount of readily available unlabeled video. In this paper, we present a large-scale framework that capitalizes on temporal structure in unlabeled video to learn to anticipate both actions and objects in the future. The key idea behind our approach is that we can train deep networks to predict the visual representation of images in the future. We experimentally validate this idea on two challenging “in the wild” video datasets, and our results suggest that learning with unlabeled videos significantly helps forecast actions and anticipate objects.
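The key idea — regressing from the representation of the current frame to the representation of a future frame — can be illustrated with a toy regressor. Everything here is synthetic and assumed for illustration: the feature vectors, the linear least-squares map standing in for the deep network, and the noise level.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: phi_now[i] is a feature vector for frame t of clip i,
# and phi_future[i] is the representation of a frame some seconds later.
# Here both are synthetic, related by an unknown linear map plus noise.
n, d = 200, 32
phi_now = rng.standard_normal((n, d))
true_map = 0.2 * rng.standard_normal((d, d))
phi_future = phi_now @ true_map + 0.01 * rng.standard_normal((n, d))

# Fit a least-squares regressor as a stand-in for the deep network:
# it learns to predict future representations from present ones.
W, *_ = np.linalg.lstsq(phi_now, phi_future, rcond=None)
pred = phi_now @ W
mse = float(np.mean((pred - phi_future) ** 2))
```

The supervision signal is free: future frames of unlabeled video serve as the regression targets, which is what lets the approach scale to massive uncurated collections.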

Paper: Visualizing Object Detection Features

We introduce algorithms to visualize feature spaces used by object detectors. The tools in this paper allow a human to put on “HOG goggles” and perceive the visual world as a HOG-based object detector sees it. We found that these visualizations allow us to analyze object detection systems in new ways and gain new insight into the detectors’ failures. For example, when we visualized the features for high-scoring false alarms, we discovered that, although they are clearly wrong in image space, they look deceptively similar to true positives in feature space. This result suggests that many of these false alarms are caused by our choice of feature space, and indicates that creating a better learning algorithm or building bigger datasets is unlikely to correct these errors. By visualizing feature spaces, we can gain a more intuitive understanding of our detection systems.
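As background for what a “HOG goggles” view inverts, here is a rough sketch of the histogram-of-oriented-gradients descriptor for a single cell. This is a deliberately simplified, assumption-laden version (unsigned gradients, nine bins, no block normalization or bin interpolation), not the exact descriptor the paper's detectors use.

```python
import numpy as np

def hog_cell(patch, n_bins=9):
    # Orientation histogram for one cell: bin unsigned gradient
    # orientations over [0, 180) degrees, weighted by gradient magnitude.
    gx = np.zeros_like(patch)
    gy = np.zeros_like(patch)
    gx[:, 1:-1] = patch[:, 2:] - patch[:, :-2]   # central differences
    gy[1:-1, :] = patch[2:, :] - patch[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    for b in range(n_bins):
        hist[b] = mag[bins == b].sum()
    return hist

# A horizontal intensity ramp has purely horizontal gradients,
# so all the mass lands in the 0-degree bin.
patch = np.tile(np.arange(8.0), (8, 1))
hist = hog_cell(patch)
```

Because the descriptor keeps only these coarse orientation statistics, very different image patches can map to nearly identical feature vectors — exactly the ambiguity the paper's visualizations expose.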

Paper: Pylearn2: a machine learning research library

Pylearn2 is a machine learning research library. This does not just mean that it is a collection of machine learning algorithms that share a common API; it means that it has been designed for flexibility and extensibility in order to facilitate research projects that involve new or unusual use cases. In this paper we give a brief history of the library, an overview of its basic philosophy, a summary of the library’s architecture, and a description of how the Pylearn2 community functions socially.