Deep Learning - CMPT 880 G1
Spring Semester 2014
Simon Fraser University
Instructor: Oliver Schulte
Course Logistics
Office Location: TASC 1 9021.
Office Phone: 778-782-3390.
Office Hours: Wednesday 3:00-4:30 pm.
E-mail Office Hour: Wednesday 4:30-5 pm.
Course Information
What Is Deep Learning?
The one-sentence answer is that deep learning is an approach to learning neural nets with many hidden layers; in current practice, "many" means 3-4. Neural nets are often a good choice for prediction problems where the user is not sure which features should be used to predict a class label. Deep learning places special emphasis on unsupervised learning, which does not assume that class labels are given; hence deep learning can be viewed as performing unsupervised feature learning. Usually the results of unsupervised learning are then used for supervised learning in a second "fine-tuning" phase. A minimal sketch of this two-phase recipe follows.
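To make the two-phase recipe concrete, here is a minimal sketch in plain numpy (not the course tooling): an autoencoder is pretrained on unlabelled data, and a logistic-regression output layer is then fit on its learned features. All data, layer sizes, and learning rates are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 200 examples, 20 features, binary labels (all invented).
X = rng.standard_normal((200, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Phase 1 (unsupervised): train a one-hidden-layer autoencoder with tied
# weights to reconstruct X. The labels y are never used here.
n_hidden = 8
W = 0.1 * rng.standard_normal((20, n_hidden))
for _ in range(500):
    H = sigmoid(X @ W)        # hidden codes (the learned features)
    err = H @ W.T - X         # linear reconstruction error
    # Gradient of the squared reconstruction error w.r.t. the tied weights W.
    grad = X.T @ ((err @ W) * H * (1 - H)) + err.T @ H
    W -= 0.01 * grad / len(X)

# Phase 2 (supervised "fine-tuning"): fit a logistic-regression output layer
# on the pretrained features. A full fine-tuning pass would also update W.
H = sigmoid(X @ W)
w_out = np.zeros(n_hidden)
for _ in range(500):
    p = sigmoid(H @ w_out)
    w_out -= 0.1 * (H.T @ (p - y)) / len(X)

print("training accuracy:", ((sigmoid(H @ w_out) > 0.5) == y).mean())
```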
For those with sufficient background in machine learning, deep learning can be explained quickly at a high level.
- My 2-slide summary
- 1 paragraph summary by Y. Bengio and Y. LeCun.
Plan for the Course
The course has three main parts.
- We review relevant background from machine learning and statistics. This includes traditional methods for unsupervised feature learning, e.g., principal component analysis and clustering; a minimal PCA sketch follows this list. Depending on the students' background, this will take about 1/3 of the course.
- We look in detail at deep learning methods. The main models we examine are restricted Boltzmann machines and deep belief networks; a rough RBM sketch also follows this list. We will follow the code-based deep learning tutorial, the idea being to have running code to go along with our discussion of the different models and learning methods. Depending on student interest, we can discuss general questions about deep learning, for example how it relates to other learning methods such as kernel methods.
- The last third of the course is geared towards applications, trying out deep learning on real problems. We will focus on material that supports students' final projects.
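For part 1, here is a minimal sketch of the traditional unsupervised-feature-learning pipeline mentioned above, using PCA. scikit-learn is assumed to be available, and the data is synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 50))          # synthetic data
y = (X[:, :5].sum(axis=1) > 0).astype(int)  # synthetic labels

# Unsupervised step: learn a 10-dimensional representation from X alone.
features = PCA(n_components=10).fit_transform(X)

# Supervised step: fit a classifier on the learned features.
clf = LogisticRegression().fit(features, y)
print("training accuracy:", clf.score(features, y))
```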
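For part 2, here is a rough numpy sketch of one contrastive-divergence (CD-1) update for a binary restricted Boltzmann machine; the Theano code in the course tutorial is the reference implementation, and the sizes and learning rate here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 3
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)  # visible/hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(v0, lr=0.1):
    """One CD-1 step on a single binary data vector v0; updates W, b_v, b_h."""
    global W, b_v, b_h
    # Positive phase: hidden probabilities given the data.
    ph0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(n_hidden) < ph0).astype(float)  # sample hidden states
    # Negative phase: one Gibbs step back down to the visibles and up again.
    pv1 = sigmoid(h0 @ W.T + b_v)
    ph1 = sigmoid(pv1 @ W + b_h)
    # Approximate gradient: data statistics minus one-step model statistics.
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    b_v += lr * (v0 - pv1)
    b_h += lr * (ph0 - ph1)

# Example: a few updates on one toy binary vector.
v = np.array([1, 0, 1, 1, 0, 0], dtype=float)
for _ in range(10):
    cd1_update(v)
```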
Course Schedule
- Week 1: Introduction. Slides
- Week 2/3: Clustering. Clustering and EM Slides. Hierarchical Clustering Slides
- Week 4: Principal Component Analysis. Slides. IPython Notebook
- Week 5: Neural Networks. Slides
- Week 6: Deep Learning
- Python-based Tutorial Starting Page.
- Short IEEE Survey. Also read one of the following.
- Deep Learning Intro Slides. A beautifully clear and concise presentation. May not be easy to follow without verbal explanations, especially if you don't have a good background in graphical models.
- Hinton's Video Tutorial. If you want to have explanations along with the slides.
- Week 7/8: Deep Learning Tutorial and Demo Presentations.
- Week 9: Deep vs. Shallow Learning.
- Scaling Learning Algorithms towards AI. An in-depth discussion by leading deep learning researchers of how deep learning relates to traditional learning models in general ("shallow learning"), and to kernel methods in particular. It's quite long at 41 pages, so this may be the only paper we read. If there is interest, we can also check out the following.
- The Equivalence of Support Vector Machine and Regularization Neural Networks. Shows a systematic translation between a support vector machine and a neural net with one hidden layer. Lots of math, but concise (only 6 pages). Here's my very rough summary.
- Week 10/11: Project Outline Presentations.
- Week 12: Readings related to presentations. By student choice, we talked about convolutional neural networks. CNN IPython Notebook
- Week 13: No class. Instead we will have a long meeting with project presentations during the exam period.
Additional Readings for Presentations/Projects
I expect that most of the presentations will show a part of the code-based tutorial. If you want to go beyond this, here is a list of papers that address additional topics. You may also do your own research into topics you would like to present; for example, you may want to match your presentation topic with your course project.
Please discuss your choice of presentation topic with me as early as possible. You can edit this page to post material relevant to your presentation.
- Learning Word Meanings
- Sentiment Analysis. Also see demo.
- Learning Scene Labellings
- Generating Generative Models: Structured Model Spaces.
- Hierarchical/multi-level linear models. Learning with hierarchies (concepts, classes, part-whole hierarchies).
- Deep Learning of Concept Hierarchies. See also Tutorial Slides.
Homeworks
- Background Poll
- Installing Theano. Review d-separation. Feel free to add installation notes on this page, for the benefit of your fellow students. You can also use the course discussion forum to ask questions or share advice. A quick check that your installation works is sketched after this list.
- Deep Learning Reading Questions. Please post two questions on the discussion forum: one about the IEEE short survey, and one about one of the following: the Deep Learning Intro Slides, or Geoffrey Hinton's Video Tutorial (see above). These should be questions that you still have after the reading, ideally suitable for class discussion, but questions of the form "I just didn't get what they meant on p. 6 by the following..." are okay too. Grading is on a pass-fail basis, with some allowance for extraordinarily high or low effort. Due on Sunday Feb 16 so I can see the questions before the Monday after Reading Break.
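For the Theano installation homework, a quick sanity check might look like the following; it assumes Theano is already installed and simply compiles and runs a tiny symbolic function.

```python
import theano
import theano.tensor as T

x = T.dscalar('x')             # a symbolic double-precision scalar
f = theano.function([x], x ** 2)
print(f(3.0))                  # expected output: 9.0
print(theano.config.device)    # 'cpu' or 'gpu', depending on your setup
```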
Presentation Schedule
Information on possible topics
- Jake Bruce, Logistic Regression. Friday Feb 21. Jake's logistic regression with visualization
- Kevin Wong, Multilayer Perceptron. Monday Feb 24.
- Te Bu, Multilayer Perceptron. Monday Feb 24.
- Ramtin Mehdizadeh Seraj, Autoencoders. Wednesday Feb 26.
- Jeremy Kawahara, Stacked Denoising Auto-Encoders. Friday Feb 28. Annotated Code.
- Zhiwei Deng, Sparse Coding. Friday Feb 28.
- Donghuan Lu, Restricted Boltzmann Machine. Monday March 3.
- Amanmeet Garg, Restricted and Deep Boltzmann Machines. Wednesday March 5.
- Aïcha Bentaieb, Deep Belief Networks. Friday March 7.
- Mihaela Erbiceanu, Deep Belief Networks. Friday March 7.
Project Outline Presentations
- Jeremy Kawahara. Wednesday, March 19
- Kevin Wong. Friday, March 21
- Aïcha Bentaieb. Friday, March 21
- Mihaela Erbiceanu. Monday, March 24, 2014
- Jake Bruce. Wednesday, March 26, 2014
- Amanmeet Garg, Wednesday March 26, 2014. TED talk related to topic: http://www.ted.com/talks/andres_lozano_parkinson_s_depression_and_the_switch_that_might_turn_them_off
- Te Bu, Friday March 28, 2014.
- Ramtin Mehdizadeh Seraj, Friday March 28, 2014.
- Zhiwei Deng, Monday, March 31, 2014
- Donghuan Lu, Monday, March 31, 2014
Final Project Presentations
Thursday April 17, 10:30 am-2:30 pm. TASC 9204 East.
Resources
Books
- Pattern Recognition and Machine Learning, Chris Bishop, Springer
- Pattern Classification, Duda, Hart, and Stork, Wiley
Videos
- Andrew Ng's online course on deep learning. Incomplete.
- Andrew Ng's Machine Learning Course. Introductory level; covers principal component analysis and neural nets.