
Deep Learning - CMPT 880 G1

Spring Semester 2014

Simon Fraser University

Instructor: Oliver Schulte

Course Logistics

Instructor: Oliver Schulte

Office Location: TASC 1 9021.
Office Phone: 778-782-3390.
Office Hours: Wednesday 3pm-4:30 pm.
E-mail Office Hour: Wednesday 4:30-5 pm.


Announcements


Course Information

Syllabus

What Is Deep Learning?

The one-sentence answer is that deep learning is an approach to learning neural nets with many hidden layers, where "many" means about 3-4 hidden layers in current practice. Neural nets are often a good choice for prediction problems where the user is not sure which features should be used to predict a class label. Deep learning places special emphasis on unsupervised learning, which does not assume that class labels are given; it can therefore be viewed as performing unsupervised feature learning. Usually the results of unsupervised learning are then used for supervised learning in a second "fine-tuning" phase.
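The two-phase recipe above (unsupervised feature learning, then supervised learning on the learned features) can be sketched in miniature with plain NumPy. The data, layer sizes, and learning rates below are made up for illustration, and to keep the sketch short the learned features are kept fixed in phase 2 rather than fine-tuned:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data for illustration only: 200 examples, 20 features, a binary label.
X = rng.normal(size=(200, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Phase 1: unsupervised feature learning with a one-hidden-layer autoencoder.
n_hidden = 8
W1 = rng.normal(scale=0.1, size=(20, n_hidden))   # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hidden, 20))   # decoder weights
lr = 0.01
recon_losses = []
for _ in range(300):
    H = sigmoid(X @ W1)            # hidden code = the learned features
    R = H @ W2                     # linear reconstruction of the input
    E = R - X                      # reconstruction error (no labels used)
    recon_losses.append(np.mean(E ** 2))
    dW2 = H.T @ E / len(X)         # gradient of mean squared error w.r.t. W2
    dW1 = X.T @ ((E @ W2.T) * H * (1 - H)) / len(X)
    W1 -= lr * dW1
    W2 -= lr * dW2

# Phase 2: supervised learning (logistic regression) on the learned codes.
H = sigmoid(X @ W1)
w = np.zeros(n_hidden)
b = 0.0
for _ in range(300):
    p = sigmoid(H @ w + b)
    w -= 0.1 * H.T @ (p - y) / len(X)
    b -= 0.1 * np.mean(p - y)

acc = np.mean((sigmoid(H @ w + b) > 0.5) == (y == 1))
print("reconstruction MSE:", recon_losses[0], "->", recon_losses[-1])
print("training accuracy:", acc)
```

In a full deep learning pipeline, phase 1 would be repeated layer by layer and phase 2 would back-propagate through the whole stack; this sketch only shows the division of labour between the two phases.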

For those with sufficient background in machine learning, deep learning can be explained quickly at a high level.

Plan for the Course

The course has three main parts.

  1. We review relevant background from machine learning and statistics. This includes traditional methods for unsupervised feature learning (e.g., principal component analysis, clustering). Depending on the students' background, this will take about 1/3 of the course.
  2. We look in detail at deep learning methods. The main models we examine are restricted Boltzmann machines and deep belief networks. We will follow the code-based deep learning tutorial. The idea is to have running code to go along with our discussion of the different models and learning methods. Depending on student interest, we can discuss general questions about deep learning, for example how it relates to other learning methods like kernel methods.
  3. The last third of the course is geared towards applications, trying out deep learning on real problems. We will focus on material that supports students' final projects.
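As a concrete instance of the "traditional unsupervised feature learning" in part 1, principal component analysis can be sketched in a few lines of NumPy. This is a generic illustration on made-up data, not course code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: a hidden 2-D signal embedded in 10 dimensions, plus a little noise.
Z = rng.normal(size=(100, 2))          # latent factors
A = rng.normal(size=(2, 10))           # embedding into 10-D
X = Z @ A + 0.05 * rng.normal(size=(100, 10))

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)        # fraction of variance per component
X_low = Xc @ Vt[:2].T                  # project onto the top-2 principal components

print("variance explained by top 2 components:", explained[:2].sum())
```

Like the unsupervised phase of deep learning, PCA produces new features (here, the top-2 projections) without ever looking at class labels; the contrast the course draws is that deep methods learn nonlinear, multi-layer features rather than a single linear projection.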

Course Schedule

Additional Readings for Presentations/Projects

I expect that most of the presentations will present part of the code-based tutorial. If you want to go beyond this, here is a list of papers that address additional topics. You may also do your own research into topics you would like to present; for example, you may want to match your presentation topic with your course project.

Please discuss your choice of presentation topic with me as early as possible. You can edit this page to post material relevant to your presentation.

Homeworks

  1. Background Poll
  2. Installing Theano. Review d-separation. Feel free to add installation notes on this page, for the benefit of your fellow students. You can also use the course discussion forum to ask questions or share advice.
  3. Deep Learning Reading Questions. Please post two questions on the discussion forum: one about the IEEE short survey, and one about either the Deep Learning Intro Slides or Geoffrey Hinton's Video Tutorial (see above). These should be questions that you still have after the reading, ideally suitable for class discussion, but questions of the form "I just didn't get what they meant on p.6 by the following..." are okay too. Grading is on a pass-fail basis, with some allowance for extraordinarily high or low effort. Due on Sunday Feb 16 so I can see the questions before the Monday after Reading Break.

Presentation Schedule

Information on possible topics

  1. Jake Bruce, Logistic Regression. Friday Feb 21. Jake's logistic regression with visualization
  2. Kevin Wong, Multilayer Perceptron. Monday Feb 24.
  3. Te Bu, Multilayer perceptron. Monday Feb 24.
  4. Ramtin Mehdizadeh Seraj, Auto Encoders. Wednesday Feb 26.
  5. Jeremy Kawahara, Stacked Denoising Auto-Encoders. Friday Feb 28. Annotated Code.
  6. Zhiwei Deng, Sparse Coding. Friday Feb 28.
  7. Donghuan Lu, Restricted Boltzmann Machine. Monday March 3.
  8. Amanmeet Garg, Restricted and Deep Boltzmann machine. Wednesday March 5.
  9. Aicha Bentaieb, Deep Belief Networks. Friday March 7.
  10. Mihaela Erbiceanu, Deep Belief Networks. Friday March 7.

Project Outline Presentations

  1. Jeremy Kawahara. Wednesday, March 19
  2. Kevin Wong. Friday, March 21
  3. Aïcha Bentaieb. Friday, March 21
  4. Mihaela Erbiceanu. Monday, March 24, 2014
  5. Jake Bruce. Wednesday, March 26, 2014
  6. Amanmeet Garg, Wednesday March 26, 2014. TED talk related to topic: http://www.ted.com/talks/andres_lozano_parkinson_s_depression_and_the_switch_that_might_turn_them_off
  7. Te Bu, Friday March 28, 2014.
  8. Ramtin Mehdizadeh Seraj, Friday March 28, 2014.
  9. Zhiwei Deng, Monday March 31, 2014
  10. Donghuan Lu, Monday March 31, 2014

Final Project Presentations

Thursday April 17, 10:30 am-2:30 pm. TASC 9204 East.

Course Projects

Resources

Deep Learning Website

Books

  • Pattern Recognition and Machine Learning, Chris Bishop, Springer
  • Pattern Classification, Duda, Hart, and Stork, Wiley

Videos

Updated Tue Jan. 06 2015, 22:49 by oschulte.