# Information for Topic Presentations

You can present in a group of up to 5 students, or by yourself. I strongly encourage you to present in a group. The length should be about 15-20 minutes followed by questions. A single class meeting can accommodate 4 presentations.

## Presentation Structure

A presentation should address the following questions.

- Motivation: what is the problem the method seeks to address? Why is this an important question to answer?
- Intuition: High-level description of the method/approach. As informal as possible.
- Details: Pseudo-code, instructions on how to use an idea.
- Examples: An example application. Demos and visualizations are helpful if available.

## Advanced Topics for an Optional Presentation

Below you will find my suggestions for topics. I selected topics that are likely to be useful to a number of students. If you want to suggest a topic of your own, please discuss it with me as early as possible.

I provide basic references; you may use additional ones in your presentation if you choose.

- Representational Power of Feedforward Networks
- Prediction with structured inputs: **attention models**. Attention models assign weights to different regions of the input to highlight the regions most important for a task.
  - Explanation of Attention Models. A clear explanation of what attention models are and how to use them. Not many details on how to train them.
  - Attention Models and Module Neural Networks. This paper covers many topics, but it includes a clear explanation of how an attention model can be used for both vision and language tasks.
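The "weights over regions" idea above can be sketched in a few lines. This is a minimal soft-attention sketch; the dot-product scoring function and all input values are assumptions for illustration, not taken from the references:

```python
import numpy as np

def attend(regions, query):
    """Soft attention: weight input regions by relevance to a query.

    regions: (n, d) array of region feature vectors
    query:   (d,) task/query vector
    Returns the attention weights and the weighted summary vector.
    """
    scores = regions @ query      # relevance score per region (dot product)
    scores -= scores.max()        # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum()      # softmax: weights are positive, sum to 1
    summary = weights @ regions   # attention-weighted average of the regions
    return weights, summary

# Example: three regions; the second aligns best with the query,
# so it receives the largest weight.
regions = np.array([[1.0, 0.0], [0.0, 2.0], [0.5, 0.5]])
query = np.array([0.0, 1.0])
w, s = attend(regions, query)
```

The softmax makes the weighting differentiable, which is why such models can be trained end-to-end by gradient descent.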

- Prediction with structured outputs: **generative adversarial models**.
  - Wikipedia Page. The basic idea is to build two neural networks: a generator that produces an output (e.g. an image), and an adversarial classifier that labels inputs (e.g. images) as 'real' or 'generated'. The generator network is trained to minimize the discrimination accuracy of its adversary.
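The two-network objective described above can be made concrete with toy one-dimensional stand-ins: a linear "generator" and a logistic-regression "discriminator". This is only a sketch of the adversarial losses, not a training procedure, and all parameter values are made up for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generator(z, theta):
    a, b = theta
    return a * z + b              # maps noise z to a generated sample

def discriminator(x, phi):
    w, c = phi
    return sigmoid(w * x + c)     # probability that x is 'real'

def gan_losses(real, fake, phi):
    d_real = discriminator(real, phi)
    d_fake = discriminator(fake, phi)
    # The discriminator wants d_real high and d_fake low; written as a
    # loss to minimize, this is the standard cross-entropy objective.
    d_loss = -np.mean(np.log(d_real) + np.log(1.0 - d_fake))
    # The generator wants its samples labelled 'real', i.e. it tries to
    # minimize the discriminator's accuracy on generated data.
    g_loss = -np.mean(np.log(d_fake))
    return d_loss, g_loss

rng = np.random.default_rng(0)
real = rng.normal(3.0, 1.0, size=200)               # 'real' data
fake = generator(rng.normal(size=200), (0.5, 0.0))  # generated data
d_loss, g_loss = gan_losses(real, fake, (1.0, -2.0))
```

Training alternates gradient steps that decrease `d_loss` in the discriminator's parameters and `g_loss` in the generator's parameters.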

- Models for **graph-structured or network data**.
  - Geometric Deep Learning: Going beyond Euclidean data. A nice survey, quite comprehensive for only 24 pages. The level of math is sophisticated. You need to sign in with the SFU library to access it.
  - Graph Convolutional Networks. A nice place to start, with great figures. The basic idea behind graph convolution is to find latent labels for nodes that are regularized so that the label of a node is similar to the labels of its neighbours.
  - Computing Embeddings for Heterogeneous Networks. A common approach is spectral analysis, mapping each node in a graph to a vector that should identify its type.
  - Relational inductive biases, deep learning, and graph networks. Another recent survey paper that has received a fair amount of attention. The part of interest to us starts at Section 3. It presents a general framework rather than specific methods.
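The neighbour-smoothing intuition behind graph convolution can be sketched as a single propagation step. The symmetric normalization below follows the common Kipf-and-Welling-style rule; the example graph and all matrices are made up for illustration:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution propagation step with ReLU activation.

    A: (n, n) adjacency matrix, X: (n, d) node features, W: (d, k) weights.
    Each node's new representation mixes its own features with its
    neighbours', which is what pushes neighbouring labels to be similar.
    """
    n = A.shape[0]
    A_hat = A + np.eye(n)                     # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Path graph 0-1-2: nodes 0 and 2 are symmetric and get identical outputs.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = np.array([[1., 0.], [0., 1.], [1., 0.]])
W = np.array([[1., 0., 1.], [0., 1., 1.]])
H = gcn_layer(A, X, W)
```

Stacking several such layers lets information propagate across multi-hop neighbourhoods.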

- Giving advice to neural nets: learning with **domain knowledge**.
  - Knowledge-based Artificial Neural Networks (KBANN). **Summary Slides.**
  - Seminal paper. A good introduction and a good starting point for finding more recent references.

- More recurrent architectures
- **Evaluating** a deep learning system.
  - Practical Advice on managing machine learning experiments.
  - Troubling Trends in Machine Learning.
  - On the State of the Art of Evaluation in Neural Language Models. A nice case study on separating the impact of network architecture from the choice of hyperparameters. See also Section 3.2 here.

- Learning **Hyperparameters**.
  - A recent blog post with comments and further references. Opinionated, therefore fun.
  - See also the Stanford course notes and the references in Duda and Hart from the main course page.

- How robust are neural nets? They can be fooled by small changes in the input.
- **Software engineering** issues.
  - Machine Learning: the high-interest credit card of technical debt. Discusses the challenges of using, maintaining, and updating a machine learning system on an ongoing basis.
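The robustness point above (small input changes flipping a prediction) can be illustrated with the fast gradient sign method, using logistic regression as a stand-in "network". All weights and inputs below are made up for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fgsm(x, w, b, y, eps):
    """Fast gradient sign method on a logistic-regression classifier.

    Perturbs x by at most eps per coordinate, in the direction that
    increases the cross-entropy loss for true label y (0 or 1). For
    logistic regression, the loss gradient w.r.t. x is (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# A point classified as class 1 (probability about 0.77)...
w = np.array([2.0, -1.0]); b = 0.0
x = np.array([0.4, -0.4]); y = 1
# ...is flipped to class 0 by a perturbation bounded by 0.5 per coordinate.
x_adv = fgsm(x, w, b, y, eps=0.5)
```

The same sign-of-gradient trick scales to deep networks, where the perturbation can be visually imperceptible.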

- **Interpreting** Neural Networks. This is a big topic; you probably want to do some literature search of your own. Here are some suggestions.
  - Interpretable Deep Models for ICU Outcome Prediction. A concrete example of interpreting the knowledge contained in network weights.
  - Extracting decision trees from trained neural networks. A classic paper on extracting tree-based rules. This is still an area of ongoing research.
  - The Mythos of Model Interpretability. A recent reflection on the concept of interpretability. It also provides a kind of survey.

- Learning Neural Network **Structure**.
  - The Cascade-Correlation Algorithm for learning neural network structure. Other structure-learning methods include Optimal Brain Damage (start from a fully connected network and remove edges) and Tiling (start with a single unit and add others).
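The "remove edges" style of structure learning mentioned above can be sketched as pruning a weight matrix. Note this is a hedged simplification: Optimal Brain Damage ranks edges by a second-order saliency estimate, whereas the sketch below uses plain weight magnitude as the simplest proxy for an edge's importance:

```python
import numpy as np

def prune_smallest(W, fraction):
    """Zero out the given fraction of edges with smallest magnitude.

    A crude stand-in for pruning-based structure learning; ties at the
    threshold may remove slightly more edges than requested.
    """
    k = int(fraction * W.size)
    if k == 0:
        return W.copy()
    threshold = np.sort(np.abs(W), axis=None)[k - 1]
    return np.where(np.abs(W) <= threshold, 0.0, W)

# Prune half the edges of a tiny 2x2 weight matrix: the two small
# weights are removed, the two large ones survive.
W = np.array([[0.1, -2.0],
              [0.05, 1.0]])
W_pruned = prune_smallest(W, 0.5)
```

In practice, pruning alternates with retraining so the remaining weights can compensate for the removed edges.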

Updated Thu Jan. 24 2019, 21:15 by oschulte.