DSC Capstone2020
From Immersive Visualization Lab Wiki
Revision as of 21:36, 7 October 2020
DSC 180 Capstone Section A01: Explainable AI
- Instructor: Jurgen Schulze
- Discussion: Wednesdays 12-12:50pm on Zoom at https://ucsd.zoom.us/j/9100475160
- Piazza Discussion Board: https://piazza.com/ucsd/fall2020/dsc180
Overview
In this capstone domain, we are going to study how to make machine learning systems more user-friendly by deriving additional knowledge from the system and presenting it to the user. Systems built this way are called Explainable AI.
The example we are going to use in this class is object recognition in images. We will first learn about saliency and attention maps, then get to know a large, publicly available image data set (COCO), and finally implement the Grad-CAM algorithm in PyTorch and apply it to the COCO image data set.
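As a preview of where the quarter is headed, here is a minimal Grad-CAM sketch in PyTorch. Everything here is illustrative: the `TinyCNN` model is a stand-in for a real classifier such as a ResNet, and the layer choice and function names are our own assumptions, not prescriptions from the paper.

```python
# Minimal Grad-CAM sketch (assumes PyTorch is installed).
# TinyCNN is a placeholder model so the example is self-contained.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(16, num_classes)

    def forward(self, x):
        x = self.features(x)               # last conv activations: (N, 16, H, W)
        return self.fc(self.pool(x).flatten(1))

def grad_cam(model, image, target_class):
    acts, grads = [], []
    layer = model.features[-2]             # last convolutional layer
    h1 = layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    logits = model(image)
    model.zero_grad()
    logits[0, target_class].backward()     # gradient of the class score
    h1.remove(); h2.remove()
    # Weight each channel by the spatial mean of its gradient, then ReLU.
    weights = grads[0].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * acts[0]).sum(dim=1))
    return cam / (cam.max() + 1e-8)        # normalize to [0, 1]

model = TinyCNN().eval()
image = torch.randn(1, 3, 32, 32)
heatmap = grad_cam(model, image, target_class=3)  # shape (1, 32, 32)
```

The key idea: the gradient of the class score with respect to the last convolutional feature maps says how much each channel contributed, and the ReLU of the weighted sum of those maps is the class-discriminative heatmap.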
The quarter will end with a proposal for your capstone project, which you will be working on in the winter quarter.
This class will be entirely remote.
Schedule
Week | Date | Discussion Topic | Homework Tasks |
---|---|---|---|
1 | Oct 7 | Overview | None for week 1 |
2 | Oct 14 | Saliency Maps | Literature review |
3 | Oct 21 | Image Data Set | Description of data |
4 | Oct 28 | | Replication Checkpoint #1 due |
5 | Nov 4 | | Description of methods |
6 | Nov 11 | Veterans Day (No Discussion) | Implementation of methods |
7 | Nov 18 | | Replication Checkpoint #2 due |
8 | Nov 25 | Implementation of Grad-CAM | |
9 | Dec 2 | Implementation of Grad-CAM | |
10 | Dec 9 | | Final Replication Report due, Capstone Project proposal due |
Papers
- Paper for replication: Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization
- Paper on COCO image dataset: Microsoft COCO: Common Objects in Context
- Original attention map paper: Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
- Learning Deep Features for Discriminative Localization
- Visualizing CNNs with deconvolution
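The vanilla gradient saliency map from the Simonyan et al. paper above fits in a few lines of PyTorch. This is a hedged sketch: the tiny linear model is only a placeholder so the example runs on its own; in the course you would pass a trained CNN instead.

```python
# Vanilla gradient saliency map (in the spirit of Simonyan et al.).
# The model below is a self-contained stand-in, not a real classifier.
import torch
import torch.nn as nn

def saliency_map(model, image, target_class):
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]   # class score for one image
    score.backward()                        # d(score) / d(pixel)
    # Pixel importance: largest absolute gradient across color channels.
    return image.grad.abs().max(dim=1)[0]

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 5))
image = torch.randn(1, 3, 8, 8)
sal = saliency_map(model, image, target_class=2)  # shape (1, 8, 8)
```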
Useful Links
- Main DSC Capstone Website
- Introduction to how CNNs Work
- Tensorflow Playground
- Grad-CAM implementation in Pytorch
- COCO Image Dataset
- Excitation Backprop: http://cs-people.bu.edu/jmzhang/excitationbp.html
Direct CNN Visualization
- https://github.com/conan7882/CNN-Visualization
- https://medium.com/@awjuliani/visualizing-neural-network-layer-activation-tensorflow-tutorial-d45f8bf7bbc4
- drawNet: http://people.csail.mit.edu/torralba/research/drawCNN/drawNet.html
- https://towardsdatascience.com/how-to-visualize-convolutional-features-in-40-lines-of-code-70b7d87b0030
- https://towardsdatascience.com/understanding-your-convolution-network-with-visualizations-a4883441533b
- http://cs231n.github.io/understanding-cnn/