DSC 180 Capstone Domain: Explainable AI (Section A01)
- Instructor: Jurgen Schulze
- Discussion: Wednesdays 12-12:50pm on Zoom at https://ucsd.zoom.us/j/9100475160
- Piazza Discussion Board: https://piazza.com/ucsd/fall2020/dsc180
Overview
In this capstone domain we are going to study how we can make machine learning systems more user-friendly by deriving additional knowledge from the system and presenting it to the user. Systems of this type are called Explainable AI.
The example we are going to use in this class is object recognition in images. We will first learn about saliency and attention maps, then get to know a large, publicly available image data set (COCO), and finally implement the Grad-CAM algorithm in PyTorch and apply it to the COCO image data set.
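As a preview of what the replication involves, here is a minimal Grad-CAM sketch in PyTorch. It assumes a pretrained torchvision ResNet-50 and hooks its last convolutional block (layer4); your own implementation may structure this differently and target other models.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(pretrained=True).eval()

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["value"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

# Hook the last convolutional block of ResNet-50 (layer4).
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

def grad_cam(image, class_idx=None):
    """image: (1, 3, H, W) tensor, normalized like ImageNet inputs."""
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    # Per-channel weights = global average of the gradients;
    # heatmap = ReLU of the weighted sum of the activation maps.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)  # (1, C, 1, 1)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze(), class_idx
```

The returned map has the same height and width as the input image and can be overlaid on it as a heatmap to show which regions drove the prediction.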
The quarter will end with a proposal for your capstone project, which you will be working on in the winter quarter.
This class will be entirely remote.
Schedule
Week | Date | Discussion Topic | Replication Tasks
---|---|---|---
1 | Oct 7 | Overview | None for week 1
2 | Oct 14 | Attention Maps | Literature review
3 | Oct 21 | Image Data Set | Description of data
4 | Oct 28 | | Checkpoint #1 due
5 | Nov 4 | | Description of methods
6 | Nov 11 | Veterans Day (No Discussion) | Implementation of methods
7 | Nov 18 | | Checkpoint #2 due
8 | Nov 25 | | Implementation of Grad-CAM
9 | Dec 2 | | Implementation of Grad-CAM
10 | Dec 9 | | Final Report due
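For the week 3 task of getting to know the data set, the sketch below shows one way to load COCO through torchvision's CocoDetection wrapper. The ./coco paths are placeholders for wherever you put the 2017 validation images and annotation files, and pycocotools must be installed.

```python
from torchvision import transforms
from torchvision.datasets import CocoDetection  # requires pycocotools

# Standard ImageNet-style preprocessing so images can later be fed
# to a pretrained classifier.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Placeholder paths: adjust to wherever val2017 and its annotations live.
coco_val = CocoDetection(root="./coco/val2017",
                         annFile="./coco/annotations/instances_val2017.json",
                         transform=transform)

image, annotations = coco_val[0]      # tensor image + list of annotation dicts
print(image.shape, len(annotations))  # e.g. torch.Size([3, 224, 224]), #objects
```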
Papers
- Paper for replication: Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization
- Paper on COCO image dataset: Microsoft COCO: Common Objects in Context
- Learning Deep Features for Discriminative Localization
- Visualizing CNNs with deconvolution
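As a warm-up for the visualization methods covered in the papers above, the sketch below computes a plain gradient saliency map. It assumes a pretrained torchvision ResNet-18 and is a generic example, not the specific method of any one paper.

```python
import torch
from torchvision import models

model = models.resnet18(pretrained=True).eval()

def saliency_map(image):
    """image: (1, 3, H, W) tensor, ImageNet-normalized."""
    image = image.clone().requires_grad_(True)
    logits = model(image)
    class_idx = logits.argmax(dim=1).item()
    logits[0, class_idx].backward()
    # Saliency = largest absolute gradient over the color channels.
    return image.grad.abs().max(dim=1).values.squeeze(0)  # (H, W)
```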
Useful Links
- Main DSC Capstone Website
- Introduction to how CNNs Work
- TensorFlow Playground
- Grad-CAM implementation in PyTorch
- COCO Image Dataset
- http://cs-people.bu.edu/jmzhang/excitationbp.html
Direct CNN Visualization
- https://github.com/conan7882/CNN-Visualization
- https://medium.com/@awjuliani/visualizing-neural-network-layer-activation-tensorflow-tutorial-d45f8bf7bbc4
- drawNet: http://people.csail.mit.edu/torralba/research/drawCNN/drawNet.html
- https://towardsdatascience.com/how-to-visualize-convolutional-features-in-40-lines-of-code-70b7d87b0030
- https://towardsdatascience.com/understanding-your-convolution-network-with-visualizations-a4883441533b
- http://cs231n.github.io/understanding-cnn/