DSC180F20W4
Participation Assignment (due October 27th at noon)
Watch the video and read the detailed explanation of the COCO dataset at https://www.immersivelimit.com/tutorials/create-coco-annotations-from-scratch
Answer these questions:
- What does your COCO demo do differently from the default demo? (See the loading sketch below for a starting point.)
- Show a sample output of your demo (a mock-up of the intended output is acceptable).
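The sketch below shows one way to load and inspect COCO annotations in Python. It is a minimal sketch, not the required approach: it assumes pycocotools is installed (pip install pycocotools), and the annotation file path is hypothetical.

    # Minimal sketch: inspect COCO annotations with pycocotools.
    # Assumes `pip install pycocotools`; the annotation path is hypothetical.
    from pycocotools.coco import COCO

    coco = COCO('annotations/instances_val2017.json')

    # Find all images that contain the 'person' category.
    cat_ids = coco.getCatIds(catNms=['person'])
    img_ids = coco.getImgIds(catIds=cat_ids)
    print(len(img_ids), "images contain a person")

    # Load the annotation records for the first such image.
    ann_ids = coco.getAnnIds(imgIds=img_ids[:1], catIds=cat_ids)
    anns = coco.loadAnns(ann_ids)
    print(anns[0]['bbox'])  # COCO boxes are [x, y, width, height]

Querying a different category, or drawing the returned boxes onto the image, is one simple way to make your demo differ from the default.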
Replication Checkpoint 1 (due October 30th at 11:59pm)
Report Portion
Write an introduction including:
- An explanation of the problem being investigated: how can an image classification algorithm be trusted to give correct results?
- A brief explanation of the context of the problem and why it is interesting.
- A description of the type of data for which the method is appropriate: a basic description of the observed data used in the investigation (the COCO dataset) and why it is appropriate for addressing the problem (see the annotation-file sketch after this list).
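When describing the COCO dataset, it may help to summarize how an annotation file is organized. The abridged JSON sketch below shows the three main top-level lists; the ids and values are illustrative only:

    {
      "images": [
        {"id": 1, "file_name": "000000000001.jpg", "width": 640, "height": 480}
      ],
      "annotations": [
        {"id": 1, "image_id": 1, "category_id": 18,
         "bbox": [100.0, 50.0, 200.0, 150.0],
         "area": 30000.0, "iscrowd": 0,
         "segmentation": [[100.0, 50.0, 300.0, 50.0, 300.0, 200.0, 100.0, 200.0]]}
      ],
      "categories": [
        {"id": 18, "name": "dog", "supercategory": "animal"}
      ]
    }

Each annotation links an image to a category by id, which is what lets a demo query images by category name.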
Code Portion
- Modify the COCO demo to create your own demo that differs from the default.
- Your code should be turned in via GitHub. It should:
  - conform to the template structure discussed in lecture,
  - contain a rudimentary data ingestion pipeline,
  - include documentation in your README.md describing the purpose of the code, its contents, and how to run it,
  - be runnable via the command python run.py data, with a data-params.json file in the config directory that specifies any data-input locations (see the sketch after this list),
  - include the COCO dataset location in your data-params.json.
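A minimal sketch of what the entry point and config could look like, assuming the template layout discussed in lecture; the file paths and parameter names below are hypothetical, not prescribed.

config/data-params.json:

    {
        "ann_path": "data/raw/annotations/instances_val2017.json",
        "img_dir": "data/raw/val2017"
    }

run.py:

    # run.py -- minimal sketch of the `python run.py data` target.
    # Assumes the hypothetical config above; the real template from
    # lecture may differ.
    import sys
    import json

    def get_data(ann_path, img_dir):
        # Hypothetical ingestion step: load the COCO annotation file.
        with open(ann_path) as f:
            anns = json.load(f)
        print("loaded", len(anns['images']), "images and",
              len(anns['annotations']), "annotations; images live in", img_dir)
        return anns

    def main(targets):
        if 'data' in targets:
            with open('config/data-params.json') as f:
                params = json.load(f)
            get_data(**params)

    if __name__ == '__main__':
        main(sys.argv[1:])

Running python run.py data then reads the config and kicks off the ingestion step.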
For both the Report and Code Portion, submit a Canvas comment listing the tasks each group member was responsible for.