Closed rayanirban closed 1 month ago
pathlib, urllib.request, zipfile and tqdm: perhaps it would be interesting to explain in 4-5 words what, for example, tqdm is, in case the students don't have experience in coding. I will continue later/tomorrow with the review of the next Chapters. Great job @rayanirban 🥳
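In case it helps for that explanation: tqdm just wraps any iterable and prints a live progress bar, nothing more. A minimal sketch (the file names here are made up):

```python
from tqdm import tqdm
import time

# tqdm wraps any iterable and renders a progress bar as the loop runs
for filename in tqdm(["img_0.tif", "img_1.tif", "img_2.tif"]):
    time.sleep(0.05)  # stand-in for real per-file work (download, unzip, ...)
```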
Excellent job @rayanirban, I am looking forward to helping you with the exercise! I am sure the students will learn a lot and have a lot of fun!
Here is my review of the notebook. I tested this by running the solution.py and the solution.ipynb.

- For this notebook, I didn't see an environment or in-line pip installing of some packages that are used.
- I oftentimes find myself having to add/remove dimensions (i.e. squeeze(), unsqueeze(), arr[np.newaxis,...], etc). Perhaps this can be added after Task 1.5.
- This one is super minor: on Task 1.8 you could use tqdm in that simple for loop to demonstrate it earlier in the notebook. It's a single loop vs two loops.
- I assume in this notebook we aren't using anything fancy to look at the masks (skimage.color.label2rgb is a cool one). It would be nice to show that the instance segmentation values map to some ID number. It could be as simple as adding the viridis colorbar.
- In the "Here we will optionally demonstrate TQDM" portion: maybe the mapping of masks to values by looking at the colorbar from Chapter 1 can go here instead. The only downside is that this is optional.
- Task 4.1: I was testing this on the solution.py file, so I didn't have torchvision. Not sure if Google Colab has it pre-installed. This refers back to my first comment in Chapter 0.

Great job @rayanirban and see you soon!
Hi @rayanirban, the notebook looks great. The students should get a good understanding of the foundations of handling data. The pacing of the tasks is really good too. There are just a few things I would change:

- The visualize function could have a titles argument to make it clearer which image is which.
- The solution uses np.random.randint to select batch indexes, which could give duplicate images in a batch. Students might do the same and, when they look at the solutions, not notice the difference that using random.choice makes. There could be a hint telling them to make sure their batch contains 4 unique images.

See you in Woods Hole!
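To illustrate the difference (a minimal sketch using numpy's Generator API; the notebook's actual variable names will differ):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_images, batch_size = 10, 4

# integers() (like np.random.randint) samples WITH replacement,
# so the same image index can appear more than once in a batch
maybe_dupes = rng.integers(0, n_images, size=batch_size)

# choice() with replace=False guarantees batch_size unique indexes
unique_batch = rng.choice(n_images, size=batch_size, replace=False)
assert len(set(unique_batch.tolist())) == batch_size
```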
@afoix: Thanks! All the changes/suggestions are incorporated.
@edyoshikun: Thanks for the feedback and suggestions. A couple of things:
> For this notebook, I didn't see an environment or in-line pip installing of some packages that are used.

Nothing needs to be installed.
> I oftentimes find myself having to add/remove dimensions (i.e. squeeze(), unsqueeze(), arr[np.newaxis,...], etc). Perhaps this can be added after Task 1.5.

I have added np.newaxis. I am not adding squeeze and unsqueeze or any other thing here, as we have not introduced torch tensors yet.
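For readers following along, a minimal numpy-only sketch of the add/remove-axis idioms mentioned above (torch's squeeze/unsqueeze are the tensor equivalents):

```python
import numpy as np

img = np.zeros((64, 64))           # a 2-D grayscale image of shape (H, W)

# indexing with np.newaxis adds a leading axis of length 1 (no data copy)
batched = img[np.newaxis, ...]
print(batched.shape)               # (1, 64, 64)

# np.squeeze drops all length-1 axes again
print(np.squeeze(batched).shape)   # (64, 64)
```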
> This one is super minor. Just highlighting that on Task 1.8 you could use tqdm in that simple for loop to demonstrate it earlier in the notebook. It's a single loop vs two loops.

I wanted students to focus on the filename printing, and the loop is quite fast, so I avoided adding tqdm here.
> I assume in this notebook we aren't using anything fancy to look at the masks (i.e. skimage.color.label2rgb is a cool one). It would be nice to show that the instance segmentation values map to some ID number. It could be as simple as adding the viridis colorbar.

Yes, we are not using anything fancy. I have written a sentence about this in Task 1.8 on how to interpret the colors in the mask and have added a note in the colorbar exercise in Chapter 5.
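For anyone curious what the colorbar suggestion amounts to, here is a minimal sketch with a toy instance mask (the real masks in the notebook are loaded from disk):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

# toy instance-segmentation mask: 0 = background, 1 and 2 are object IDs
mask = np.zeros((32, 32), dtype=int)
mask[4:12, 4:12] = 1
mask[18:28, 16:30] = 2

plt.imshow(mask, cmap="viridis")
plt.colorbar(label="instance ID")  # makes the ID -> color mapping explicit
plt.savefig("mask_with_colorbar.png")
```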
> Task 4.1. I was testing this on the solution.py file, so I didn't have torchvision. Not sure if Google Colab has it pre-installed. This refers back to my first comment in Chapter 0.

This is not a programming task; the point is to go over to the torchvision website and familiarize yourself with the library.
> Maybe the mapping of masks to values by looking at the colorbar from Chapter 1 can go here instead. The only downside is that this is optional.

See above :)
@Ben-Salmon: Thanks for the feedback. One clarification:
> The visualize function could have a titles argument to make it clearer which image is which.

As we use the visualize function in different cases, titles was not used; I think this is fine.
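For the record, the visualize helper itself isn't shown in this thread; a hypothetical sketch of what an optional titles argument could look like (function name and signature are assumptions, not the notebook's actual code):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

def visualize(images, titles=None):
    """Show images side by side; titles is optional so existing calls keep working."""
    fig, axes = plt.subplots(1, len(images), squeeze=False)
    for i, (ax, img) in enumerate(zip(axes[0], images)):
        ax.imshow(img, cmap="gray")
        ax.axis("off")
        if titles is not None:
            ax.set_title(titles[i])
    return fig

fig = visualize([np.zeros((8, 8)), np.ones((8, 8))], titles=["image", "mask"])
fig.savefig("panel.png")
```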
Once again, thank you all for your feedback :), can't wait to see you all in Woods Hole 👯
All issues addressed in #10
Request to review the notebooks in the main branch inside a Colab environment.