Closed. ameliepeter closed this 3 years ago
Review done @kenghooi-teoh, pretty clear explanations and comments.
@KianYang-Lee here are a few changes that I suggest to improve the explanation of the tasks and parts of the code:
- Change the exercise instructions to the following:
In this exercise, we will do a simple hands-on comparison to assess the relationship between a CNN's complexity and its achieved performance on a given dataset. You may use a different image dataset than the 4-class Weather Image dataset (Cloudy/Rain/Shine/Sunrise) provided in the WeatherDataSetIterator.
To make your comparison meaningful:
- Try models with different architectures, e.g. VGG16, VGG19, and SqueezeNet, with noticeable differences in model size.
- Train each of the 3 models with a common set of hyperparameters (e.g. learning rate, updater, batch size), the same image preprocessing methods, and the same dataset splits.
- Keep a record of evaluation metrics for these 3 models and compare their performance. The comparison should highlight how differences in model architecture (e.g. number of layers, use of skip connections, use of batch normalization) affect the achieved scores, and you may also include other points of discussion that reasonably explain the disparity or similarity of model performance on the given dataset.
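The record could be as simple as a small table printed at the end of the run. A minimal, dependency-free sketch in Java (the `EvalRecord` class name and the zero scores are placeholders of my own; in the exercise the values would come from DL4J's evaluation results after training each model):

```java
import java.util.LinkedHashMap;
import java.util.Locale;
import java.util.Map;

// Sketch of a simple evaluation record for the three models. The scores are
// placeholder zeros to be filled in after training; this class only stores
// and formats them so the side-by-side comparison is easy to read.
public class EvalRecord {
    static String formatRow(String model, double accuracy, double f1) {
        return String.format(Locale.ROOT, "%-12s %8.4f %8.4f", model, accuracy, f1);
    }

    public static void main(String[] args) {
        Map<String, double[]> scores = new LinkedHashMap<>();
        // placeholder {accuracy, f1} values -- fill in your measured metrics
        scores.put("VGG16", new double[] {0.0, 0.0});
        scores.put("VGG19", new double[] {0.0, 0.0});
        scores.put("SqueezeNet", new double[] {0.0, 0.0});

        System.out.println(String.format(Locale.ROOT, "%-12s %8s %8s", "Model", "Acc", "F1"));
        for (Map.Entry<String, double[]> e : scores.entrySet()) {
            System.out.println(formatRow(e.getKey(), e.getValue()[0], e.getValue()[1]));
        }
    }
}
```

Keeping the output in one fixed-width table makes it trivial to paste into the solution folder as a txt file.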
- In line 33, Amelie mentioned that the dataset is stored in the "resources" folder, but the iterator file actually downloads the dataset into the data folder in the home directory instead of the resources folder.
- To make the code in the Main class a little cleaner, you may wrap the 3 chunks of model configuration code into a getModelConfig() method.
- An example comparison can be provided in the solution folder, as a txt file or another readable file format.
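To illustrate the getModelConfig() suggestion, here is a minimal sketch in plain Java. The `ModelConfig` record, the hyperparameter values, and the method shape are illustrative assumptions; in the actual exercise the method would build and return a DL4J model configuration instead:

```java
// Minimal sketch of the suggested refactor: the three inline configuration
// chunks in Main collapse into one getModelConfig() method, so the shared
// hyperparameters are defined once. ModelConfig is a placeholder record.
public class Main {
    record ModelConfig(String architecture, double learningRate, int batchSize) {}

    static ModelConfig getModelConfig(String architecture) {
        // Shared hyperparameters, kept identical so the comparison is fair.
        final double learningRate = 1e-3;
        final int batchSize = 16;
        switch (architecture) {
            case "VGG16":
            case "VGG19":
            case "SqueezeNet":
                return new ModelConfig(architecture, learningRate, batchSize);
            default:
                throw new IllegalArgumentException("Unknown architecture: " + architecture);
        }
    }

    public static void main(String[] args) {
        for (String arch : new String[] {"VGG16", "VGG19", "SqueezeNet"}) {
            System.out.println(getModelConfig(arch));
        }
    }
}
```

Centralizing the configuration this way also guards against the three models accidentally drifting apart in learning rate or batch size.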
I will work on this later, thanks
@KianYang-Lee I pushed some code to get you started in the solutions folder; let's just work on enhancements, e.g. an example answer to the question we asked in the instructions, loss monitoring, etc.
OK, will work on the enhancements by the end of this week.
@kenghooi-teoh feel free to review. Added loss tracking using the UI.
Oops, fixed it.
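On loss tracking: DL4J's UI typically works by attaching a listener that streams the training score per iteration. As a dependency-free illustration of the underlying idea (the `LossTracker` class and its methods are my own placeholder names, not DL4J API), one can record losses and check that they trend downward:

```java
import java.util.ArrayList;
import java.util.List;

// Dependency-free sketch of iteration-level loss tracking. In the actual
// exercise this role is played by DL4J's UI; LossTracker here just records
// scores and reports whether they are trending down over training.
public class LossTracker {
    private final List<Double> losses = new ArrayList<>();

    void record(double loss) {
        losses.add(loss);
    }

    // True if the average of the last window is lower than that of the first
    // window, a crude signal that training is actually making progress.
    boolean isImproving(int window) {
        if (losses.size() < 2 * window) {
            return false;
        }
        double first = average(0, window);
        double last = average(losses.size() - window, losses.size());
        return last < first;
    }

    private double average(int from, int to) {
        double sum = 0.0;
        for (int i = from; i < to; i++) {
            sum += losses.get(i);
        }
        return sum / (to - from);
    }

    public static void main(String[] args) {
        LossTracker tracker = new LossTracker();
        // Simulated, steadily decreasing losses standing in for real scores.
        for (int i = 0; i < 20; i++) {
            tracker.record(1.0 / (1 + i));
        }
        System.out.println("improving: " + tracker.isImproving(5));
    }
}
```

A check like this catches a diverging run early, which is exactly what watching the loss curve in the UI is for.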
Description
Added a new image classification exercise involving pre-trained models.
Fixes # (issue)
Tested on
Assign a reviewer to review your code.