alessandro-maccario / DeepLearning-FoodRecognition

Project on Food Recognition for the course Foundations of Deep Learning of University of Milano-Bicocca

[REFERENCES] #4

Closed alessandro-maccario closed 1 year ago

alessandro-maccario commented 1 year ago

References

IMAGE DATA GENERATOR:
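A minimal sketch of Keras's `ImageDataGenerator` for augmentation; the augmentation values and the synthetic data are assumptions for illustration (a real run would use `flow_from_directory` on the food image folders):

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Illustrative augmentation settings (values are assumptions, tune per dataset)
datagen = ImageDataGenerator(
    rescale=1.0 / 255,       # normalize pixel values to [0, 1]
    rotation_range=20,       # random rotations up to 20 degrees
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
)

# Synthetic stand-in for a batch of RGB images, shape (N, H, W, C)
images = np.random.randint(0, 256, size=(8, 64, 64, 3)).astype("float32")
labels = np.arange(8)

# flow() yields augmented, rescaled batches indefinitely
batch_x, batch_y = next(datagen.flow(images, labels, batch_size=4, shuffle=False))
```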

Visualization CNN:

For the report:

Types of creation of a model in keras
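A short sketch of the two main ways to build a Keras model (a third, subclassing `tf.keras.Model`, is also available); layer sizes here are placeholders:

```python
from tensorflow.keras import Input, layers, models

# 1) Sequential API: a linear stack of layers
seq = models.Sequential([
    Input(shape=(16,)),
    layers.Dense(8, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# 2) Functional API: layers called on tensors, allows non-linear topologies
inp = Input(shape=(16,))
h = layers.Dense(8, activation="relu")(inp)
out = layers.Dense(1, activation="sigmoid")(h)
func = models.Model(inp, out)
```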

USE THIS:

CONVERT IMAGEDATAGENERATOR AS NUMPY ARRAY AND FEED THEM INSIDE GRIDSEARCHCV AS X AND Y:
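A sketch of draining an `ImageDataGenerator` into plain NumPy arrays, here on synthetic data (a real run would use `flow_from_directory`); the resulting `X`, `y` can then be passed to `GridSearchCV(...).fit(X, y)` with a scikit-learn-compatible Keras wrapper such as scikeras's `KerasClassifier`:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Synthetic stand-in for an image dataset
images = np.random.rand(20, 32, 32, 3).astype("float32")
labels = np.random.randint(0, 2, size=20)

datagen = ImageDataGenerator()
gen = datagen.flow(images, labels, batch_size=8, shuffle=False)

# Drain exactly one pass over the data: len(gen) is batches per epoch
xs, ys = [], []
for _ in range(len(gen)):
    bx, by = next(gen)
    xs.append(bx)
    ys.append(by)

X = np.concatenate(xs)  # shape (20, 32, 32, 3) -> GridSearchCV's X
y = np.concatenate(ys)  # shape (20,)           -> GridSearchCV's y
```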

FOLLOWING THEN:

CHECK:

CONVERT IMAGES TO ARRAYS
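A minimal sketch of the single-image conversion; in a real pipeline the PIL image would come from `load_img("path/to/food.jpg")` rather than being built in memory:

```python
import numpy as np
from PIL import Image
from tensorflow.keras.preprocessing.image import img_to_array

# Stand-in for an image loaded from disk
pil_img = Image.fromarray(np.zeros((32, 32, 3), dtype="uint8"))

arr = img_to_array(pil_img)          # float32 array of shape (32, 32, 3)
batch = np.expand_dims(arr, axis=0)  # add batch axis -> (1, 32, 32, 3)
```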

Also look online for other, different CNN architectures, and increase the depth of the architecture to reach at least 10 layers!
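One possible shape for such a deeper network, sketched with the Sequential API; the filter counts, input size, and number of food classes are assumptions to tune, not settled choices:

```python
from tensorflow.keras import Input, layers, models

# Hypothetical deeper CNN (>= 10 layers); sizes are placeholders
model = models.Sequential([
    Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),  # assumed number of food classes
])
```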

Another type of architecture:

Try:

Instead of Max pooling, you can also use fractional pooling:

https://stackoverflow.com/questions/44991470/using-tensorflow-layers-in-keras
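Following the Stack Overflow link above (using raw TensorFlow ops inside Keras), a sketch of fractional max pooling via `tf.nn.fractional_max_pool`; the ~1.44 pooling ratio is an assumption, not a recommended value:

```python
import tensorflow as tf

# tf.nn.fractional_max_pool downsamples by a non-integer factor (here ~1.44)
# and returns (output, row_pooling_sequence, col_pooling_sequence);
# the pooling_ratio's batch and channel entries must stay 1.0.
def frac_pool(x):
    out, _, _ = tf.nn.fractional_max_pool(
        x, pooling_ratio=[1.0, 1.44, 1.44, 1.0], pseudo_random=True
    )
    return out

x = tf.random.normal([1, 32, 32, 3])
pooled = frac_pool(x)
```

Inside a Keras model this could be wrapped as `tf.keras.layers.Lambda(frac_pool)`, in the spirit of the linked answer.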

Check the paper "Efficient Processing of Deep Neural Networks: A Tutorial and Survey", page 10, for reference.

GRIDSEARCH CROSS-VALIDATION (GridSearchCV):

MobileNetV2

USE THIS AS A COMPARISON:

WHICH ARCHITECTURE?

MOBILENET OR RESNET? APPLY BOTH AND COMPARE THE RESULTS OVER AT LEAST 50 EPOCHS!

https://towardsdatascience.com/transfer-learning-using-mobilenet-and-keras-c75daf7ff299

Comparison of MOBILENETV2 and RESNET50:
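As a first, cheap point of comparison before the 50-epoch training runs, the two backbones differ hugely in size. A minimal sketch (using `weights=None` to keep it offline; an actual transfer-learning run would use `weights="imagenet"`):

```python
from tensorflow.keras.applications import MobileNetV2, ResNet50

# weights=None avoids downloading pretrained weights in this sketch
mobilenet = MobileNetV2(input_shape=(224, 224, 3), include_top=False, weights=None)
resnet = ResNet50(input_shape=(224, 224, 3), include_top=False, weights=None)

print(f"MobileNetV2 params: {mobilenet.count_params():,}")
print(f"ResNet50 params:    {resnet.count_params():,}")
```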

Learning rate https://machinelearningmastery.com/understand-the-dynamics-of-learning-rate-on-deep-learning-neural-networks/

The amount that the weights are updated during training is referred to as the step size or the “learning rate.”

Specifically, the learning rate is a configurable hyperparameter used in the training of neural networks that has a small positive value, often in the range between 0.0 and 1.0.

The learning rate controls how quickly the model is adapted to the problem. Smaller learning rates require more training epochs given the smaller changes made to the weights each update, whereas larger learning rates result in rapid changes and require fewer training epochs.

A learning rate that is too large can cause the model to converge too quickly to a suboptimal solution, whereas a learning rate that is too small can cause the process to get stuck.

The challenge of training deep learning neural networks involves carefully selecting the learning rate. It may be the most important hyperparameter for the model.
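In Keras, the learning rate described above is set directly on the optimizer; a minimal sketch, where the tiny model and the 1e-3 value (Adam's common default) are placeholders for a real sweep:

```python
import tensorflow as tf

# A minimal model just to have something to compile
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# The learning rate is passed as a hyperparameter to the optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
model.compile(optimizer=optimizer, loss="sparse_categorical_crossentropy")

lr = float(optimizer.learning_rate.numpy())
```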

USEFUL LINKS:

BEST:

https://towardsdatascience.com/step-by-step-vgg16-implementation-in-keras-for-beginners-a833c686ae6c

https://machinelearningmastery.com/use-pre-trained-vgg-model-classify-objects-photographs/

https://www.learndatasci.com/tutorials/hands-on-transfer-learning-keras/
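The transfer-learning recipe in the links above (frozen convolutional base plus a new classification head) can be sketched as follows; `weights=None` keeps the sketch offline (the tutorials use `weights="imagenet"`), and the head sizes and 10-class output are assumptions:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# weights=None avoids the pretrained-weight download in this sketch
base = VGG16(include_top=False, weights=None, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional base

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(10, activation="softmax"),  # assumed number of food classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```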

TO CHECK: