pooyasa opened this issue 3 years ago

Hi, I wanted to check this model out and test it on a dataset of images that I have. Is that currently possible?
Regards.
Hi,
Theoretically it is possible: if you understand the data structure of the mini-ImageNet dataset, you can prepare your data in a mini-ImageNet-like format. We haven't tried such an experiment ourselves, though, so we can't speak to its performance. If you encounter any problems, feel free to contact us.
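Concretely, that means writing your images into whatever layout the dataloader reads. A rough sketch, assuming the common mini-ImageNet pickle convention of a dict with `image_data` and `class_dict` keys (the exact keys and image size are up to the loader, so verify them against this repo's code):

```python
# Rough sketch: pack an ImageFolder-style directory (root/class_x/img.png)
# into a mini-ImageNet-like pickle. The {'image_data', 'class_dict'} layout
# below is one common mini-ImageNet convention -- check this repo's
# dataloader for the exact keys and image size it actually expects.
import os
import pickle

import numpy as np
from PIL import Image

def folder_to_pickle(root, out_path, size=(84, 84)):
    images, class_dict = [], {}
    for cls in sorted(os.listdir(root)):
        cls_dir = os.path.join(root, cls)
        if not os.path.isdir(cls_dir):
            continue
        class_dict[cls] = []
        for fname in sorted(os.listdir(cls_dir)):
            img = Image.open(os.path.join(cls_dir, fname)).convert('RGB')
            images.append(np.asarray(img.resize(size), dtype=np.uint8))
            class_dict[cls].append(len(images) - 1)  # index into image_data
    with open(out_path, 'wb') as f:
        pickle.dump({'image_data': np.stack(images),
                     'class_dict': class_dict}, f)

folder_to_pickle('my_dataset/train', 'my_dataset_train.pickle')
```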
Yours,
DPGN team
Hi,
Thank you for your reply. I managed to feed an ImageFolder to the model and it works perfectly: I reached 94% accuracy after 11 hours of training on my dataset.

However, my images are 224×224 pixels, and I had to crop them to 100×100 and reduce the batch size to 20 so they would fit in Google Colab's 16 GB GPU. At the full 224×224 I had to reduce the batch size to 5, and the network overfit heavily. I would like to increase both the batch size and the image size, since I believe they have a positive effect on the model's output. What were your environment and hardware specs? At 100×100 with a batch size of 20, GPU usage was around 14.5 GB with the ConvNet backbone. Is it possible to run the model on two RTX 2080 Ti GPUs with 11 GB of memory each?

I have attached the learning curves for both cases: the 100×100, batch-size-20 run, which works well but where I want larger images, and the 224×224, batch-size-5 run, which overfits the training data.
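For reference, the crop-and-batch setup described above is roughly the following with torchvision (paths are placeholders, and the normalization values are just the standard ImageNet statistics I assumed):

```python
# Sketch of the input pipeline described above: 224x224 source images
# center-cropped to 100x100, batch size 20, loaded via ImageFolder.
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.CenterCrop(100),   # 224x224 -> 100x100 to fit in 16 GB
    transforms.ToTensor(),
    # Standard ImageNet statistics (an assumption, not from the repo).
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder('my_dataset/train', transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=20,
                                     shuffle=True, num_workers=2)
```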
Best regards,
Pooya
Hi Pooya,
We used a single 2080 Ti for the 1-shot experiments on ResNet12 and ConvNet, and a V100 or multiple 2080 Tis for the others.
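As for the two-GPU question: the stock PyTorch route is torch.nn.DataParallel, which replicates the model on each device and splits every batch along the first dimension. A minimal sketch with a toy stand-in model (the real DPGN module would go in its place):

```python
import torch
import torch.nn as nn

# Toy stand-in for the real backbone/DPGN module.
model = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(64, 5))

# Replicate on GPUs 0 and 1; each forward splits the batch along dim 0,
# so a batch of 20 becomes 10 per 2080 Ti.
model = nn.DataParallel(model, device_ids=[0, 1]).cuda()

x = torch.randn(20, 3, 100, 100).cuda()  # dummy batch of 100x100 images
logits = model(x)                        # forward runs on both GPUs
```

One caveat: DataParallel splits along the first dimension, so check that this dimension indexes independent episodes in this codebase before relying on it.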
Try pytorch-memonger; we used it in our codebase before.
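If memonger is awkward to integrate, PyTorch also ships gradient checkpointing in torch.utils.checkpoint, which trades extra forward compute for a much smaller activation footprint; a minimal sketch on a toy stack of conv blocks:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

# Toy stand-in for a conv backbone. Checkpointing discards intermediate
# activations in the forward pass and recomputes them during backward.
blocks = [nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
          for _ in range(8)]
model = nn.Sequential(*blocks).cuda()

# requires_grad on the input lets gradients flow through the checkpoints.
x = torch.randn(20, 64, 100, 100, device='cuda', requires_grad=True)
out = checkpoint_sequential(model, 4, x)  # run as 4 checkpointed segments
out.sum().backward()
```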