jiaowoguanren0615 / RetNet_ViT-RMT-

MIT License
31 stars 1 forks source link

Dataset #1

Open Rabie-H opened 10 months ago

Rabie-H commented 10 months ago

Can you please point me to the dataset used to test this model ? Thank you in advance

jiaowoguanren0615 commented 10 months ago

Hello, the dataset I used contains five kinds of flowers. You can download it here: https://www.kaggle.com/datasets/alxmamaev/flowers-recognition. In addition, in my estimate_model.py script, the predict_single_image function expects an image that I randomly selected from the rose category and renamed rose.jpg. If you want to run prediction on a single image of your own, remember to change the file name when using this function.
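The step described above (randomly picking one image from the rose category and renaming it rose.jpg so predict_single_image can find it) could be scripted roughly like this. This is a hedged sketch, not code from the repository: the helper name `pick_sample_image` and the `roses` folder name are assumptions based on the Kaggle dataset's layout.

```python
import random
import shutil
from pathlib import Path

def pick_sample_image(dataset_root: str, category: str = "roses",
                      out_name: str = "rose.jpg") -> Path:
    """Copy one randomly chosen .jpg from `category` under `dataset_root`
    to `out_name`, mimicking the manual rename described above.
    (Hypothetical helper; predict_single_image itself lives in
    estimate_model.py in the repository.)"""
    category_dir = Path(dataset_root) / category
    images = sorted(category_dir.glob("*.jpg"))
    if not images:
        raise FileNotFoundError(f"no .jpg files under {category_dir}")
    chosen = random.choice(images)
    dest = Path(out_name)
    shutil.copy(chosen, dest)  # leave the dataset itself untouched
    return dest
```

With the Kaggle dataset extracted to `flowers/`, calling `pick_sample_image("flowers")` would drop a `rose.jpg` in the working directory ready for prediction.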


Rabie-H commented 10 months ago

@jiaowoguanren0615 Thank you very much for your fast reply. Can you also share the weights if you have them?

jiaowoguanren0615 commented 10 months ago

> @jiaowoguanren0615 Thank you very much for your fast reply. Can you also share the weights if you have them?

Sorry, I don't have a trained weight file, because I think there is still a lot of room for optimization in this code, especially inside the RMT-Block. My implementation performs many tensor reshape and permute operations, which makes the whole training process very time-consuming. For example, with batch_size=8 on a single Nvidia A10 GPU with amp enabled, one epoch takes 25 minutes (maybe longer). Even so, after only one epoch of training, changing only the model and keeping all other settings (optimizer, learning rate, and other strategies) unchanged, the results are better than ResNet34 and ResNet50 (without loading ImageNet pretrained weights). I have not compared against ResNet101 or ResNet152.
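The cost described above comes largely from the fact that a permute by itself only changes strides, but downstream kernels usually need contiguous memory, which forces a full copy each time. A small NumPy sketch of that effect (shapes are illustrative, not taken from the repository; `np.ascontiguousarray` plays the role of `.contiguous()` in PyTorch):

```python
import numpy as np

# A transpose (like torch.permute) is just a stride change: no data moves yet.
x = np.ones((8, 196, 64), dtype=np.float32)  # (batch, tokens, dim), illustrative
y = x.transpose(0, 2, 1)                     # a non-contiguous view of x
assert not y.flags["C_CONTIGUOUS"]

# Most downstream operations (merging axes, matmul kernels) need contiguous
# memory, so each permute is eventually paid for with a full copy:
z = np.ascontiguousarray(y)                  # explicit copy, like .contiguous()
assert z.flags["C_CONTIGUOUS"]
```

Doing this repeatedly inside every block, as the comment above notes for the RMT-Block, multiplies the memory traffic and is a plausible source of the slow epochs.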