dqshuai / MetaFormer

A PyTorch implementation of "MetaFormer: A Unified Meta Framework for Fine-Grained Recognition", and a reference PyTorch implementation of "CoAtNet: Marrying Convolution and Attention for All Data Sizes".

Danish Fungi 2020 - Performance Evaluation #2

Closed · picekl closed this issue 2 years ago

picekl commented 2 years ago

Dear All,

This is more a feature request than a bug; anyway, can you test/utilize your method on the DF20 dataset? We include much more metadata within the dataset, so a performance evaluation against a regular ViT architecture might be interesting.

The links to the paper and website follow:

Best, Lukas

dqshuai commented 2 years ago

I will try it! Thanks. But I can't open the website (https://sites.google.com/view/danish-fungi-datase); what should I do?

picekl commented 2 years ago

@dqshuai My bad! Missing t at the end! :)

Here is the correct link --> https://sites.google.com/view/danish-fungi-dataset

If you need anything, please let me know via email.

PS: We are hosting a Kaggle competition. It might be a good idea to submit; it looks like a winning method! 👍 https://www.kaggle.com/c/fungiclef2022

Best, Lukas

iFighting commented 2 years ago

> Dear All,
>
> This is more a feature request than a bug; anyway, can you test/utilize your method on the DF20 dataset? We include much more metadata within the dataset, so a performance evaluation against a regular ViT architecture might be interesting.
>
> The links to the paper and website follow:
>
> Best, Lukas

Thanks for your attention to our work; we will try it. We have also released our pretrained models, in case you want to try them on other datasets.
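
In case it helps, here is a minimal sketch of loading one of the released checkpoints into a PyTorch model before fine-tuning on another dataset. The checkpoint path and the assumed saved format are placeholders, not the repo's documented API:

```python
import torch

def load_pretrained(model: torch.nn.Module, ckpt_path: str) -> torch.nn.Module:
    """Load pretrained weights, tolerating a mismatched classifier head."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    # Checkpoints are commonly saved either as {"model": state_dict} or as a
    # raw state_dict; handle both (an assumption, not the repo's exact format).
    state_dict = ckpt["model"] if isinstance(ckpt, dict) and "model" in ckpt else ckpt
    # strict=False skips keys that don't match, e.g. a classification head
    # sized for a different number of target classes.
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")
    return model
```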

dqshuai commented 2 years ago

@picekl Hi, I find that the redirect URL of "Test Images (max side size 300px) [~2.5Gb]" is the same as "Test Images [~50Gb]"; both files are 50 GB. I'm guessing one of the links is wrong.

picekl commented 2 years ago

@dqshuai Good Point! It's fixed now.

FYI - The labels for the FungiCLEF 2022 test data won't be available for a while. I would recommend splitting the train set into train/val if you want to do some hyperparameter tuning, and testing on the validation split, which acts more like a public test set (similar to validating on ImageNet-1k).
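
For example, a minimal sketch of such a split over the DF20 train metadata (the CSV filename and the class_id column name are assumptions; adjust them to the actual headers shipped with the dataset):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Filename and column name are placeholders for the DF20 train metadata CSV.
df = pd.read_csv("DF20-train_metadata.csv")

train_df, val_df = train_test_split(
    df,
    test_size=0.1,            # hold out 10% as a local "public test" set
    stratify=df["class_id"],  # keep the class distribution comparable
    random_state=42,
)

train_df.to_csv("df20_train_split.csv", index=False)
val_df.to_csv("df20_val_split.csv", index=False)
```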