Sakiyary opened 11 months ago
Hey. Thank you for bringing this up.
https://drive.google.com/file/d/1p4YcGdnI6DE_bKk4vB-5FWXK6OkF7pMg/view?usp=drive_link
Hopefully this one works for you.
Sorry to reopen the issue.
After downloading the file in its entirety, I was unable to extract the resulting `.zip` file (Google Drive apparently re-compressed the `.tar.gz` into a `.zip` for me).
I got the `.zip` file `imagenet84x84.tar.gz.zip` and tried to unzip it with Bandizip, which failed. I then tried ZIP Extractor online, and that failed too. The file is 6,857,580,408 bytes, and its `sha256sum` is `c4c05a7c6a47c21dd282e350be353663802b4290177222d3d755c60105642a4e`.
Could you check whether the `.zip` file itself is corrupted, or whether packets were lost during my download? Since I have to reach Google through a VPN and the connection is slow, I cannot pinpoint at which step the problem occurred. Thank you for your assistance!
I spent some time completely re-downloading the `.zip` file, but its `sha256sum` is unchanged (`c4c05a7c6a47c21dd282e350be353663802b4290177222d3d755c60105642a4e`), I still cannot unzip it, and extraction again fails at 33%. I believe there is an issue with the file itself.
The Linux `file` command identifies it as "bzip2 compressed data, block size = 900k", so I renamed it from `.tar.gz.zip` to `.tar.bz2` and ran `tar -xjvf` to decompress it. The error message is shown in the following image:
You should be able to find the dataset here
https://huggingface.co/datasets/GATE-engine/mini_imagenet
I'll need to write some code to integrate this into the repository, so give me a day or so and it'll be done.
I just pushed a new branch that refactors the `data.py` file (my code from 5 years ago made my eyes bleed to look at) and switches to Hugging Face datasets for Omniglot and mini_imagenet; I also threw CUB-200 in there for good measure.
If you could kindly try it out and let me know if the results reproduce what's in the papers that would be great.
I'll try my best. Thank you very much!
When I run experiments with the latest code on the Omniglot dataset, I encounter an error.
When using the mini_imagenet dataset, I encounter an error too.
I am unsure whether the dataset format has changed or whether this is related to the PyTorch version; I am currently using Python 3.7.10 and PyTorch 1.4.
Can you try Python 3.10 with the latest PyTorch? That's what I used when developing this, and I didn't get any errors.
I encounter another error when using the mini_imagenet dataset:
It seems the 'val' field is only generated in `datasets_dict` when using the Omniglot dataset.
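In case it's useful, a defensive workaround on the consumer side could look like the sketch below. I'm assuming `datasets_dict` is a plain mapping keyed by split name (I don't have its exact shape in front of me, and the helper name is mine):

```python
def get_val_split(datasets_dict):
    """Prefer an explicit 'val' split; otherwise fall back to common alternatives.

    Some datasets (e.g. mini_imagenet here) may not ship a 'val' split at all,
    in which case 'validation' or 'test' is tried instead.
    """
    if "val" in datasets_dict:
        return datasets_dict["val"]
    return datasets_dict.get("validation", datasets_dict.get("test"))
```

A proper fix in the repo would of course be to generate the 'val' split consistently for every dataset, not just Omniglot.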
Afterward, when I attempted to run experiments using the Omniglot dataset, I encountered errors similar to those before.
The current Python version is 3.10, and the PyTorch version is 2.1.1.
Alright. I believe I fixed the issues you pointed out. Please pull the latest version of https://github.com/AntreasAntoniou/HowToTrainYourMAMLPytorch/tree/refactor+switch-to-hf-datasets
The mini_imagenet Google Drive folder doesn't work anymore.
Could you please update the link so that I can download the dataset?