Closed MrChill closed 11 months ago
I'm not sure why you are getting 580 classes. I just reran the code from create_dvm_dataset.ipynb and got 286 classes. I thought maybe they released a new version of the dataset but I've tried it with both versions uploaded to figshare, namely from 04-13-2022 and 01-07-2023, but they both give me the same result.
If you run the notebook exactly as it is, what do you get for the length of populated_codes right after it's filled (it should be the 9th executable cell)?
I've also updated the code in the notebook to handle the weird column names that I had fixed manually in my data.
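The class filter being discussed can be sketched in a few lines. This is a minimal, hypothetical illustration, not the notebook's actual code; the label values and the Counter-based approach are made up for the example, and the real notebook works on the DVM tables.

```python
from collections import Counter

# Hypothetical sketch of the class filter: count images per Genmodel ID
# and keep only models with at least 100 samples. The labels below are
# invented for illustration.
labels = ["1_1"] * 120 + ["1_2"] * 50 + ["2_1"] * 200
counts = Counter(labels)
kept = sorted(model for model, n in counts.items() if n >= 100)
print(len(kept), kept)  # 2 ['1_1', '2_1']
```

If a run of the real notebook yields 580 classes instead of 286, the usual suspects are counting at the wrong granularity (e.g. per model name instead of per Genmodel ID) or filtering before the tables are intersected.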
Thanks @paulhager for your fast reply,
I realized that they merge some car models into a joint class with "Genmodel ID" and that not all "Genmodel IDs" are represented in all tables, right?
Right, I'm interested in those where I also have the other tabular features but the main selection criteria I use is the number of images.
As to your question concerning the image order it shouldn't make a difference because everything is matched by ID.
I would highly recommend you work through the notebook; that should make everything clear.
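To see why the image order doesn't matter, here is a toy, stdlib-only illustration of ID-based matching. The IDs and file names are made up; the point is only that two lists in different orders line up once joined on a shared key.

```python
# Toy illustration: image rows and label rows stored in different
# orders still pair up correctly after matching on a shared ID.
# The "Genmodel ID" keys and file names here are invented.
image_rows = [("1_2", "b.jpg"), ("1_1", "a.jpg")]  # arbitrary order
label_rows = [("1_1", 0), ("1_2", 1)]              # different order

label_by_id = dict(label_rows)
matched = sorted((gid, img, label_by_id[gid]) for gid, img in image_rows)
print(matched)  # [('1_1', 'a.jpg', 0), ('1_2', 'b.jpg', 1)]
```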
Thanks @paulhager,
Questions about dataset generation: I think the notebook is a nice idea and it would be great to mention it in the ReadMe as well. However, there are some issues. I managed to create the train-, val-, and test_images_all_views.pt.
But:
Questions about dataset use: I want to use the precalculated images in train_images_all_views.pt; how exactly do I prepare the config file?
In configs/config.yaml: e.g. dataset: dvm_all_server?
In configs/dataset/dvm_all_server.yaml: do I have to set the path to the data in "data_orig: $PATH"? Do I have to change the data file names, e.g. data_train_imaging: train_paths_all_views_scratch.pt to data_train_imaging: train_images_all_views.pt?
The last cells, which add labels to features, are for a very specific analysis I do in the paper. They were also based on custom data I got directly from the uploaders of DVM, which is why they weren't very reproducible. I've updated them now for the new version of the data they've uploaded, so they should work again. I imagine they will be irrelevant for most people, though.
Exactly, set dataset: dvm_all_server in configs/config.yaml.
No, you just need to set data_base as explained in the ReadMe.
Ah, good catch with the _scratch suffix. I've fixed it now. The rest should fit if you use the notebook, but double-checking never hurts.
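Putting the answers above together, the relevant config changes might look like the fragment below. Only dataset and data_base come from this thread; the example path is a placeholder, and any other keys in the real files stay at their defaults.

```yaml
# configs/config.yaml -- select the DVM dataset config
dataset: dvm_all_server

# configs/dataset/dvm_all_server.yaml -- point data_base at your data
# directory, as described in the ReadMe (path below is a placeholder)
data_base: /path/to/your/dvm/data
```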
Thanks for the support.
However, if I want to load the images from "train_images_all_views.pt" instead of "train_paths_all_views.pt" and skip live loading, how can I enable that?
If I replace "train_paths_all_views.pt" with "train_images_all_views.pt", the process gets killed; if I keep it, the data loader raises an error since I have the raw images on another drive.
BTW: I set datatype: multimodal in config.yaml.
I assume the process is getting killed because of OOM issues?
You either need enough memory to load all images into RAM, in which case you can set live_loading: False, or you need to load the images one by one by using "train_paths_all_views.pt".
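The trade-off between the two modes can be sketched with two tiny classes. These are illustrative stand-ins, not the repo's actual dataset classes, and the fake "images" are plain lists so the example stays self-contained.

```python
# Sketch of the two loading modes (class names are made up, not the
# repo's actual dataset classes).

class EagerImages:
    """live_loading: False -- every preprocessed image sits in RAM,
    so you need enough memory for the whole tensor at once."""
    def __init__(self, images):
        self.images = list(images)

    def __getitem__(self, idx):
        return self.images[idx]

class LazyImages:
    """Path-based loading -- only file paths stay in RAM; each image
    would be read from disk on demand."""
    def __init__(self, paths):
        self.paths = paths

    def __getitem__(self, idx):
        # The real pipeline would open and transform the image file at
        # self.paths[idx] here; this sketch just returns the path.
        return self.paths[idx]

eager = EagerImages([[0.0] * 4 for _ in range(3)])   # fake image tensors
lazy = LazyImages([f"img_{i}.png" for i in range(3)])
print(eager[0], lazy[0])
```

With the eager variant the killed process is exactly what you'd expect when the full image tensor doesn't fit in memory; the lazy variant trades RAM for per-item disk reads.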
Increasing the RAM by a huge amount solved the problem. Thank you for the support!
Hey, I am not sure how the dataset preparation is done. As far as I understand, the labels are the models of the car brands, e.g. Abarth_124 Spider. "Car models with less than 100 samples were removed, resulting in 286 target classes."
However, when I do this step and remove all classes with fewer than 100 images, I get 580 classes. What am I missing?
Is the order of the images relevant, or is it only important that image_list and label_list are in the same order?
Thank you!