I am trying to run this repo with the same data as presented in the paper in order to gain a baseline understanding of this diffusion model. However, I am having trouble setting up the datasets. For the datasets:
Are these the links for the training and test sets?
There are a lot of .tar files within the Testing dataset (if that link is correct). Did you decompress the files, open the individual datasets, and save them as .npy files?
Will the files have to be read and saved to .npy files within these directories?
Do you have a detailed folder structure and file format we need to use in order to get your code working?
Yes, exactly those. However, I think the testing data is now closed or locked, as others have since reported.
Again, yes, exactly: it was more efficient to open each file with a Python library (I can't remember its name, but I import it in my datasets.py file) and then save the processed images as .npy files.
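For the decompression step itself, the standard-library tarfile module should be enough. A minimal sketch (directory names here are assumptions, not the repo's actual layout):

```python
import os
import tarfile

def extract_archives(archive_dir, out_dir):
    """Extract every .tar file in archive_dir into out_dir."""
    os.makedirs(out_dir, exist_ok=True)
    for name in sorted(os.listdir(archive_dir)):
        if name.endswith(".tar"):
            # Open each archive and unpack its members into out_dir.
            with tarfile.open(os.path.join(archive_dir, name)) as tf:
                tf.extractall(out_dir)
```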
If you want to handle this the same way I did, I wrote a directory scraper to loop over all the files and save them, although I don't believe I uploaded that with this repository.
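Since that scraper isn't in the repo, here is a minimal sketch of what such a loop could look like. `load_image` is a hypothetical placeholder for whatever library actually reads the raw image format (the original post doesn't name it), and the file extension is passed in rather than guessed:

```python
import os
import numpy as np

def load_image(path):
    # Hypothetical stub: swap in the real image-reading library for
    # your file format. Here it just returns a dummy array.
    return np.zeros((256, 256), dtype=np.float32)

def scrape_directory(src_dir, dst_dir, extensions):
    """Walk src_dir and save each matching image as a .npy file in dst_dir."""
    os.makedirs(dst_dir, exist_ok=True)
    saved = []
    for root, _dirs, files in os.walk(src_dir):
        for name in files:
            if name.lower().endswith(tuple(extensions)):
                img = load_image(os.path.join(root, name))
                out = os.path.join(dst_dir, os.path.splitext(name)[0] + ".npy")
                np.save(out, img)
                saved.append(out)
    return saved
```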
I've uploaded a screenshot of a minimal case. The datasets.py file should loop through the contents of those directories as the training, test and anomalous data sources.
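To illustrate the loop over those directories, here is a hedged sketch (not the repo's actual datasets.py): it assumes a layout where each data source is a subdirectory of .npy files under one root, with split names like "train", "test", and "anomalous" taken as assumptions rather than confirmed:

```python
import glob
import os
import numpy as np

def load_split(data_root, split):
    """Load every .npy file under data_root/<split> into a list of arrays.

    `split` would be one of the source directories, e.g. "train",
    "test", or "anomalous" (names assumed, not confirmed by the repo).
    """
    pattern = os.path.join(data_root, split, "*.npy")
    return [np.load(p) for p in sorted(glob.glob(pattern))]
```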
If you have any further questions, please let me know