lukasruff / Deep-SVDD-PyTorch

A PyTorch implementation of the Deep SVDD anomaly detection method
MIT License

Loading custom datasets #14

Open pavanvyn opened 5 years ago

pavanvyn commented 5 years ago

Hello, I am using your code for my project. I have a few queries:

1. My data consists of a training directory with labelled images (train --> class1, class2, .. --> images) and a testing directory of unlabelled images (test --> images). I have not been able to find a decent way to load them into the code. Should I use a csv file to load them, or can it be done directly?

2. How exactly would I have to configure/edit my data module to be the equivalent of mnist.py or cifar10.py (datasets directory)?

3. As of now, I have edited the datasets and networks directories (I built a custom architecture). Am I right in assuming that the modules in base, utils and optim need not be edited in any way? If not, how should I go about doing it?

Any suggestion would be appreciated! Thank you.

jetjodh commented 5 years ago

@pavanvyn Did you find a way to load custom datasets?

pavanvyn commented 5 years ago

Yes, I did. I wrote a program to extract data using a csv file.

jetjodh commented 5 years ago

@pavanvyn Can you share the code?

pavanvyn commented 5 years ago

Sorry for the late reply. I used another GitHub repository to do it: https://github.com/utkuozbulak/pytorch-custom-dataset-examples. Use this.
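The general pattern from that repository is roughly the following (a sketch rather than my exact code; it assumes pandas, PIL and torchvision, and a CSV with an image-path column and a label column):

```python
import pandas as pd
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class CsvImageDataset(Dataset):
    """Loads (image, label) pairs from a CSV listing image paths and integer labels."""

    def __init__(self, csv_path, transform=None):
        # assumed header with two columns, e.g. "Location" and "Label";
        # adjust pd.read_csv (sep=, names=) if your file is laid out differently
        self.samples = pd.read_csv(csv_path)
        self.transform = transform or transforms.ToTensor()

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        row = self.samples.iloc[index]
        img = Image.open(row["Location"]).convert("RGB")
        label = int(row["Label"])
        return self.transform(img), label
```

You can then wrap the resulting dataset in a torch.utils.data.DataLoader as usual.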

JCCVW commented 4 years ago

Hello, can you give me some instructions on how to use my own data? I don't know how to use the link you sent. Thank you very much!

pavanvyn commented 4 years ago

Go through my GitHub repo, https://github.com/pavanvyn/Galaxy-classification, where I have used Lukas Ruff's anomaly detection program with custom datasets loaded via CSV files. My CSV files for extracting the data look like this:

Location                        Label
/full/location/to/image_1.jpg   0
/full/location/to/image_2.jpg   0
/full/location/to/image_3.jpg   1
/full/location/to/image_4.jpg   2
/full/location/to/image_5.jpg   1
/full/location/to/image_6.jpg   0
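To make this the equivalent of mnist.py (your question 2), a wrapper along the lines of the sketch below should work. It is only a sketch: it assumes the repo's TorchvisionDataset base class and the (image, target, index) return convention behave as in datasets/mnist.py, and the file names train.csv / test.csv are placeholders for your own split. Double-check the details against mnist.py and base/torchvision_dataset.py.

```python
import pandas as pd
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

# import path as used by the repo's dataset modules (check mnist.py)
from base.torchvision_dataset import TorchvisionDataset


class CsvADDataset(Dataset):
    """CSV-backed images; __getitem__ also returns the sample index, like MyMNIST in mnist.py."""

    def __init__(self, csv_path, transform=None, target_transform=None):
        self.samples = pd.read_csv(csv_path)  # columns as in the layout above: Location, Label
        self.transform = transform
        self.target_transform = target_transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        row = self.samples.iloc[index]
        img = Image.open(row["Location"]).convert("RGB")
        target = int(row["Label"])
        if self.transform is not None:
            img = self.transform(img)
        if self.target_transform is not None:
            target = self.target_transform(target)
        # the Deep SVDD trainer unpacks (inputs, labels, idx) from every batch
        return img, target, index


class Custom_Dataset(TorchvisionDataset):
    """Custom-data counterpart of MNIST_Dataset in datasets/mnist.py."""

    def __init__(self, root: str, normal_class=0):
        super().__init__(root)

        self.n_classes = 2  # 0: normal, 1: outlier
        self.normal_classes = tuple([normal_class])
        # treat every raw label other than the normal class as an outlier
        # (three raw labels 0, 1, 2 assumed here, matching the CSV layout above)
        self.outlier_classes = [c for c in range(3) if c != normal_class]

        transform = transforms.Compose([transforms.Resize((64, 64)),
                                        transforms.ToTensor()])
        # map raw labels to 0 (normal) / 1 (outlier), as the MNIST/CIFAR-10 modules do
        target_transform = transforms.Lambda(lambda x: int(x in self.outlier_classes))

        # placeholder file names; point these at your own train/test CSVs
        self.train_set = CsvADDataset(root + '/train.csv', transform=transform,
                                      target_transform=target_transform)
        self.test_set = CsvADDataset(root + '/test.csv', transform=transform,
                                     target_transform=target_transform)
```

After that, register the new class in load_dataset() in datasets/main.py and add a matching network in the networks directory so both can be selected from main.py. As far as I can tell, base, utils and optim do not need any changes (your question 3) as long as the dataset returns the (image, target, index) triple.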

Hope this helps.

omid-ghozatlou commented 3 years ago

Hello dear @pavanvyn

How do you split the train and test datasets? As far as I understood, you made a single file called train_test_dataset.csv.

Thank you.

raymondlimw commented 2 years ago

@pavanvyn Can you share the code again?...

ChanganLeo commented 2 years ago

@pavanvyn Could you please share the code again? Thank you.