vturrisi / solo-learn

solo-learn: a library of self-supervised methods for visual representation learning powered by PyTorch Lightning
MIT License

Error due to val_loader is None #245

Closed ramdhan1989 closed 2 years ago

ramdhan1989 commented 2 years ago

Hi there! This is an awesome library and the most complete so far. I would like to train on a custom dataset. I have both an unlabeled and a labeled dataset. The labeled dataset is split into train and validation folders, where each subfolder name is the class name. I use Windows with one GPU. I tried to install DALI, but it ended with the following error:

ERROR: No matching distribution found for nvidia-dali-cuda110

Do you have any idea how to solve this?
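For context, the labeled data is laid out like this (a sketch; the class and file names here are placeholders):

data_dir/
    train/
        class_a/
            img_001.png
            ...
        class_b/
    val/
        class_a/
        class_b/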

Then I tried running without DALI, using the command below:

python main_pretrain.py --backbone resnet18 --data_dir D:/Ramdhan/dynocard/dyno_image_floodfill_after_inv_corr --train_dir dyno_image_floodfill_after_inv_corr --name barlow-custom --project self --method barlow_twins --no_labels --dataset custom --brightness 0.4 --contrast 0.4 --saturation 0.2 --hue 0.1 --optimizer sgd --devices 0 --accelerator gpu --auto_select_gpus --scheduler warmup_cosine --max_epochs 50

It then failed with the error below. I think val_loader is None because of this line: https://github.com/vturrisi/solo-learn/blob/main/main_pretrain.py#:~:text=if%20args.dataset,val_loader%20%3D%20None

Traceback (most recent call last):
  File "main_pretrain.py", line 205, in <module>
    main()
  File "main_pretrain.py", line 201, in main
    trainer.fit(model, train_loader, val_loader, ckpt_path=ckpt_path)
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 740, in fit
    self._call_and_handle_interrupt(
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 685, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 777, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1199, in _run
    self._dispatch()
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1279, in _dispatch
    self.training_type_plugin.start_training(self)
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\pytorch_lightning\plugins\training_type\training_type_plugin.py", line 202, in start_training
    self._results = trainer.run_stage()
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1289, in run_stage
    return self._run_train()
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1319, in _run_train
    self.fit_loop.run()
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\pytorch_lightning\loops\base.py", line 145, in run
    self.advance(*args, **kwargs)
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\pytorch_lightning\loops\fit_loop.py", line 234, in advance
    self.epoch_loop.run(data_fetcher)
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\pytorch_lightning\loops\base.py", line 140, in run
    self.on_run_start(*args, **kwargs)
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\pytorch_lightning\loops\epoch\training_epoch_loop.py", line 141, in on_run_start
    self._dataloader_iter = _update_dataloader_iter(data_fetcher, self.batch_idx + 1)
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\pytorch_lightning\loops\utilities.py", line 121, in _update_dataloader_iter
    dataloader_iter = enumerate(data_fetcher, batch_idx)
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\pytorch_lightning\utilities\fetching.py", line 198, in __iter__
    self._apply_patch()
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\pytorch_lightning\utilities\fetching.py", line 133, in _apply_patch
    apply_to_collections(self.loaders, self.loader_iters, (Iterator, DataLoader), _apply_patch_fn)
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\pytorch_lightning\utilities\fetching.py", line 181, in loader_iters
    loader_iters = self.dataloader_iter.loader_iters
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\pytorch_lightning\trainer\supporters.py", line 537, in loader_iters
    self._loader_iters = self.create_loader_iters(self.loaders)
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\pytorch_lightning\trainer\supporters.py", line 577, in create_loader_iters
    return apply_to_collection(loaders, Iterable, iter, wrong_dtype=(Sequence, Mapping))
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\pytorch_lightning\utilities\apply_func.py", line 96, in apply_to_collection
    return function(data, *args, **kwargs)
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\torch\utils\data\dataloader.py", line 359, in __iter__
    return self._get_iterator()
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\torch\utils\data\dataloader.py", line 305, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\site-packages\torch\utils\data\dataloader.py", line 918, in __init__
    w.start()
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'dataset_with_index.<locals>.DatasetWithIndex'
Epoch 0:   0%|          | 0/5169 [00:00<?, ?it/s]

(SiT) D:\Ramdhan\dynocard\solo-learn-main>Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Users\Owner\Anaconda3\envs\SiT\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

Please advise, thank you.
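The final AttributeError above is the real failure here, and the EOFError in the spawned process is just the downstream symptom. On Windows, PyTorch starts DataLoader workers with the spawn method, which pickles the dataset to send it to each worker process, and a class defined inside a function, like dataset_with_index.<locals>.DatasetWithIndex in this traceback, cannot be pickled. A minimal standalone sketch of the constraint, with hypothetical names, not solo-learn code:

import pickle

def make_dataset():
    # The class's qualified name becomes 'make_dataset.<locals>.LocalDataset',
    # which pickle cannot resolve via import, so spawned DataLoader workers
    # cannot receive instances of it.
    class LocalDataset:
        def __len__(self):
            return 10

        def __getitem__(self, idx):
            return idx

    return LocalDataset()

if __name__ == "__main__":
    dataset = make_dataset()
    try:
        # Windows DataLoader workers pickle the dataset exactly like this:
        pickle.dumps(dataset)
    except (AttributeError, pickle.PicklingError) as err:
        print(err)  # Can't pickle local object 'make_dataset.<locals>.LocalDataset'

A library-independent workaround is to pass --num_workers 0 so the dataset never has to cross a process boundary; defining the dataset class at module level instead of inside a function also avoids the error.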

vturrisi commented 2 years ago

Hey @ramdhan1989, thanks for your comments about the library, really appreciate it.

About DALI, I'm almost sure it's not supported on Windows. I would suggest checking their repo just to confirm, but maybe it's on their roadmap for the future.
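If you want your own scripts to degrade gracefully, you can gate the import on the platform (a sketch, not solo-learn code; as far as I know the DALI wheels are published for Linux only at the time of writing):

import platform

HAS_DALI = False
if platform.system() == "Linux":
    try:
        import nvidia.dali  # provided by the nvidia-dali-cuda* wheels
        HAS_DALI = True
    except ImportError:
        pass  # fall back to the regular torchvision data pipeline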

About the second part, I'll look into it in the next few days and get back to you. I'm writing just to let you know that I saw your issue.

ramdhan1989 commented 2 years ago

Thank you for your help. I tried using the cifar10 dataset but ended up with the same error message.

vturrisi commented 2 years ago

Can you share the command you ran for cifar10? I think you are just missing the val_dir parameter.
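For the labeled custom setup that would look something like this (directory names are placeholders, with --val_dir mirroring the existing --train_dir flag):

python main_pretrain.py --dataset custom --data_dir D:/path/to/data --train_dir train --val_dir val ...

with the rest of the method and optimizer flags unchanged.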

ramdhan1989 commented 2 years ago

This is my command:

python main_pretrain.py --dataset cifar10 --backbone resnet18 --data_dir ./datasets --num_workers 4 --precision 16 --optimizer sgd --lars --grad_clip_lars --eta_lars 0.02 --exclude_bias_n_norm --scheduler warmup_cosine --lr 0.3 --weight_decay 1e-4 --batch_size 256 --brightness 0.4 --contrast 0.4 --saturation 0.2 --hue 0.1 --gaussian_prob 0.0 --solarization_prob 0.0 --name barlow-cifar10 --project self-superivsed --wandb --save_checkpoint --method barlow_twins --proj_hidden_dim 2048 --scale_loss 0.1 --devices 0 --accelerator gpu --max_epochs 1000

vturrisi commented 2 years ago

Sorry about the delay. I tried to run your command and it worked just fine. There is some stuff missing, e.g., crop_size, without which the images end up way too large for cifar, but apart from that, I got no errors. Did you try updating the repo?
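For cifar that means adding something like --crop_size 32 to the command, since CIFAR images are natively 32x32 and the default crop targets ImageNet-sized inputs (check main_pretrain.py --help for the exact flag).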

vturrisi commented 2 years ago

Closing because I didn't manage to reproduce. Feel free to reopen if you have any extra info.