Closed EnergeticChubby closed 4 months ago
I could not find CONFIG.dataset_args['root'].
Additionally, I tried to run this project in a new environment, but after I installed the requirements it failed.
Thanks!
Hi!
Did you run the code by editing the example launch scripts provided in the ./launch_scripts/
folder of the repository?
CONFIG.dataset_args['root']
should point either to the folder where you already downloaded the datasets, or to the folder where you want them to be downloaded and stored automatically.
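For example, a minimal sketch of the idea (the values and the plain dict here are hypothetical stand-ins; the real CONFIG object is built by the repository's config code):

```python
from pathlib import Path

# Hypothetical stand-in for CONFIG.dataset_args -- in the repo this is
# populated from the launch script's arguments.
dataset_args = {"root": "data/CIFAR10"}  # assumed example value

root = Path(dataset_args["root"])
root.mkdir(parents=True, exist_ok=True)  # datasets are downloaded/stored here
print(root.as_posix())  # prints: data/CIFAR10
```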
Thanks for your help! I forgot to modify the file in the ./launch_scripts folder. Best wishes!
PS C:*\px-ntk-pruning-main> ./launch_scripts/cifar10.bat
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to data/CIFAR10\cifar-10-python.tar.gz
100%|██████████████████████████████████████████████████████████████████████████████| 170498071/170498071 [05:27<00:00, 520776.98it/s]
Extracting data/CIFAR10\cifar-10-python.tar.gz to data/CIFAR10
Files already downloaded and verified
Traceback (most recent call last):
File "C:*\px-ntk-pruning-main\main.py", line 58, in
self.pruner.score(self.model, self.loss_fn, self.data['train'], CONFIG.device)
File "C:*\px-ntk-pruning-main\px-ntk-pruning-main\lib\pruners.py", line 454, in score
for batch_idx, data_tuple in enumerate(dataloader):
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users*\AppData\Roaming\Python\Python311\site-packages\torch\utils\data\dataloader.py", line 433, in __iter__
self._iterator = self._get_iterator()
^^^^^^^^^^^^^^^^^^^^
File "C:\Users*\AppData\Roaming\Python\Python311\site-packages\torch\utils\data\dataloader.py", line 386, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users*\AppData\Roaming\Python\Python311\site-packages\torch\utils\data\dataloader.py", line 1039, in __init__
w.start()
File "C:\ProgramData\anaconda3\Lib\multiprocessing\process.py", line 121, in start
self._popen = self._Popen(self)
^^^^^^^^^^^^^^^^^
File "C:\ProgramData\anaconda3\Lib\multiprocessing\context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ProgramData\anaconda3\Lib\multiprocessing\context.py", line 336, in _Popen
return Popen(process_obj)
^^^^^^^^^^^^^^^^^^
File "C:\ProgramData\anaconda3\Lib\multiprocessing\popen_spawn_win32.py", line 94, in __init__
reduction.dump(process_obj, to_child)
File "C:\ProgramData\anaconda3\Lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'SeededDataLoader.__init__.
I tried to run it on both my Windows and Linux servers. Unfortunately, it failed in the same way on both. I deeply hope you can help me solve this problem.
Oh, I see. It's a known issue in PyTorch, related to how the operating system implements inter-process communication (spawned worker processes need their arguments to be picklable). A short-term fix is either to set --num_workers=0
in the config, or to modify line 45 of the datasets/utils.py
file as:
worker_init_fn = None # previously it was set to seed_worker
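For the curious, here is a standalone illustration (not the repo's actual code; all names below are made up) of why this error appears: worker_init_fn must be pickled to be sent to each spawned worker process, and a function defined inside another function (such as inside __init__) is a "local object" that pickle refuses to serialize.

```python
import pickle

def build_loader_args():
    # Mimics a seed_worker defined inside SeededDataLoader.__init__:
    # a nested function is a local object that pickle cannot serialize.
    def seed_worker(worker_id):
        pass
    return seed_worker

def seed_worker_module_level(worker_id):
    # Defined at module scope instead, so it pickles fine and could be
    # handed to spawn-based DataLoader workers.
    pass

try:
    pickle.dumps(build_loader_args())
    pickle_failed = False
except AttributeError:
    # AttributeError: Can't pickle local object
    # 'build_loader_args.<locals>.seed_worker'
    pickle_failed = True

print(pickle_failed)  # True: the nested function cannot be pickled
print(bool(pickle.dumps(seed_worker_module_level)))  # True: module level works
```

So besides the two quick fixes above, moving the worker-init function to module level would also make it picklable on spawn-based platforms like Windows.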
It works! I appreciate your patience! :)