LambdaLabsML / examples

Deep Learning Examples
MIT License

EOFError: Ran out of input #35

Closed 4thfever closed 1 year ago

4thfever commented 1 year ago

Hi,

Thanks for your great repo. Could you please help me figure out why I get this error?

I am on the Windows platform, and I fixed the other issues according to the guide at https://github.com/hlky/sd-enable-textual-inversion

```python
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\rusong.li\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Users\rusong.li\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

Traceback (most recent call last):
  File "C:\Users\rusong.li\Desktop\finetune\main.py", line 906, in <module>
    trainer.fit(model, data)
  File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 553, in fit
    self._run(model)
  File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 918, in _run
    self._dispatch()
  File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 986, in _dispatch
    self.accelerator.start_training(self)
  File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\accelerators\accelerator.py", line 92, in start_training
    self.training_type_plugin.start_training(trainer)
  File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\plugins\training_type\training_type_plugin.py", line 161, in start_training
    self._results = trainer.run_stage()
  File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 996, in run_stage
    return self._run_train()
  File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1045, in _run_train
    self.fit_loop.run()
  File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\loops\base.py", line 111, in run
    self.advance(*args, **kwargs)
  File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\loops\fit_loop.py", line 200, in advance
    epoch_output = self.epoch_loop.run(train_dataloader)
  File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\loops\base.py", line 111, in run
    self.advance(*args, **kwargs)
  File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\loops\epoch\training_epoch_loop.py", line 118, in advance
    _, (batch, is_last) = next(dataloader_iter)
  File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\profiler\base.py", line 104, in profile_iterable
    value = next(iterator)
  File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\trainer\supporters.py", line 625, in prefetch_iterator
    last = next(it)
  File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\trainer\supporters.py", line 546, in __next__
    return self.request_next_batch(self.loader_iters)
  File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\trainer\supporters.py", line 532, in loader_iters
    self._loader_iters = self.create_loader_iters(self.loaders)
  File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\trainer\supporters.py", line 590, in create_loader_iters
    return apply_to_collection(loaders, Iterable, iter, wrong_dtype=(Sequence, Mapping))
  File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\utilities\apply_func.py", line 96, in apply_to_collection
    return function(data, *args, **kwargs)
  File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\torch\utils\data\dataloader.py", line 435, in __iter__
    return self._get_iterator()
  File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\torch\utils\data\dataloader.py", line 381, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\torch\utils\data\dataloader.py", line 1034, in __init__
    w.start()
  File "C:\Users\rusong.li\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Users\rusong.li\AppData\Local\Programs\Python\Python310\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\rusong.li\AppData\Local\Programs\Python\Python310\lib\multiprocessing\context.py", line 336, in _Popen
    return Popen(process_obj)
  File "C:\Users\rusong.li\AppData\Local\Programs\Python\Python310\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\rusong.li\AppData\Local\Programs\Python\Python310\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'hf_dataset.<locals>.pre_process'. Did you mean: '_loader_iters'?
```
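For context, the root cause shown in the last frame is the `AttributeError`: on Windows, PyTorch spawns DataLoader worker processes and must pickle the dataset, so a function defined inside another function (the `pre_process` closure inside `hf_dataset` named in the error) cannot be sent to the workers; the `EOFError` in the child is just the downstream symptom. A minimal sketch of the two usual workarounds, using hypothetical names rather than this repo's actual code:

```python
import torch
from torch.utils.data import DataLoader, Dataset


class ToyDataset(Dataset):
    """Hypothetical stand-in for the dataset built by hf_dataset in main.py."""

    def __init__(self, items, transform=None):
        self.items = items
        self.transform = transform

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        x = self.items[idx]
        return self.transform(x) if self.transform else x


# Defined at module level (not nested inside another function), so the
# spawn-based multiprocessing used on Windows can pickle it for workers.
def pre_process(x):
    return x * 2


if __name__ == "__main__":  # required on Windows when num_workers > 0
    ds = ToyDataset(list(range(8)), transform=pre_process)

    # Option 1: keep worker processes, but only reference picklable
    # (module-level) callables from the dataset.
    loader = DataLoader(ds, batch_size=4, num_workers=2)

    # Option 2: sidestep pickling entirely by loading in the main process.
    # loader = DataLoader(ds, batch_size=4, num_workers=0)

    for batch in loader:
        print(batch)
```

In general, either moving the preprocessing function out to module scope or setting the DataLoader's `num_workers` to 0 makes this pickling error (and the follow-on `EOFError` in the spawned child) go away on Windows.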