harryc7 opened 1 year ago
Have you solved your problem?
I ran into the same problem while training.
Please share the solution if you have dealt with it. Thanks.
Hey guys, I solved this problem by changing the values of `workers_per_gpu` and `batchsize_per_gpu` in the config YAML file.
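For anyone looking for where those keys live: in a NanoDet-style config they sit under the `device` section. The exact values below are illustrative, not a recommendation; the point is to make `batchsize_per_gpu` small enough that the train split yields at least one full batch:

```yaml
device:
  gpu_ids: [0]
  workers_per_gpu: 2      # number of DataLoader workers per GPU
  batchsize_per_gpu: 8    # must not exceed the number of training samples,
                          # otherwise the train DataLoader can end up empty
```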
```
D:\ANACONDA\envs\yy\lib\site-packages\pytorch_lightning\utilities\data.py:110: UserWarning: `DataLoader` returned 0 length. Please make sure this was your intention.
  rank_zero_warn(
D:\ANACONDA\envs\yy\lib\site-packages\pytorch_lightning\utilities\data.py:141: UserWarning: Total length of `CombinedLoader` across ranks is zero. Please make sure this was your intention.
  rank_zero_warn(
`Trainer.fit` stopped: No training batches.
```

How to deal with it?
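For context, "No training batches" means the train `DataLoader` has length 0. A common cause is a batch size larger than the dataset combined with dropping the last incomplete batch. A minimal PyTorch sketch (the dataset size and batch sizes here are made up for illustration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# A tiny dataset with only 4 samples.
dataset = TensorDataset(torch.zeros(4, 3))

# batch_size larger than the dataset + drop_last=True -> every batch is dropped,
# so the loader has length 0 and Lightning stops with "No training batches".
empty_loader = DataLoader(dataset, batch_size=8, drop_last=True)
print(len(empty_loader))  # 0

# Lowering the batch size restores full batches.
ok_loader = DataLoader(dataset, batch_size=2, drop_last=True)
print(len(ok_loader))  # 2
```

Lowering `batchsize_per_gpu` (the config key mentioned above) has the same effect as lowering `batch_size` here; also double-check that the dataset path in the config actually points at your training images, since a mis-set path likewise produces a zero-length dataset.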