When I try to increase batch_size to any number (4, 32, ...), I get a strange error:
python main.py --num_iters 100000 --batch_size 4
Namespace(batch_size=4, data_dir='./spmel', dim_emb=256, dim_neck=16, dim_pre=512, freq=16, lambda_cd=1, len_crop=128, log_step=10, num_iters=100000)
Finished loading the dataset...
cuda:0
Start training...
Traceback (most recent call last):
  File "solver_encoder.py", line 75, in train
    x_real, emb_org = next(data_iter)
UnboundLocalError: local variable 'data_iter' referenced before assignment

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 46, in <module>
    main(config)
  File "main.py", line 20, in main
    solver.train()
  File "autovc/solver_encoder.py", line 78, in train
    x_real, emb_org = next(data_iter)
  File "python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "python3.8/site-packages/torch/utils/data/dataloader.py", line 560, in _next_data
    index = self._next_index() # may raise StopIteration
  File "python3.8/site-packages/torch/utils/data/dataloader.py", line 512, in _next_index
    return next(self._sampler_iter) # may raise StopIteration
StopIteration
Is this a bug?
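Digging into the chained traceback, the UnboundLocalError suggests the fetch step in solver_encoder.py wraps next(data_iter) in a try/except that builds the iterator on first use, so the real failure is the StopIteration underneath: the DataLoader yields zero batches. That is exactly what happens when a loader is built with drop_last=True and batch_size is larger than the number of items in the dataset (here, presumably the utterances under ./spmel). A minimal sketch that reproduces the same StopIteration, assuming a drop_last=True loader (the dataset below is a synthetic stand-in, not the repo's actual dataset class):

import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in for the training set: only 2 samples.
dataset = TensorDataset(torch.randn(2, 128, 80))

# With drop_last=True and batch_size > len(dataset), the loader
# produces zero batches per epoch.
loader = DataLoader(dataset, batch_size=4, shuffle=True, drop_last=True)

data_iter = iter(loader)
x = next(data_iter)  # raises StopIteration immediately, as in the log above

If that is the cause, keeping batch_size below the dataset size (or adding more training data) avoids the crash; the swallowed UnboundLocalError just makes the error message confusing.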