Thank you for the wonderful work.
I tried changing the batch_size in test_tryon.py during inference, but it didn't have any effect on my results.
I use only one GPU (1080 Ti) and run on Ubuntu 20.04 LTS, so I made a few changes to the code:
1. deleted the distributed initialization:
torch.distributed.init_process_group( 'nccl', init_method='env://' )
2. changed
to
3. changed
to
model_gen = gen_model.to(device)
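For reference, my single-GPU setup ends up looking roughly like the sketch below (the small nn.Sequential is just a stand-in for the repo's actual generator; any nn.Module behaves the same way):

```python
import torch
import torch.nn as nn

# pick the single GPU if available, otherwise fall back to CPU
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# stand-in for the repo's generator model
gen_model = nn.Sequential(nn.Linear(3, 3), nn.ReLU())

model_gen = gen_model.to(device)  # replaces the DistributedDataParallel wrapper
model_gen.eval()                  # inference mode: dropout off, BatchNorm uses running stats
```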
Here is my .sh file:

The result remains the same regardless of whether I change the batch size to 1 or 64.
However, even when I change
gen_model.train()
to
gen_model.eval()
the result still remains the same no matter how I modify the batch size. I think the problem might be that the batch size isn't being applied properly. Can someone help me resolve this issue? Thanks~
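For what it's worth, here is the minimal check I would run to confirm the batch size actually reaches the DataLoader (a toy tensor dataset stands in for the real test set):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# toy stand-in for the real test dataset: 8 samples of shape (3, 4, 4)
data = TensorDataset(torch.randn(8, 3, 4, 4))

for bs in (1, 4):
    loader = DataLoader(data, batch_size=bs, shuffle=False)
    (first_batch,) = next(iter(loader))
    print(bs, first_batch.shape[0])  # the leading dimension should equal bs
```

If I understand correctly, with model_gen.eval() identical per-image outputs across batch sizes are actually expected, since batch size then only affects throughput; it is in train() mode that BatchNorm's batch statistics can make outputs batch-dependent.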