cyclomon / UNSB

Official Repository of "Unpaired Image-to-Image Translation via Neural Schrödinger Bridge" (ICLR 2024)
MIT License

Batch Size #5

Closed JigneshChowdary closed 1 year ago

JigneshChowdary commented 1 year ago

I have enjoyed reading your paper. While implementing your work on my dataset, I increased the default batch size from 1 to 8 in the base options, and it produced this error:

Traceback (most recent call last):
  File "train.py", line 44, in <module>
    model.data_dependent_initialize(data, data2)
  File "sb_model.py", line 106, in data_dependent_initialize
    self.forward()  # compute fake images: G(A)
  File "sb_model.py", line 184, in forward
    for t in range(self.time_idx.int().item()+1):
ValueError: only one element tensors can be converted to Python scalars

DuaNoDo commented 1 year ago

Hello, I also tried running this code with batch_size 8, but it didn't work. The authors state in their paper that "we trained the model for 400 epochs, with batch size of 1."

I added some for loops, and multi-batch training runs, but learning did not proceed normally. Maybe I am wrong.

Or perhaps this code simply does not support multi-batch training.
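The loop-based workaround described above might look roughly like the following sketch: iterate over the batch so each `.item()` call sees a single-element tensor. Here `run_step` is a hypothetical stand-in for the body of `forward` in `sb_model.py`; this is not the authors' code, and as noted it does not guarantee the training itself behaves correctly.

```python
import torch

def forward_per_sample(time_idx, run_step):
    # Hypothetical workaround: process the batched time indices one sample
    # at a time, so time_idx[b] is a single-element tensor and .item() works.
    outputs = []
    for b in range(time_idx.shape[0]):
        steps = time_idx[b].int().item() + 1
        outputs.append([run_step(b, t) for t in range(steps)])
    return outputs

# Toy usage: record which (sample, timestep) pairs are visited.
time_idx = torch.tensor([1, 3])
visited = forward_per_sample(time_idx, lambda b, t: (b, t))
```

Note that this serializes the per-sample work, so it loses most of the throughput benefit of a larger batch, and any batch-dependent statistics in the model would still be computed per sample.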

cyclomon commented 1 year ago

Hi, in this version, training is supported only with a single GPU and a batch size of 1.

Since the training process is tuned for the batch size = 1 setting, larger batch sizes can cause problems.

We will upload a more generalized version later.

Sorry for the inconvenience.