quxu91 opened this issue 2 years ago
In my case, I changed IMS_PER_BATCH to 1 in the config file I am using and passed --num_gpus 1 on the command line.

@briannlongzhao, @xingyizhou, @chenxwh: for single-GPU training, does IMS_PER_BATCH need to be set to 1 so that the asserts in:
`single_batch_collator`:

```python
def single_batch_collator(batch):
    """
    A batch collator that does nothing.
    """
    assert len(batch) == 1
    return batch[0]
```
and `build_gtr_train_loader`:

```python
def build_gtr_train_loader(cfg, mapper):
    ...
    world_size = get_world_size()
    batch_size = cfg.SOLVER.IMS_PER_BATCH // world_size
    assert batch_size == 1
    ...
```

in `GTR/gtr/data/gtr_dataset_dataloader.py`
do not raise errors? My current IMS_PER_BATCH is 16, and it always trips those asserts. Would it be OK to just comment the asserts out?
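For reference, here is a minimal sketch of the arithmetic behind the second assert, based on the snippet above (`per_gpu_batch_size` is a hypothetical helper for illustration, not part of GTR):

```python
# Sketch of the batch-size computation from build_gtr_train_loader
# (GTR/gtr/data/gtr_dataset_dataloader.py), simplified for illustration.

def per_gpu_batch_size(ims_per_batch: int, world_size: int) -> int:
    # Mirrors: batch_size = cfg.SOLVER.IMS_PER_BATCH // world_size
    return ims_per_batch // world_size

# With 1 GPU and IMS_PER_BATCH = 16, the per-GPU batch size is 16,
# which fails the `assert batch_size == 1` check:
assert per_gpu_batch_size(16, 1) == 16

# Setting IMS_PER_BATCH = 1 on a single GPU satisfies the assert:
assert per_gpu_batch_size(1, 1) == 1

# More generally, the assert requires IMS_PER_BATCH == world_size,
# e.g. 8 GPUs with IMS_PER_BATCH = 8:
assert per_gpu_batch_size(8, 8) == 1
```

So, if I read the code correctly, the assert expects IMS_PER_BATCH to equal the number of GPUs, which is why IMS_PER_BATCH = 16 with one GPU fails.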
Hi, thanks for your wonderful work! I have only one GPU; can I train the network? If so, how do I train on one GPU?