If I run ERM on PACS via the following command:

python -m domainbed.scripts.train --data_dir=datasets/ --algorithm ERM --dataset PACS --hparams='{"resnet18": "True"}'

and print all_x and all_y in the ERM algorithm like:
def update(self, minibatches):
    # minibatches is a list of (x, y) pairs, one per training environment
    all_x = torch.cat([x for x, y in minibatches])
    all_y = torch.cat([y for x, y in minibatches])
    print(all_x.shape)
    print(all_y.shape)
I get this output for the parameters and the sizes:
Since all_x and all_y represent batches of shape batch_size x C x H x W, shouldn't batch_size be 32 (the stated hyperparameter) rather than the 96 suggested by the printed shape? Maybe I missed something, since 96 is exactly 32 multiplied by a factor of 3.
args.batch_size is the batch size per training environment (in your example, a batch of 32 examples for each of the 3 training environments amounts to a total of 96 examples in all_x).
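For illustration, here is a minimal, self-contained sketch of that concatenation. The image shape 3 x 224 x 224 and the 7 PACS classes are assumptions about the default preprocessing, not values taken from your run:

import torch

# 3 training environments (one of the 4 PACS domains is held out as the
# test environment), each contributing a minibatch of batch_size = 32.
batch_size, C, H, W = 32, 3, 224, 224
minibatches = [
    (torch.randn(batch_size, C, H, W),      # images
     torch.randint(0, 7, (batch_size,)))    # labels (7 PACS classes assumed)
    for _ in range(3)
]

all_x = torch.cat([x for x, y in minibatches])
all_y = torch.cat([y for x, y in minibatches])
print(all_x.shape)  # torch.Size([96, 3, 224, 224]) -- 3 envs * 32 = 96
print(all_y.shape)  # torch.Size([96])

So each call to update() sees num_train_envs * args.batch_size examples in total, not args.batch_size.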