Closed: slyang2021 closed this issue 2 years ago
Hi, we used the following code to distribute the computation across multiple GPUs: https://github.com/VICO-UoE/DatasetCondensation/blob/a84efcc0636b6578e398bb5614edc506e0f513a0/main_DM.py#L144. We did not encounter this problem in our experiments.
```python
if 'BN' not in args.model:  # for ConvNet
    loss = torch.tensor(0.0).to(args.device)
    for c in range(num_classes):
        img_real = get_images(c, args.batch_real)
        img_syn = image_syn[c*args.ipc:(c+1)*args.ipc].reshape((args.ipc, channel, im_size[0], im_size[1]))
```
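To illustrate what the per-class slice is doing, here is a minimal self-contained NumPy sketch (not the repo's code): the synthetic tensor is laid out as `ipc` images per class in class order, each class's slab is sliced out with `c*ipc:(c+1)*ipc`, and a distribution-matching-style loss compares class-wise mean embeddings. The `embed` function and all sizes are illustrative stand-ins, not names from the repository.

```python
import numpy as np

# Illustrative sizes (assumptions, not from the repo)
ipc, num_classes = 2, 3          # images per class, number of classes
channel, h, w = 1, 4, 4
rng = np.random.default_rng(0)

# Synthetic set laid out as [class 0 images | class 1 images | ...]
image_syn = rng.normal(size=(num_classes * ipc, channel, h, w))

def embed(x):
    # Stand-in for a network embedding: flatten each image to a vector
    return x.reshape(x.shape[0], -1)

loss = 0.0
for c in range(num_classes):
    # A batch of real images for class c (random here, for illustration)
    img_real = rng.normal(size=(5, channel, h, w))
    # Slice out class c's synthetic images: indices c*ipc .. (c+1)*ipc - 1
    img_syn = image_syn[c*ipc:(c+1)*ipc].reshape((ipc, channel, h, w))
    # Match the mean embeddings of real and synthetic images for this class
    mean_real = embed(img_real).mean(axis=0)
    mean_syn = embed(img_syn).mean(axis=0)
    loss += np.sum((mean_real - mean_syn) ** 2)
```

The key point is that the slice bounds multiply the class index by `ipc`; if the asterisks are dropped (as markdown rendering did above), the expression is no longer valid Python.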