Hi, thanks for your work! When I evaluate on the HOIA dataset with `bash eval_hoia.sh`, I get this error:

```
RuntimeError: Given groups=1, weight of size [64, 3, 7, 7], expected input[1, 2, 800, 1066] to have 3 channels, but got 2 channels instead
```
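If I read the error right, a weight of size [64, 3, 7, 7] looks like the stem convolution of a ResNet-50 backbone, which only accepts 3-channel (RGB) input, while the tensor it received is [1, 2, 800, 1066], i.e. batch size 1 with only 2 channels. A minimal sketch (assuming a plain torchvision ResNet-50, which may differ from the actual AS-Net backbone) reproduces the same message:

```python
import torch
from torchvision.models import resnet50

# Hypothetical stand-in backbone: torchvision's ResNet-50. Its stem conv
# (conv1) has weight shape [64, 3, 7, 7], the same shape named in the error,
# so it requires a 3-channel input.
model = resnet50()
print(model.conv1.weight.shape)          # torch.Size([64, 3, 7, 7])

ok = torch.randn(1, 3, 800, 1066)        # 3-channel input: fine
bad = torch.randn(1, 2, 800, 1066)       # 2-channel input: same failure as above
model.conv1(ok)
try:
    model.conv1(bad)
except RuntimeError as e:
    print(e)                             # "... to have 3 channels, but got 2 channels instead"
```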
But I print the image size in `evaluate`, right before the forward pass:
```python
def evaluate(self, eval_loader, mode, rel_topk=100):
    self.model.eval()
    results = []
    count = 0
    for data in tqdm(eval_loader):
        imgs, targets, filenames = data
        imgs = [img.to(self.device) for img in imgs]
        # targets are list type
        targets = [{k: v.to(self.device) for k, v in t.items()} for t in targets]
        bs = len(imgs)
        target_sizes = targets[0]['size'].expand(bs, 2)
        target_sizes = target_sizes.to(self.device)
        print("====================", imgs[0].size())
        outputs_dict = self.model(imgs)
        file_name = filenames[0]
        pred_out = self.postprocessors(outputs_dict, file_name, target_sizes,
                                       rel_topk=rel_topk)
        results.append(pred_out)
        count += 1
```
The print shows (3, 800, 1066), so the image still has 3 channels when it is passed to `self.model(imgs)`. But inside the model's forward, the channel size apparently becomes 2? Please help me~

I used the ASNet_hoia_res50.pth checkpoint on the HOIA dataset.
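To narrow down where the channel dimension drops from 3 to 2, one idea would be to attach a forward pre-hook to the conv layer named in the error and print the tensor it actually receives. A minimal sketch, assuming the layer can be located by its weight shape (the real module name in AS-Net may differ):

```python
import torch.nn as nn

def hook_failing_conv(model):
    """Print the actual input shape of every Conv2d with weight [64, 3, 7, 7]
    (the shape named in the error) just before it runs."""
    def pre_hook(module, inputs):
        print("conv input shape:", tuple(inputs[0].shape))

    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d) and tuple(module.weight.shape) == (64, 3, 7, 7):
            print("hooking:", name)
            module.register_forward_pre_hook(pre_hook)

# usage inside evaluate(), before the forward pass:
# hook_failing_conv(self.model)
# outputs_dict = self.model(imgs)   # prints the shape the conv really sees before crashing
```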