facebookresearch / phosa

Perceiving 3D Human-Object Spatial Arrangements from a Single Image in the Wild

PyTorch 1.8 RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0 #22

Closed monacv closed 3 years ago

monacv commented 3 years ago

Previously, I didn't have a problem running your code. Could this be caused by moving to PyTorch 1.8? Is there a solution for this?

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-3-2f4b93b9ad6c> in <module>
      1 segmenter = get_pointrend_predictor()
----> 2 instances = segmenter(image)["instances"]
      3 vis = PointRendVisualizer(image, metadata=MetadataCatalog.get("coco_2017_val"))
      4 Image.fromarray(vis.draw_instance_predictions(instances.to("cpu")).get_image())

~/venv/phosa/lib/python3.8/site-packages/detectron2/engine/defaults.py in __call__(self, original_image)
    249 
    250             inputs = {"image": image, "height": height, "width": width}
--> 251             predictions = self.model([inputs])[0]
    252             return predictions
    253 

~/venv/phosa/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

~/venv/phosa/lib/python3.8/site-packages/detectron2/modeling/meta_arch/rcnn.py in forward(self, batched_inputs)
    147         """
    148         if not self.training:
--> 149             return self.inference(batched_inputs)
    150 
    151         images = self.preprocess_image(batched_inputs)

~/venv/phosa/lib/python3.8/site-packages/detectron2/modeling/meta_arch/rcnn.py in inference(self, batched_inputs, detected_instances, do_postprocess)
    200         assert not self.training
    201 
--> 202         images = self.preprocess_image(batched_inputs)
    203         features = self.backbone(images.tensor)
    204 

~/venv/phosa/lib/python3.8/site-packages/detectron2/modeling/meta_arch/rcnn.py in preprocess_image(self, batched_inputs)
    226         """
    227         images = [x["image"].to(self.device) for x in batched_inputs]
--> 228         images = [(x - self.pixel_mean) / self.pixel_std for x in images]
    229         images = ImageList.from_tensors(images, self.backbone.size_divisibility)
    230         return images

~/venv/phosa/lib/python3.8/site-packages/detectron2/modeling/meta_arch/rcnn.py in <listcomp>(.0)
    226         """
    227         images = [x["image"].to(self.device) for x in batched_inputs]
--> 228         images = [(x - self.pixel_mean) / self.pixel_std for x in images]
    229         images = ImageList.from_tensors(images, self.backbone.size_divisibility)
    230         return images

RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0


monacv commented 3 years ago

The problem happened because I used a PNG image by mistake. The PNG has an alpha channel, so the loaded image has 4 channels, while detectron2's `pixel_mean` / `pixel_std` expect 3 (RGB) channels, which causes the size mismatch in `preprocess_image`.
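A minimal sketch of the workaround, assuming the image is loaded with PIL as a NumPy array before being passed to the segmenter (the file name `input.png` is just a placeholder):

```python
import numpy as np
from PIL import Image

# A PNG saved with transparency loads as RGBA, i.e. shape (H, W, 4).
# Converting to RGB drops the alpha channel so the array has the
# 3 channels that detectron2's pixel_mean / pixel_std broadcast against.
image = np.array(Image.open("input.png").convert("RGB"))

# image now has shape (H, W, 3) and can be passed to the predictor as usual:
# instances = segmenter(image)["instances"]
```

If your pipeline loads images with OpenCV instead, keep its BGR channel order and simply slice off the alpha channel (e.g. `image[..., :3]`) before running the predictor.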