JDAI-CV / fast-reid

SOTA Re-identification Methods and Toolbox
Apache License 2.0

Model Inference Returning NaN Features #710

Closed · ConnorMcGuirk closed this issue 11 months ago

ConnorMcGuirk commented 11 months ago

❓ How to do something using fastreid

  1. what inputs you will provide, if any: I am passing patches of detected people from a YOLO model. I have verified that the patches have no issues and are of the correct types. I am using inference code based on BoT-SORT (though I hit this issue with other code as well):

```python
def inference(self, image, detections):

    if detections is None or np.size(detections) == 0:
        return []

    H, W, _ = np.shape(image)

    batch_patches = []
    patches = []
    for d in range(np.size(detections, 0)):
        # Clip each box to the image bounds before cropping.
        tlbr = detections[d, :4].astype(np.int_)
        tlbr[0] = max(0, tlbr[0])
        tlbr[1] = max(0, tlbr[1])
        tlbr[2] = min(W - 1, tlbr[2])
        tlbr[3] = min(H - 1, tlbr[3])
        patch = image[tlbr[1]:tlbr[3], tlbr[0]:tlbr[2], :]

        # The model expects RGB inputs (OpenCV images are BGR).
        patch = patch[:, :, ::-1]

        # Apply pre-processing to the patch.
        patch = cv2.resize(patch, tuple(self.cfg.INPUT.SIZE_TEST[::-1]), interpolation=cv2.INTER_LINEAR)

        # HWC -> CHW, ready for a new batch dimension at stacking time.
        patch = torch.as_tensor(patch.astype("float32").transpose(2, 0, 1))
        patch = patch.to(device=self.device).half()

        patches.append(patch)

        if (d + 1) % self.batch_size == 0:
            patches = torch.stack(patches, dim=0)
            batch_patches.append(patches)
            patches = []

    if len(patches):
        patches = torch.stack(patches, dim=0)
        batch_patches.append(patches)

    features = np.zeros((0, 2048))
    # features = np.zeros((0, 768))

    for patches in batch_patches:

        # Run model
        patches_ = torch.clone(patches)
        pred = self.model(patches)
```

From the above code snippet it always returns:

```
tensor([[nan, nan, nan,  ..., nan, nan, nan],
        [nan, nan, nan,  ..., nan, nan, nan],
        [nan, nan, nan,  ..., nan, nan, nan],
        [nan, nan, nan,  ..., nan, nan, nan]],
       device='cuda:0', dtype=torch.float16, grad_fn=<AsStridedBackward0>)
```

I am currently using duke_sbs_s50.pth and the corresponding YAML configs.

  2. what outputs you are expecting:

Any idea what I am doing wrong or where I can look?
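
One quick way to narrow this down is to compare a float32 forward pass against a float16 one. A minimal sketch (`model` and `patches` are stand-ins for `self.model` and one stacked batch from the code above, not fastreid API):

```python
import torch

@torch.no_grad()
def compare_precisions(model: torch.nn.Module, patches: torch.Tensor) -> None:
    # Reference forward pass in full precision.
    out32 = model.float()(patches.float())
    print("fp32 output has NaNs:", torch.isnan(out32).any().item())

    # Naive blanket half cast: weights/activations trained in fp32 can
    # exceed fp16's range (max ~65504) and overflow to inf/NaN.
    out16 = model.half()(patches.half())
    print("fp16 output has NaNs:", torch.isnan(out16).any().item())
```

If the fp32 pass is clean and the fp16 pass is all NaN, the half-precision cast is the culprit rather than the input patches.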

ConnorMcGuirk commented 11 months ago

It was an issue with using .half(): the model I was trying to use was trained for float32, so casting to half precision produced NaN features.
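
A minimal sketch of the corresponding fix, assuming the checkpoint is float32 (the `extract_features_*` helpers are hypothetical, not fastreid API): either keep the whole inference path in float32, or use torch.autocast for mixed precision instead of a blanket .half() cast:

```python
import torch

@torch.no_grad()
def extract_features_fp32(model: torch.nn.Module, patches: torch.Tensor) -> torch.Tensor:
    # Match the fp32 checkpoint: no .half() on the inputs or the model.
    return model(patches.float())

@torch.no_grad()
def extract_features_amp(model: torch.nn.Module, patches: torch.Tensor) -> torch.Tensor:
    # autocast runs matmuls/convs in fp16 while keeping numerically
    # sensitive ops (norms, reductions) in fp32, avoiding the overflow
    # that a wholesale .half() cast of fp32-trained weights can cause.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        return model(patches.float())
```

Either variant avoids forcing fp32-trained weights wholesale into fp16, which is what produced the NaNs here.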