DerrickXuNu / OpenCOOD

[ICRA 2022] An opensource framework for cooperative detection. Official implementation for OPV2V.
https://mobility-lab.seas.ucla.edu/opv2v/

Question for feature-level coordinate transformation #65

Closed zllxot closed 1 year ago

zllxot commented 1 year ago

Hi, when I ran the V2VNetFusion module in the point_pillar_v2vnet model (the relevant code is below):

    for b in range(B):
        # number of valid agents
        N = record_len[b]
        # (N, N, 4, 4); t_matrix[i, j] -> transform from agent i to agent j
        t_matrix = pairwise_t_matrix[b][:N, :N, :, :]
        updated_node_features = []
        # update each node i
        for i in range(N):
            # (N, 1, H, W)
            mask = roi_mask[b, :N, i, ...]

            current_t_matrix = t_matrix[:, i, :, :]
            current_t_matrix = get_transformation_matrix(
                current_t_matrix, (H, W))

            # (N, C, H, W)
            neighbor_feature = warp_affine(batch_node_features[b],
                                           current_t_matrix,
                                           (H, W))
            ...
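For reference, the `t_matrix[i, j]` convention above (mapping agent i's frame into agent j's frame) can be sketched with plain NumPy. This is an illustrative sketch, assuming each agent's pose is given as a 4x4 agent-to-world matrix; the function name `pairwise_transforms` and that input convention are assumptions for this example, not OpenCOOD's exact API:

```python
import numpy as np

def pairwise_transforms(poses):
    """Build t[i, j], the 4x4 transform from agent i's frame to agent j's.

    poses[i] is assumed to map agent i's frame into the world frame,
    so i -> j is: i -> world -> j, i.e. inv(poses[j]) @ poses[i].
    """
    N = len(poses)
    t = np.zeros((N, N, 4, 4))
    for i in range(N):
        for j in range(N):
            t[i, j] = np.linalg.inv(poses[j]) @ poses[i]
    return t
```

Note that `t[i, i]` is always the identity, which is why projecting onto CAV i should leave agent i's own features unwarped.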

Through feature visualization I found that, no matter which CAV i the other CAVs are projected onto (current_t_matrix = t_matrix[:, i, :, :]), the transformed features (neighbor_feature) coming out of the warp_affine function always end up in the ego's coordinate system. Could you please let me know what went wrong? Thank you in advance.
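To make the behavior being discussed concrete, here is a minimal stand-in for an affine feature warp, written in NumPy with nearest-neighbour sampling. It is only a sketch: the name `warp_affine_nearest`, the 2x3 output-to-input matrix convention, and the zero-padding of out-of-range samples are assumptions for this example and do not reproduce OpenCOOD's actual `warp_affine` implementation:

```python
import numpy as np

def warp_affine_nearest(feature, matrix, out_size):
    """Warp a (C, H, W) feature map with a 2x3 affine matrix.

    `matrix` maps output pixel coordinates (x, y, 1) to input
    coordinates; samples that fall outside the input are zero-filled.
    Nearest-neighbour sampling keeps the sketch short; a real warp
    would typically interpolate bilinearly.
    """
    C, H, W = feature.shape
    out_h, out_w = out_size
    warped = np.zeros((C, out_h, out_w), dtype=feature.dtype)
    for y in range(out_h):
        for x in range(out_w):
            src = matrix @ np.array([x, y, 1.0])
            sx, sy = int(round(src[0])), int(round(src[1]))
            if 0 <= sx < W and 0 <= sy < H:
                warped[:, y, x] = feature[:, sy, sx]
    return warped
```

With the identity matrix the output equals the input, so if the warped features always look like they sit in one fixed (ego) frame, the matrices being passed in are the first thing to check, which is what the linked discussion addresses.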

DerrickXuNu commented 1 year ago

Please refer to this issue: https://github.com/DerrickXuNu/OpenCOOD/issues/49

zllxot commented 1 year ago

Got it, thanks.