abduallahmohamed / Social-STGCNN

Code for "Social-STGCNN: A Social Spatio-Temporal Graph Convolutional Neural Network for Human Trajectory Prediction" CVPR 2020
MIT License

Visualize trajectory in picture #63

Open ot4f opened 2 years ago

ot4f commented 2 years ago

Hi, great work! I have some problems when I try to visualize the trajectory in a picture like Figure 4 in your paper. I got the original video (such as seq_eth.avi) and extracted frames at 0.4 s intervals. Then I want to map the trajectory (x, y) back to pixel coordinates in the corresponding frame. For example, at frame 840 of the eth dataset, the (x, y, z) of pedestrian 2 is (9.57, 6.24, 0); I then used the inverse of the homography matrix stored in H.txt to get the frame coordinates (285, 188). But in the 840th frame there is no person at around (285, 188) (I assume the top left is the (0, 0) of the frame). Hope to get your help, thank you!
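A minimal sketch of that world-to-pixel mapping, assuming (as in the ETH/ewap release) that H.txt maps pixel coordinates to world coordinates, so its inverse goes the other way; the file path and the axis order of the result are assumptions that vary between dataset splits:

import numpy as np

# Load the 3x3 homography shipped with the dataset (path is an assumption).
H = np.loadtxt('H.txt')       # pixel -> world in the ETH (ewap) release
H_inv = np.linalg.inv(H)      # so the inverse maps world -> pixel

def world_to_pixel(x, y):
    # Homogeneous coordinates: append 1, project, then divide by the scale term.
    pt = H_inv @ np.array([x, y, 1.0])
    return pt[:2] / pt[2]     # caution: may come back as (row, col), not (u, v)

print(world_to_pixel(9.57, 6.24))

A mismatch like the one described often comes down to that (row, col) vs. (x, y) ordering, or to the extracted frames not lining up with the annotated frame ids (see the reply below about omitted frames).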

abduallahmohamed commented 1 year ago

Hi, please see our work "Social-Implicit" for better visualization. Also note that some frames without pedestrians were omitted.

Jayoprell commented 1 year ago

Hi, great work! I have some questions:

  1. In the data format [frame_id, pedestrian_id, x, y], how are x and y obtained? I ask because I want to build training data from a new video, but I don't know how to get these coordinates.
  2. For the same data [frame_id, pedestrian_id, x, y], how can I map x, y back to frame coordinates? Is this formula right: pixel_coordinate = inverse(H) * (x, y)? Hope to get your help. Thanks!
abduallahmohamed commented 1 year ago

Thanks! 1. They are manually annotated; see the source papers of the ETH-UCY datasets for details. 2. As you said, using the homography matrix. You can find it within the dataset sources, as in (1).
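For (2), a short hedged sketch of the mapping in both directions. The convention that H maps pixel to world follows the ETH release and may be flipped for other splits; either way, the homography must be applied to a homogeneous (x, y, 1) column vector, not a bare (x, y) pair:

import numpy as np

H = np.loadtxt('H.txt')  # assumed pixel -> world, as in the ETH (ewap) release

def apply_homography(M, p):
    # Project a 2D point through a 3x3 homography in homogeneous coordinates.
    q = M @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

world = apply_homography(H, (285, 188))            # pixel -> world
pixel = apply_homography(np.linalg.inv(H), world)  # world -> back to pixel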

Jayoprell commented 1 year ago

Thanks for the help.

Pradur241 commented 11 months ago

pt_world = np.dot(H_zara02_inv, pt_pixel_resized)  # homogeneous pixel point -> world plane

I recently succeeded with this code. Hope it helps.
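A self-contained version of that snippet, with the homography direction, the file name, and the resize factor all treated as assumptions about that particular setup; the essential points are that the homography expects homogeneous pixel coordinates at the original frame resolution and that the result must be renormalized:

import numpy as np

# Here H_zara02 is taken to map world -> pixel, so its inverse goes pixel -> world;
# the direction differs between dataset releases, so check yours.
H_zara02 = np.loadtxt('H_zara02.txt')          # hypothetical file name
H_zara02_inv = np.linalg.inv(H_zara02)

scale = 2.0                                    # hypothetical display resize factor
u, v = 570, 376                                # hypothetical pixel in the resized frame
pt_pixel_resized = np.array([u / scale, v / scale, 1.0])  # back to original resolution

pt_world = np.dot(H_zara02_inv, pt_pixel_resized)
pt_world /= pt_world[2]                        # normalize the homogeneous coordinate
print(pt_world[:2])                            # world-plane (x, y)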

RedOne88 commented 11 months ago

Hi, I find your work very interesting. I have a question that has been asked before. Do you have code for testing the model on a video, for instance seq_eth.avi, as shown in your Figure 4? I have managed to run your code, and I am eager to see the predicted trajectories on real images. I searched through your work "Social-Implicit", but I couldn't find what I am looking for.
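There doesn't seem to be such a script in this repo; below is a minimal hedged sketch of overlaying predicted world-coordinate points on a video frame with OpenCV, assuming (as above) that H.txt maps pixel to world, with the trajectory values purely illustrative:

import cv2
import numpy as np

H_inv = np.linalg.inv(np.loadtxt('H.txt'))  # world -> pixel under the ETH convention

def to_pixel(x, y):
    # Project a world point into the image and round to integer pixel coordinates.
    p = H_inv @ np.array([x, y, 1.0])
    p = p[:2] / p[2]
    return int(p[0]), int(p[1])             # may need swapping if H uses (row, col)

cap = cv2.VideoCapture('seq_eth.avi')
cap.set(cv2.CAP_PROP_POS_FRAMES, 840)       # jump to the frame of interest
ok, frame = cap.read()

for x, y in [(9.57, 6.24), (9.60, 6.80)]:   # hypothetical predicted world points
    cv2.circle(frame, to_pixel(x, y), 4, (0, 0, 255), -1)

cv2.imwrite('frame_840_overlay.png', frame)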