agrimgupta92 / sgan

Code for "Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks", Gupta et al, CVPR 2018
MIT License
813 stars 261 forks

stanford dataset #36

Open saruvora opened 5 years ago

saruvora commented 5 years ago

Hi, I want to know if you also have the Stanford dataset in the format required for training and testing the model. I have access to the training dataset, but the test dataset has '?' placeholders, so I cannot compute ADE and FDE with it, since those metrics require ground truth. It would really help if you could help me out with the Stanford test set.

amiryanj commented 5 years ago

@saranshvora I think that's the purpose of TrajNet: you will never know the real numbers behind the question marks unless you submit your predictions and see the error. I could not find the annotation files of SDD anywhere, so I downloaded the whole dataset, and I can share the annotation files with you if you want. It only takes a couple of lines of Python to parse the files.
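A minimal sketch of such a parser, assuming the commonly described SDD annotation layout (one box per line: `track_id xmin ymin xmax ymax frame lost occluded generated "label"`) and converting it to this repo's `<frame> <ped> <x> <y>` format. The column order and the `keep_label` filter are assumptions to verify against the README of whichever annotation files you download:

```python
def parse_sdd_annotations(path, keep_label="Pedestrian"):
    """Parse an SDD annotations.txt file into (frame, ped_id, x, y) rows.

    Assumed columns per line:
    track_id xmin ymin xmax ymax frame lost occluded generated "label"
    """
    rows = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            track_id, xmin, ymin, xmax, ymax, frame = map(int, parts[:6])
            lost = int(parts[6])
            label = parts[9].strip('"')
            # Skip boxes marked as out of view, and non-pedestrian agents
            if lost or label != keep_label:
                continue
            # Use the bounding-box center as the agent position (pixels)
            x = (xmin + xmax) / 2.0
            y = (ymin + ymax) / 2.0
            rows.append((frame, track_id, x, y))
    rows.sort()  # order by frame, then pedestrian id
    return rows
```

The resulting tuples can then be written out tab-separated, one per line, to match the ETH/UCY-style files this repo trains on.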

saruvora commented 5 years ago

@amiryanj Hi, thank you for the response. Are the annotations for SDD in <frame> <ped> <x> <y> format? If so, then yes please, I would love to have them.

amiryanj commented 5 years ago

Ok, I've put the files here: https://github.com/amiryanj/StanfordDroneDataset There is a README file that tells you how to parse the annotations.

saruvora commented 5 years ago

@amiryanj Ah, great, thank you, I will work on it :)

ChenYu-Liu commented 5 years ago

Sorry, excuse me, I want to know whether the Stanford training and testing datasets need to be normalized. The paper converts all the data to real-world coordinates (ETH and UCY), but I don't know how to do the conversion. It would really help if anyone could help me out with this question. Thanks!

amiryanj commented 5 years ago

AFAIK the dataset does not contain homography matrices for the videos. However, since the videos are recorded by a drone from high altitude, with the image plane almost parallel to the ground, it is enough to multiply the image coordinates by a scaling factor to approximate the real-world coordinates.
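That approximation can be sketched in a few lines. The scale value used in the example below is a made-up placeholder, not a measured value for any SDD scene; a real value would have to be estimated per scene, e.g. from an object of known length visible in the video:

```python
def pixels_to_meters(x_px, y_px, meters_per_pixel):
    """Approximate metric coordinates from pixel coordinates by a
    uniform scale, assuming a near-orthographic top-down drone view."""
    return x_px * meters_per_pixel, y_px * meters_per_pixel

# Example with a hypothetical scale of 0.05 m/pixel
x_m, y_m = pixels_to_meters(250.0, 400.0, 0.05)
```

This ignores lens distortion and any tilt of the camera, which is why a single scalar is only a rough stand-in for a proper homography.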

LittleFlyFish commented 4 years ago

afaik the dataset does not contain homography matrices for the videos, however since the videos are recorded by a drone from high altitude, with the image plane almost parallel to the ground, it would be enough to just multiply the image coordinates by some scaling factor to approximate the real-world coordinates.

So what's the scaling factor of the dataset? That's very important for me. Thank you very much!

NehaPusalkar commented 4 years ago

Ok, I've put the files here: https://github.com/amiryanj/StanfordDroneDataset There is a README file that tells you how to parse the annotations.

@amiryanj ! I have a similar question and I am trying to access the link, but it is not working. Do you mind posting an updated link please?

amiryanj commented 4 years ago

@NehaPusalkar I have created a new repo: https://github.com/amiryanj/OpenTraj There, you can find SDD and many other datasets. For SDD we also estimated the scaling factors but some of them might be very inaccurate.

NehaPusalkar commented 4 years ago

@NehaPusalkar I have created a new repo: https://github.com/amiryanj/OpenTraj There, you can find SDD and many other datasets. For SDD we also estimated the scaling factors but some of them might be very inaccurate.

Thank you for sharing this. It is really very helpful!