MaKaRuiNah / SHENet


UCY video dataset for scene features #1

Open rashidch opened 1 year ago

rashidch commented 1 year ago

Hi, the UCY video data link is broken: https://graphics.cs.ucy.ac.cy/research/downloads/crowd-data

How can I get the video data for extracting scene features? Could you provide the UCY videos via Google Drive?

Thank you.

MaKaRuiNah commented 1 year ago

> Hi, the UCY video data link is broken: https://graphics.cs.ucy.ac.cy/research/downloads/crowd-data
>
> How can I get the video data for extracting scene features? Could you provide the UCY videos via Google Drive?
>
> Thank you.

I'm sorry to hear that you're having trouble accessing the link. I just tested it and was able to open it without any issues. Please try again; if you're still unable to access the video data, we can share it with you via Google Drive.

rashidch commented 1 year ago

It is working now, and I can download the data.

How did you evaluate on the ETH-UCY dataset? Did you follow the leave-one-scene-out data split from ref [16]?

[screenshots of the provided ETH split directory structure]

There are three directories in the ETH split of your provided dataset. The train and val directories contain sets from hotel, student, zara1, and zara2, but I am confused about the test set. How is it generated? Is it the same ETH set that was left out of training and used as the test set for this split, or is it separate test data from ETH?

rashidch commented 1 year ago

> Hi, the UCY video data link is broken: https://graphics.cs.ucy.ac.cy/research/downloads/crowd-data How can I get the video data for extracting scene features? Could you provide the UCY videos via Google Drive? Thank you.

> I'm sorry to hear that you're having trouble accessing the link. I just tested it and was able to open it without any issues. Please try again; if you're still unable to access the video data, we can share it with you via Google Drive.

It would be great if you could still provide the video frames and scene features so that we can train and test your model with the same data.

MaKaRuiNah commented 1 year ago

Hi, regarding the data split, we follow the same leave-one-scene-out split as previous works, except that we drop the data without video files. The test file is therefore the same ETH set that was left out of training. We use the processed data from Ynet, converting the world coordinates to image coordinates, and we ignore the "zara3", "students001", and "uni_examples" scenes, which have no video files.

For image frames and scene features, we use the script ./preprocess/video2image.py to save frames and a Swin Transformer to extract scene features. Currently, we do not save the scene features locally. If you would like to use scene features directly, it might be helpful to download them from next, which provides its own scene and visual features.
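For context, converting ETH/UCY world coordinates to image coordinates is typically done with the per-scene homography files distributed with the dataset. Below is a minimal sketch, not the repo's code: it assumes the conventional 3x3 H matrix that maps image pixels to world meters, so the world-to-image direction uses its inverse; the function name `world_to_image` and the `H.txt` path are ours for illustration.

```python
# Illustrative sketch only -- SHENet itself starts from Ynet's preprocessed data.
import numpy as np

def world_to_image(world_xy: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Project (N, 2) world coordinates into pixel coordinates.

    Assumes H is the 3x3 homography shipped with ETH/UCY that maps image
    pixels to world meters, so world -> image uses its inverse.
    """
    H_inv = np.linalg.inv(H)
    homog = np.hstack([world_xy, np.ones((len(world_xy), 1))])  # (N, 3) homogeneous
    pix = (H_inv @ homog.T).T
    return pix[:, :2] / pix[:, 2:3]  # de-homogenize

# Usage with a hypothetical per-scene homography file:
# H = np.loadtxt("eth/H.txt")
# pixels = world_to_image(world_trajectory, H)
```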
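And here is a minimal sketch of the two extraction steps the answer describes, assuming OpenCV for reading videos and a pretrained Swin model from timm as a stand-in for the repo's extractor; the stride, the `swin_tiny_patch4_window7_224` variant, the file names, and the function names are illustrative, not SHENet's actual configuration.

```python
# Illustrative sketch only -- not SHENet's video2image.py or its feature extractor.
import cv2
import timm
import torch
from pathlib import Path

# ImageNet normalization constants expected by timm's pretrained Swin models.
IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def video_to_frames(video_path: str, out_dir: str, stride: int = 10) -> None:
    """Save every `stride`-th frame of the video as a JPEG."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:
            cv2.imwrite(f"{out_dir}/{idx:06d}.jpg", frame)
        idx += 1
    cap.release()

@torch.no_grad()
def swin_scene_feature(frame_bgr, model, size: int = 224) -> torch.Tensor:
    """Embed one BGR frame into a pooled Swin feature vector."""
    rgb = cv2.cvtColor(cv2.resize(frame_bgr, (size, size)), cv2.COLOR_BGR2RGB)
    x = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    x = ((x - IMAGENET_MEAN) / IMAGENET_STD).unsqueeze(0)  # (1, 3, H, W)
    return model(x)  # (1, feature_dim); num_classes=0 gives pooled features

# Usage: dump frames from one (hypothetically named) UCY video, embed one frame.
model = timm.create_model("swin_tiny_patch4_window7_224", pretrained=True, num_classes=0)
model.eval()
video_to_frames("zara01.avi", "frames/zara01")
feat = swin_scene_feature(cv2.imread("frames/zara01/000000.jpg"), model)
```

Extracting features on the fly like this avoids storing large feature files locally, which matches the answer's note that the scene features are not saved to disk.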