zql1314 opened this issue 5 years ago
Hey, the FDD dataset (short for Fall Detection Dataset) is at this link: http://le2i.cnrs.fr/Fall-detection-Dataset
Note that some of this dataset is not labelled at all, so you either have to label those videos yourself or just go with the labelled ones (assuming you are doing supervised learning). Also, the labelling is not always very clear and can occasionally be incorrect (not often, but it does happen), so I went in and manually fixed whatever I found to be wrong.
There are also 2 other fall video datasets: Multicam at http://www.iro.umontreal.ca/~labimage/Dataset/ and URFD at http://fenix.univ.rzeszow.pl/~mkepski/ds/uf.html . They are not really labelled frame by frame though.
Thank you very much!
Hey, thanks for your help! I downloaded the dataset, but I cannot figure out how to run utils.py to get fdd.hdf5. If you could share the file or provide some guidance, it would be very helpful to me. Thanks again!
Yeah, it is definitely not clear at all. I will need to update the code to make it more usable for others. But the big idea is to split the videos into fall frames and non-fall frames. You will need to generate both the original frames from the videos and the corresponding motion history image frames. If you can figure it out, that would be a great learning experience. It will take me a while before I can get around to updating the code, but I will try to make it happen some time this summer.
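To make that concrete, here is a minimal sketch of the idea for a single annotated video: label each frame as fall/non-fall from a frame range, build a simple frame-differencing motion history image, and write everything into an HDF5 file. The video name, fall frame range, and HDF5 dataset names below are made-up placeholders, not the actual format utils.py produces.

```python
# Hedged sketch only: file names, the fall frame range, and the HDF5 layout are
# assumptions for illustration, not the actual logic of utils.py in this repo.
import cv2
import h5py
import numpy as np

VIDEO_PATH = "video_01.avi"         # placeholder FDD video file name
FALL_START, FALL_END = 120, 150     # placeholder fall frame range from the annotation
MHI_DURATION = 20                   # how many frames a moving pixel stays "hot"

cap = cv2.VideoCapture(VIDEO_PATH)
frames, mhis, labels = [], [], []
prev_gray, mhi = None, None

idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if mhi is None:
        mhi = np.zeros_like(gray, dtype=np.float32)
    else:
        # simple motion history image: set moving pixels to the max value, decay the rest
        motion = (cv2.absdiff(gray, prev_gray) > 25).astype(np.float32)
        mhi = np.where(motion > 0, MHI_DURATION, np.maximum(mhi - 1, 0))
    prev_gray = gray

    frames.append(cv2.resize(frame, (224, 224)))
    mhis.append(cv2.resize((mhi / MHI_DURATION * 255).astype(np.uint8), (224, 224)))
    labels.append(1 if FALL_START <= idx <= FALL_END else 0)   # 1 = fall frame
    idx += 1
cap.release()

# write everything into one HDF5 file (dataset names are assumptions)
with h5py.File("fdd.hdf5", "w") as f:
    f.create_dataset("rgb", data=np.array(frames, dtype=np.uint8))
    f.create_dataset("mhi", data=np.array(mhis, dtype=np.uint8))
    f.create_dataset("label", data=np.array(labels, dtype=np.int64))
```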
Thanks for your help! I got it working. But I find it cannot identify falls in a multi-person scene, and it cannot distinguish sitting from falling. What can I do to improve the program? Thank you!
Multi-person: The FDD dataset only has a single actor in each video, so if you want to do multi-person fall detection, the first thing you will need to do is find/create a dataset with multiple actors per video. This is a challenging task, and I don't think there's any dataset like that out there at the moment. Assuming you have such a dataset, I think the next step is to figure out how to represent the information in such a way that it is clear which person is falling. Using the original frames is one way, but using motion history images (or optical flow) might be challenging because you can't really distinguish between the blobs.
Sitting vs Falling: This is a good issue that I have been trying to explore more. I think more data with a diverse range of actions will be needed to make the model really robust. You can try to augment the FDD dataset (e.g., flipping, rotating; see the sketch at the end of this reply), or you can try to frame this as a multi-task learning problem. In the end, I don't think the FDD data is big enough, so good data generation is crucial. If you only use the videos in the FDD dataset, the model can distinguish between sitting and falling, but it will occasionally make mistakes.
These are just my hypotheses, but I hope they help. The current model works with the current dataset, but going beyond that is not going to be easy given how small it is. If you manage to solve these issues, I'd love to hear how you did it!
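For the augmentation idea, a minimal sketch is below; the file names and parameters are placeholders. Note that whatever transform you apply to an RGB frame should also be applied to its corresponding motion history image so the two streams stay aligned.

```python
# Hedged sketch of the flip/rotate augmentation idea; paths, angles, and the
# helper name are illustrative, not part of this repository.
import cv2

def augment_frame(frame, angle_deg=10.0):
    """Return a horizontally flipped copy and a slightly rotated copy of a frame."""
    flipped = cv2.flip(frame, 1)  # 1 = flip around the vertical axis

    h, w = frame.shape[:2]
    rot_mat = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    rotated = cv2.warpAffine(frame, rot_mat, (w, h), borderMode=cv2.BORDER_REFLECT)
    return flipped, rotated

# example: augment one frame loaded from disk (path is a placeholder)
frame = cv2.imread("fall_frame_0001.png")
if frame is not None:
    flipped, rotated = augment_frame(frame)
    cv2.imwrite("fall_frame_0001_flip.png", flipped)
    cv2.imwrite("fall_frame_0001_rot.png", rotated)
```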
Thank you very much for your help!
Most of the FDD download links do not work (besides "office 2", which is much smaller than the rest). Do you have the dataset downloaded, and would you be kind enough to upload it somewhere? 😃
@vietdzung
Hello @vietdzung, I am facing the same problem as @pasandrei. Can you provide a Google Drive link to the dataset? Thank you ☺️
Hello @pasandrei @pani-bot, sorry for the late response. Here's my link: https://drive.google.com/drive/folders/1psUmqQmZMePXpWZWXQ2ObgDHI4Z5z2i1?usp=sharing
Note that not everything is labelled.
Thank you very much :D
Thank you so much @vietdzung
Hello @pasandrei, can you share this FDD dataset again? Thank you so much.
@HHungg I reuploaded it. Here's the link: https://drive.google.com/drive/folders/19KTp4-0Q4RL7MRsd0Gqxbt-1oKA-pbeY?usp=sharing
Hello @vietdzung, I'm sorry, but I cannot download the files. Drive says the download quota for these files has been exceeded! Is there another place or another link for these files?
Hi @vietdzung, thanks for your work, but the FDD dataset website links do not work and your Google Drive link is not available. Can you reupload the dataset? Thanks a lot.
It seems there is no Annotations_all.txt.
@vietdzung
@vietdzung are there frame-by-frame labeled versions of the following two?
There are also 2 other fall video datasets: Multicam at http://www.iro.umontreal.ca/~labimage/Dataset/ and URFD at http://fenix.univ.rzeszow.pl/~mkepski/ds/uf.html . They are not really labeled frame by frame though.
New link for those looking at this in 2022: https://drive.google.com/drive/folders/1v-fTxzRH4PLWKIyd76kLQPt9eFJ92N5j?usp=sharing
@phatk legend!
Could you tell me which dataset you use? I cannot find the download link for the FDD dataset; could you share it with me? Thank you!