open-mmlab / mmskeleton

An OpenMMLab toolbox for human pose estimation, skeleton-based action recognition, and action synthesis.
Apache License 2.0

How to make action recognition result visualization #398

Open SKBL5694 opened 3 years ago

SKBL5694 commented 3 years ago

How can I generate an action recognition GIF like the demo in `mmskeleton/demo/recognition`?

MaarufB commented 3 years ago

Bro, did you figure that out? I'm still trying to work out how to generate one.

SKBL5694 commented 3 years ago

> Bro, did you figure that out? I'm still trying to work out how to generate one.

Yes, I've nailed it, but I've already finished my work for this week. If you're interested, maybe I can share my experience with you during my working hours.

MaarufB commented 3 years ago

Yes bro, I'm very interested. I've been trying to find a source for that and would be thankful if you could show me how. I'm still working on it, but nothing has worked so far.

SKBL5694 commented 3 years ago

> Yes bro, I'm very interested. I've been trying to find a source for that and would be thankful if you could show me how. I'm still working on it, but nothing has worked so far.

I'm glad to help you with that if I can, but I'm not at my workplace now; I think I can help you in two days, because the whole process is a bit complicated. So I'd like to know whether you understand Chinese, or whether you use WeChat or other communication software that allows real-time chat; if so, it will make our communication more convenient.

MaarufB commented 3 years ago

@SKBL5694 Hello bro. Did you use OpenPose for action recognition with mmskeleton?

SKBL5694 commented 3 years ago

> @SKBL5694 Hello bro. Did you use OpenPose for action recognition with mmskeleton?

I do use OpenPose. Actually, I use the previous version of mmskeleton, called st-gcn, to visualize the results.

MaarufB commented 3 years ago

Is it possible to use the current mmskeleton to visualize the video result for action recognition? Also, the pose_demo for mmskeleton doesn't include action recognition.

SKBL5694 commented 3 years ago

> Is it possible to use the current mmskeleton to visualize the video result for action recognition? Also, the pose_demo for mmskeleton doesn't include action recognition.

Good question. Based on my understanding of this framework, I'd say probably not. In the old version (st-gcn) there is an off-the-shelf script that supports training, testing, and visualization (both real-time and offline); if you use mmskeleton, you cannot use this script.
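(For reference, the old repo's demos are launched through main.py's processor interface; the `demo_offline` processor also comes up later in this thread. The invocation below is an assumption based on that interface, so check the st-gcn README for the exact flags:)

```
# Assumed invocation of st-gcn's offline demo; flags may differ by version.
python main.py demo_offline --video path/to/video.mp4 --openpose path/to/openpose/build
```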

MaarufB commented 3 years ago

Ohh I see. Thanks for saving my time, bro. Can I use the st_gcn under the mmskeleton repo ("deprecated/origin_stgcn_repo")? Thanks once again, bro.

SKBL5694 commented 3 years ago

> Ohh I see. Thanks for saving my time, bro. Can I use the st_gcn under the mmskeleton repo ("deprecated/origin_stgcn_repo")? Thanks once again, bro.

I'm not sure about that. I did the visualization with https://github.com/yysijie/st-gcn.

MaarufB commented 3 years ago

Thanks for the info, bro. You've really helped me a lot.

SKBL5694 commented 3 years ago

> Thanks for the info, bro. You've really helped me a lot.

My pleasure.

MaarufB commented 3 years ago

Bro, how about training on my own dataset?

SKBL5694 commented 3 years ago

> Bro, how about training on my own dataset?

Two tips. First, you should know that st-gcn uses two kinds of datasets: Kinetics and NTU-RGB+D. The former has 18 joints per person with three coordinate channels, (x, y, c), where c is the confidence from the OpenPose output; the latter has 25 joints per person, with z as the third channel instead of confidence. Interestingly, the net accepts both. Though I'm a little confused by this design, the paper also mentions it in Section 3.2, paragraphs 2 and 3. Second, this means you can choose either format (18 joints with confidence, or 25 joints with z) to build your own dataset. You can refer to these two files, the NTU dataset and the Kinetics dataset, to build your own.
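(A minimal sketch of the two formats described above, assuming st-gcn's usual (N, C, T, V, M) tensor layout; the 300-frame length and the two-person cap are the repo's defaults:)

```python
# Minimal sketch of the two input formats, assuming st-gcn's
# (N, C, T, V, M) layout: batch, channels, frames, joints, persons.
import numpy as np

N, T, M = 1, 300, 2  # batch size, frames, max persons (st-gcn defaults)

# Kinetics-style sample: 18 OpenPose joints, channels (x, y, confidence).
kinetics_sample = np.zeros((N, 3, T, 18, M), dtype=np.float32)

# NTU-RGB+D-style sample: 25 joints, channels (x, y, z).
ntu_sample = np.zeros((N, 3, T, 25, M), dtype=np.float32)

# Either works, as long as the model's graph layout matches the joint
# count (18 -> 'openpose', 25 -> 'ntu-rgb+d').
```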

MaarufB commented 3 years ago

Sorry for the late response, bro. May I know how you built your own dataset? I was just following the given procedure for building one, but something wasn't right when I ran the training script.

SKBL5694 commented 3 years ago

> Sorry for the late response, bro. May I know how you built your own dataset? I was just following the given procedure for building one, but something wasn't right when I ran the training script.

Emm... I don't know about 'the given procedure on how to build your own dataset'. For me, I use OpenPose to extract joint data from my own videos (already extracted into frames), then I write a script to convert the OpenPose output into a binary file to feed the net. It's hard, painful work; I'm still working on this script too.
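(A rough sketch of the kind of conversion described above: read OpenPose's per-frame JSON files and stack them into one (C, T, V, M) sample. The 'people'/'pose_keypoints_2d' field names follow OpenPose's JSON output, but the paths and the COCO-style 18-joint assumption are illustrative only:)

```python
# Rough sketch: stack OpenPose per-frame JSON into one (C, T, V, M) sample.
import glob
import json
import numpy as np

T, V, M = 300, 18, 2  # frames, joints (COCO-style model assumed), max persons
data = np.zeros((3, T, V, M), dtype=np.float32)

# OpenPose writes one *_keypoints.json file per frame; path is illustrative.
frame_files = sorted(glob.glob('openpose_out/*_keypoints.json'))
for t, path in enumerate(frame_files[:T]):
    with open(path) as f:
        frame = json.load(f)
    for m, person in enumerate(frame['people'][:M]):
        # Flat list [x0, y0, c0, x1, y1, c1, ...] -> (num_joints, 3).
        kp = np.array(person['pose_keypoints_2d'], dtype=np.float32).reshape(-1, 3)
        data[:, t, :, m] = kp[:V].T  # channels (x, y, confidence)

np.save('my_sample.npy', data)  # one sample of the binary dataset
```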

MaarufB commented 3 years ago

Actually, I followed this one: https://github.com/open-mmlab/mmskeleton/blob/master/doc/CUSTOM_DATASET.md. It doesn't seem to work; it doesn't print the training loss.

SKBL5694 commented 3 years ago

> Actually, I followed this one: https://github.com/open-mmlab/mmskeleton/blob/master/doc/CUSTOM_DATASET.md. It doesn't seem to work; it doesn't print the training loss.

Do you mean that you have generated your own dataset's .json file like this?

```
{
  "info": {
    "video_name": "skateboarding.mp4",
    "resolution": [340, 256],
    "num_frame": 300,
    "num_keypoints": 17,
    "keypoint_channels": ["x", "y", "score"],
    "version": "1.0"
  },
  "annotations": [
    {
      "frame_index": 0,
      "id": 0,
      "person_id": null,
      "keypoints": [[x, y, score], [x, y, score], ...]
    },
    ...
  ],
  "category_id": 0
}
```
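(For reference, a minimal sketch of emitting a file in this format; the keypoint values are zero-filled placeholders so the sketch runs standalone, and you would substitute real pose-estimator output:)

```python
# Minimal sketch of writing an annotation file in the format quoted above
# (doc/CUSTOM_DATASET.md). Keypoints are placeholders; replace them with
# real pose-estimator output.
import json

info = {
    "video_name": "skateboarding.mp4",
    "resolution": [340, 256],
    "num_frame": 300,
    "num_keypoints": 17,
    "keypoint_channels": ["x", "y", "score"],
    "version": "1.0",
}

annotations = [
    {
        "frame_index": t,
        "id": t,
        "person_id": None,
        "keypoints": [[0.0, 0.0, 0.0]] * info["num_keypoints"],  # placeholder
    }
    for t in range(info["num_frame"])
]

with open("skateboarding.json", "w") as f:
    json.dump({"info": info, "annotations": annotations, "category_id": 0}, f)
```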

MaarufB commented 3 years ago

Yes bro.

SKBL5694 commented 3 years ago

> Yes bro.

And you are using mmskeleton, not st-gcn, to train, am I right?

MaarufB commented 3 years ago

Yes bro. Sorry, I want to try it, and if it doesn't work I'll use st-gcn.

SKBL5694 commented 3 years ago

> Yes bro. Sorry, I want to try it, and if it doesn't work I'll use st-gcn.

Relax, bro, I didn't mean to blame you. I am not familiar with mmskeleton, because I wanted to visualize my results and build a real-time recognition program, and mmskeleton doesn't support those operations. If you have these questions, I suggest searching for 'loss' in the issues; while searching for answers to my own problems, I've seen questions similar to yours (loss does not decrease). If you really want to train with mmskeleton and make visualizations, #291 may be helpful (though I could not follow those steps successfully).

MaarufB commented 3 years ago

Okay bro, thanks. I really got stuck on this and want to try your tips later. Thanks bro.

zren2 commented 3 years ago

> Bro, how about training on my own dataset?

> Two tips. First, you should know that st-gcn uses two kinds of datasets: Kinetics and NTU-RGB+D. The former has 18 joints per person with three coordinate channels, (x, y, c), where c is the confidence from the OpenPose output; the latter has 25 joints per person, with z as the third channel instead of confidence. Interestingly, the net accepts both. Though I'm a little confused by this design, the paper also mentions it in Section 3.2, paragraphs 2 and 3. Second, this means you can choose either format (18 joints with confidence, or 25 joints with z) to build your own dataset. You can refer to these two files, the NTU dataset and the Kinetics dataset, to build your own.

Hi bro, I have installed st-gcn and I can run the demo code, but when I try to run the demo with the NTU-RGB+D model I get the following errors:

```
Traceback (most recent call last):
  File "main.py", line 31, in <module>
    p = Processor(sys.argv[2:])
  File "/home/weilong/st-gcn/processor/io.py", line 28, in __init__
    self.load_weights()
  File "/home/weilong/st-gcn/processor/io.py", line 75, in load_weights
    self.arg.ignore_weights)
  File "/usr/local/lib/python3.6/dist-packages/torchlight-1.0-py3.6.egg/torchlight/io.py", line 89, in load_weights
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1045, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Model:
	size mismatch for A: copying a param with shape torch.Size([3, 25, 25]) from checkpoint, the shape in current model is torch.Size([3, 18, 18]).
	size mismatch for data_bn.weight: copying a param with shape torch.Size([75]) from checkpoint, the shape in current model is torch.Size([54]).
	size mismatch for data_bn.bias: copying a param with shape torch.Size([75]) from checkpoint, the shape in current model is torch.Size([54]).
	size mismatch for data_bn.running_mean: copying a param with shape torch.Size([75]) from checkpoint, the shape in current model is torch.Size([54]).
	size mismatch for data_bn.running_var: copying a param with shape torch.Size([75]) from checkpoint, the shape in current model is torch.Size([54]).
	size mismatch for edge_importance.0: copying a param with shape torch.Size([3, 25, 25]) from checkpoint, the shape in current model is torch.Size([3, 18, 18]).
	size mismatch for edge_importance.1: copying a param with shape torch.Size([3, 25, 25]) from checkpoint, the shape in current model is torch.Size([3, 18, 18]).
	size mismatch for edge_importance.2: copying a param with shape torch.Size([3, 25, 25]) from checkpoint, the shape in current model is torch.Size([3, 18, 18]).
	size mismatch for edge_importance.3: copying a param with shape torch.Size([3, 25, 25]) from checkpoint, the shape in current model is torch.Size([3, 18, 18]).
	size mismatch for edge_importance.4: copying a param with shape torch.Size([3, 25, 25]) from checkpoint, the shape in current model is torch.Size([3, 18, 18]).
	size mismatch for edge_importance.5: copying a param with shape torch.Size([3, 25, 25]) from checkpoint, the shape in current model is torch.Size([3, 18, 18]).
```

Do you know what the problem is? I am also a Chinese student; if it's convenient, please add me on WeChat (13735698625). Thanks!
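(The shapes in that error suggest the checkpoint holds the 25-joint NTU-RGB+D graph while the current model was built with the 18-joint OpenPose layout. Below is a hedged sketch of constructing a model that matches such a checkpoint, assuming st-gcn's net.st_gcn.Model interface; the checkpoint path is hypothetical:)

```python
# Hedged sketch: build the model with the 25-joint NTU graph so its
# adjacency A has shape (3, 25, 25), matching the checkpoint.
# Assumes the st-gcn repo is on the path and its Model signature.
import torch
from net.st_gcn import Model

model = Model(
    in_channels=3,
    num_class=60,  # NTU-RGB+D has 60 action classes
    graph_args={"layout": "ntu-rgb+d", "strategy": "spatial"},
    edge_importance_weighting=True,
)
weights = torch.load("ntu_checkpoint.pt", map_location="cpu")  # hypothetical path
model.load_state_dict(weights)
```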

ChalsonLee commented 3 years ago

> Bro, how about training on my own dataset?

> Two tips. First, you should know that st-gcn uses two kinds of datasets: Kinetics and NTU-RGB+D. The former has 18 joints per person with three coordinate channels, (x, y, c), where c is the confidence from the OpenPose output; the latter has 25 joints per person, with z as the third channel instead of confidence. Interestingly, the net accepts both. Though I'm a little confused by this design, the paper also mentions it in Section 3.2, paragraphs 2 and 3. Second, this means you can choose either format (18 joints with confidence, or 25 joints with z) to build your own dataset. You can refer to these two files, the NTU dataset and the Kinetics dataset, to build your own.

In Kinetics, the three coordinate channels are (x, y, c); why are x and y less than 1? Looking forward to your reply, thanks.

SKBL5694 commented 3 years ago

> In Kinetics, the three coordinate channels are (x, y, c); why are x and y less than 1? Looking forward to your reply, thanks.

Do you mean st-gcn?

ChalsonLee commented 3 years ago

> In Kinetics, the three coordinate channels are (x, y, c); why are x and y less than 1? Looking forward to your reply, thanks.

> Do you mean st-gcn?

Yes, when I train st-gcn on the Kinetics dataset.

SKBL5694 commented 3 years ago

> In Kinetics, the three coordinate channels are (x, y, c); why are x and y less than 1? Looking forward to your reply, thanks.

> Do you mean st-gcn?

> Yes, when I train st-gcn on the Kinetics dataset.

Maybe this will be helpful: https://github.com/yysijie/st-gcn/blob/221c0e152054b8da593774c0d483e59befdb9061/processor/demo_offline.py#L143-L147
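(Roughly what those linked lines do, as a paraphrased sketch rather than a verbatim copy: pixel coordinates are divided by the frame size and then centered, which is why the stored x and y values end up smaller than 1:)

```python
# Paraphrased sketch of st-gcn's demo normalization (not a verbatim copy).
import numpy as np

def normalize_pose(pose, w, h):
    """pose: (num_person, num_joint, 3) array of (x_pixel, y_pixel, score)."""
    pose = pose.astype(np.float32).copy()
    pose[:, :, 0] /= w                       # x -> [0, 1]
    pose[:, :, 1] /= h                       # y -> [0, 1]
    pose[:, :, 0:2] -= 0.5                   # center -> [-0.5, 0.5]
    pose[:, :, 0][pose[:, :, 2] == 0] = 0    # zero out undetected joints
    pose[:, :, 1][pose[:, :, 2] == 0] = 0
    return pose
```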

ChalsonLee commented 3 years ago

> In Kinetics, the three coordinate channels are (x, y, c); why are x and y less than 1? Looking forward to your reply, thanks.

> Do you mean st-gcn?

> Yes, when I train st-gcn on the Kinetics dataset. Maybe this will be helpful: https://github.com/yysijie/st-gcn/blob/221c0e152054b8da593774c0d483e59befdb9061/processor/demo_offline.py#L143-L147

Thanks a lot.

SKBL5694 commented 3 years ago

> Thanks a lot.

:)

d0683228 commented 3 years ago

@SKBL5694 Hi, I've been studying st-gcn recently and have already trained a new model on my own dataset (with mmskeleton). Now I want to visualize the video result with my model's .pth file (with the old version of st-gcn), but I don't know the actual steps. Can you provide more detailed steps for me to follow? Thanks.

SKBL5694 commented 3 years ago

> @SKBL5694 Hi, I've been studying st-gcn recently and have already trained a new model on my own dataset (with mmskeleton). Now I want to visualize the video result with my model's .pth file (with the old version of st-gcn), but I don't know the actual steps. Can you provide more detailed steps for me to follow? Thanks.

Sorry, I don't use the mmskeleton framework; I use st-gcn to train and visualize the results.

Donny2333 commented 3 years ago

@SKBL5694 Thanks for your guidance. I have successfully trained and tested on my own dataset (with mmskeleton). Here is my test result; it's pretty cool.

Top 1: 97.86% Top 5: 100.00%

But when I use st-gcn to visualize, it makes obvious mistakes, even on videos it was trained on. E.g., I feed the model a video of someone waving his hand to the left, but the vote doesn't show the WAVE_LEFT label.

I'm puzzled. Did I miss something? Hoping for your advice.

SKBL5694 commented 3 years ago

@Donny2333 I guess you used the test config file named 'demo_offline', am I right?

SKBL5694 commented 3 years ago

@Donny2333 BTW, I may not be able to reply to GitHub messages in time. If you are in a hurry, you can add my WeChat (see my profile); if not, we can discuss it here for future reference. It's up to you.

Donny2333 commented 3 years ago

@SKBL5694 Fantastic! My email is aiiili@qq.com. Maybe it's not convenient to leave our personal information here; I hope you understand.

Liqi255 commented 3 years ago

Hello, I have been studying mmskeleton and st-gcn recently, and your answers give me hope. I urgently need your help and have many questions to ask. It may not be particularly convenient to use GitHub; maybe I can add you on WeChat or email you. Thank you again!

zren2 commented 3 years ago

Hi, you'd better add my WeChat (13735698625); I use it every day.

Yong.

On Wed, Sep 1, 2021 at 8:54 AM Liqi255 @.***> wrote:

> Hello, I have been studying mmskeleton and st-gcn recently, and your answers give me hope. I urgently need your help and have many questions to ask. It may not be particularly convenient to use GitHub; maybe I can add you on WeChat or email you. Thank you again!



lakpa-tamang9 commented 2 years ago

How many FPS did you get in real-time testing?

zren2 commented 2 years ago

Around 20.

On Sep 12, 2022, at 8:58 PM, lakpa-tamang9 @.***> wrote:

> How many FPS did you get in real-time testing?



xiaoming970115 commented 1 year ago

@zren2 Would you tell me how to visualize using HRNet instead of the OpenPose API? I need your help.