mariolew opened this issue 5 years ago
It's not implemented yet, but I have implemented it myself. How about an exchange? You give me ASI, and I'll give you that.
@Zju-George I've already converted the result motion so that I can use it to train the policy. I obtained the root translation myself, but I don't think it's good enough, so now I want to know how the author computes the root translation.
I have also obtained the root translation using an optimization method. Maybe we can compare the results on the same video clip? Also, did you implement ASI? Does it improve things a lot?
Yes, I've implemented ASI. It converges much faster than RSI and obtains good results on some easy clips (the ones shown in the paper), but it still fails if the reconstructed motion is not good enough. Maybe you can show me some results on caixukun?
I am amazed that you know caixukun! Hahaha. But I want to share something more difficult with you. When the camera is neither moving nor rotating, the method is robust, and I have also figured out that when the camera only rotates, I can still obtain the root translation.
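(For context, one way a rotation-only camera can be handled, not necessarily exactly what was done here: pure camera rotation induces a homography between frames, so the detected 2D joints can be warped back into a fixed reference view and the camera can then be treated as static. A rough sketch, assuming background feature points bg_prev / bg_cur have already been tracked between the reference frame and the current frame:)

```python
# Illustrative sketch only, not the code used in this thread.
# bg_prev / bg_cur: Nx2 background feature points tracked between the
# reference frame and the current frame (e.g. with cv2.calcOpticalFlowPyrLK).
# joints2d_cur: detected 2D joints in the current frame.
import cv2
import numpy as np

def stabilize_joints(bg_prev, bg_cur, joints2d_cur):
    # For a purely rotating camera the frame-to-frame mapping is a
    # homography H = K R K^-1; warping by H^-1 removes the rotation.
    H, _ = cv2.findHomography(bg_prev, bg_cur, cv2.RANSAC)
    pts = joints2d_cur.reshape(-1, 1, 2).astype(np.float32)
    warped = cv2.perspectiveTransform(pts, np.linalg.inv(H))
    return warped.reshape(-1, 2)
```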
But I also encounter one big problem: we have no prior constraint on the ankle joints, and when the ankle joints are not touching the ground or have some weird rotation, it is very hard to train the policy. I think this is something ASI cannot handle, so do you have any ideas about it?
Hi, we are trying to reproduce the results from the paper and are stuck at the export phase between SFV and DeepMimic. We have a working BVH to DeepMimic translator, but we are not sure about the format of the BVH file. There is a line in refine_video.py that invokes a function write2bvh, but it is not included or mentioned anywhere else. Thank you for your help.
Hi @Zju-George, I am wondering how you calculated the global root translation you mentioned above? Can you explain how you did that? Thank you very much!
I estimate the camera movement first (I use OpenCV, but of course you can use any camera tracking software like PFTrack), and then I optimize the 2D joint loss using the camera information.
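Roughly, that second step can be sketched like this (illustrative only, not the actual code: it assumes the camera intrinsics K and the per-frame extrinsics R, t are already known from the tracking step, joints3d holds root-relative 3D joint predictions, and joints2d holds the detected 2D joints):

```python
# Hedged sketch of "optimize the 2D joint loss given the camera info".
# K: 3x3 intrinsics; R, t: camera rotation/translation from tracking;
# joints3d: (J, 3) root-relative 3D joints; joints2d: (J, 2) detections.
import numpy as np
from scipy.optimize import minimize

def project(points3d, K, R, t):
    # Pinhole projection: world -> camera -> image coordinates.
    cam = (R @ points3d.T).T + t
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

def solve_root_translation(joints3d, joints2d, K, R, t, root_init):
    # Least-squares reprojection loss, optimized over the root translation only.
    def loss(root):
        return np.sum((project(joints3d + root, K, R, t) - joints2d) ** 2)
    return minimize(loss, root_init, method="Powell").x
```

Solving this per frame (warm-started from the previous frame's solution) gives a global root trajectory, up to the usual scale ambiguity.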
> There's a line in refine_video.py that invokes a function write2bvh, but it's not included or mentioned anywhere else.
Check out Zju-George/DeepMimic#2!
> I have obtained the root translation myself, but I think it's not good enough.
Hi @mariolew, how did you obtain the root translation? I couldn't figure out how to estimate the root position. Can you give me any clues? Thanks very much!
Hi, I noticed that the line '# from jason.bvh_core import write2bvh' has been commented out, so how do we write to BVH?
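For anyone else stuck here: a BVH file is just plain text with a HIERARCHY block describing the skeleton and a MOTION block listing one line of channel values per frame, so it is not hard to write your own exporter. Below is a minimal sketch (this is not the missing write2bvh from jason.bvh_core; the skeleton names and offsets are purely illustrative):

```python
# Minimal illustrative BVH writer, not the missing jason.bvh_core.write2bvh.
import numpy as np

# (name, parent_index, offset); parent_index -1 marks the root.
SKELETON = [
    ("Hips",      -1, (0.0,  0.0, 0.0)),
    ("Spine",      0, (0.0, 10.0, 0.0)),
    ("LeftUpLeg",  0, (8.0, -5.0, 0.0)),
    ("RightUpLeg", 0, (-8.0, -5.0, 0.0)),
]

def _write_joint(f, idx, depth):
    name, _, offset = SKELETON[idx]
    pad = "  " * depth
    tag = "ROOT" if depth == 0 else "JOINT"
    f.write(f"{pad}{tag} {name}\n{pad}{{\n")
    f.write(f"{pad}  OFFSET {offset[0]} {offset[1]} {offset[2]}\n")
    if depth == 0:
        f.write(f"{pad}  CHANNELS 6 Xposition Yposition Zposition "
                "Zrotation Xrotation Yrotation\n")
    else:
        f.write(f"{pad}  CHANNELS 3 Zrotation Xrotation Yrotation\n")
    children = [i for i, (_, p, _) in enumerate(SKELETON) if p == idx]
    if children:
        for c in children:
            _write_joint(f, c, depth + 1)
    else:
        f.write(f"{pad}  End Site\n{pad}  {{\n{pad}    OFFSET 0 0 0\n{pad}  }}\n")
    f.write(f"{pad}}}\n")

def write_bvh(path, frames, frame_time=1.0 / 30.0):
    # frames: (n_frames, n_channels) array of channel values in the
    # depth-first joint order of the HIERARCHY block: root translation and
    # rotation first, then three Euler angles per remaining joint.
    with open(path, "w") as f:
        f.write("HIERARCHY\n")
        _write_joint(f, 0, 0)
        f.write("MOTION\n")
        f.write(f"Frames: {len(frames)}\n")
        f.write(f"Frame Time: {frame_time}\n")
        for row in np.asarray(frames):
            f.write(" ".join(f"{v:.6f}" for v in row) + "\n")
```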