yjc765 closed this issue 4 years ago.
I got the mocap data from mixamo.com, so it is different from DeepMimic. I trained a few different animations (walk, run, backflip); they take about two days on a high-end desktop to train.
Thank you for your reply! I checked mixamo.com; it is full of various animations. Could you please tell me how you downloaded the mocap data? I saw there are two formats available, .fbx and .dae. I downloaded both, but I cannot open the .fbx and the data in the .dae is not understandable. Which one did you download, and how did you convert it to fit the humanoid of your model?
Thank you very much!
Hi @yjc765, .fbx works fine for Unity. You can just drag it into the Unity Assets window, and don't forget to set its Animation Type to Humanoid under the Rig tab in the Inspector; then the animation you downloaded is available in Sohojoe's project!
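For reference, here is a minimal editor-script sketch (not from this thread or the repo) that applies the same Rig setting automatically when a model is imported. The Assets/Mixamo folder name and the MixamoHumanoidImporter class are just illustrative; setting it manually in the Inspector, as described above, works just as well.

```csharp
using UnityEditor;

// Editor-only sketch: auto-set imported Mixamo .fbx files to the Humanoid rig
// so their clips retarget onto the project's humanoid character.
// Place this under an Editor/ folder in the project.
public class MixamoHumanoidImporter : AssetPostprocessor
{
    void OnPreprocessModel()
    {
        // Only touch files dropped into Assets/Mixamo (hypothetical folder).
        if (!assetPath.Contains("/Mixamo/")) return;

        var importer = (ModelImporter)assetImporter;
        // Equivalent to Inspector > Rig > Animation Type = Humanoid.
        importer.animationType = ModelImporterAnimationType.Human;
    }
}
```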
@yjc765 - let us know if you have more issues - closing for now
Thanks! @BenJaminB1ue Do you think it's possible to convert the .fbx to a CSV file?
@BenJaminB1ue @Sohojoe Sorry, I still don't get how to collect the trajectory (frames of joint values) of a motion from mixamo.com. I downloaded the .fbx file and can import it as an asset in Unity, but I can't collect the data. Do you have a simple example or script of how to do this?
You reference the .fbx from the Animation Controller within the Unity Editor. This is a slightly different code base, but see Motion: MarathonManBackflip.
The MarathonManBackflip is an .fbx file exported from Mixamo.
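As a starting point for dumping joint values to CSV, here is a rough sketch (not code from the repo): attach it to a Humanoid-rigged character that is playing the imported Mixamo clip through its Animator Controller, and it records a couple of joint positions each frame. The class name, the chosen bones, and the trajectory.csv file name are made up for illustration.

```csharp
using System.IO;
using System.Text;
using UnityEngine;

// Sketch: log humanoid joint positions per frame while a clip plays,
// then write them out as CSV when the component is disabled.
public class JointRecorder : MonoBehaviour
{
    Animator animator;
    StringBuilder csv;

    void Start()
    {
        animator = GetComponent<Animator>();
        csv = new StringBuilder("time,hipsX,hipsY,hipsZ,headX,headY,headZ\n");
    }

    void LateUpdate()
    {
        // LateUpdate runs after the Animator has posed the skeleton for this frame.
        Vector3 hips = animator.GetBoneTransform(HumanBodyBones.Hips).position;
        Vector3 head = animator.GetBoneTransform(HumanBodyBones.Head).position;
        csv.AppendLine($"{Time.time},{hips.x},{hips.y},{hips.z},{head.x},{head.y},{head.z}");
    }

    void OnDisable()
    {
        File.WriteAllText(
            Path.Combine(Application.persistentDataPath, "trajectory.csv"),
            csv.ToString());
    }
}
```

Reading the pose back through Animator.GetBoneTransform means the same script works for any Humanoid-rigged character, regardless of the original bone names in the .fbx.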
@yjc765 you mean you want to collect the trajectory and extract its joint values from a .fbx file?
Hello @Sohojoe, thank you for the good repo! I have a small question: when you trained StyleTransfer002, you needed mocap data (realistic reference data) for training. I wonder where the mocap data is from? If it is from the DeepMimic repo, how did you convert the data to fit your human model?
Thank you very much!