Open kbonnen opened 2 years ago
"capture volume" refers to the 3D volume with enough overlap in the cameras' fields of view (FOV) to support 3D triangulation
To change the charuco square size: freemocap.RunMe(charucoSquareSize=145)
@jonmatthis Here is what I will refer to as recording 00 for you to review: https://drive.google.com/drive/folders/1EQLcPVv6odNit4S9WsM67ezgihsXZrMa?usp=sharing
@jonmatthis Fair warning... something seems wrong with 00's skeleton on the left.
yeah, something's screwy there -
First check - are you sure you're using the most up-to-date version, v0.0.54?
Also - could you download Blender (https://blender.org) and include useBlender=True in the RunMe?
import freemocap
freemocap.__version__
You can generate the blend file from this recording with -
import freemocap
freemocap.RunMe(sessionID = "sesh_2022-08-23_18_41_08", stage=5, useBlender=True)
(assuming it's still in your freemocap_data folder)
The hand traces look much cleaner than they should be given how wonky the 3d skeleton is, so it's possible that the data is good but the visualization is somehow broken
Update:
@jonmatthis Here is a recording and blender file produced by the fabulous @rachel-allison
You can find the animation video and the blender file in the repository: https://github.com/BonnenLab/freemocap-balance
sesh_2022-09-19_14_27_50
Let us know what you think.
Oh... and this is @kbonnen accidentally logged in as rachel ... 😄
Looking good! A few notes -
Looks like you forgot to set the charucoSquareSize value here, so your skeleton is smaller than real (the default value is 39 or something, which is the length of one of the black squares in mm when the board is printed on 8.5"x11" paper).
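To make the scale issue concrete: the triangulated world is scaled by whatever square size the calibration was told, so a wrong value shrinks (or grows) everything by the same ratio. A quick back-of-envelope check, with illustrative numbers:

```python
# Illustrative numbers only: if calibration assumed the default 39 mm squares
# but the printed board's squares actually measure 145 mm, every reconstructed
# length is shrunk by the same ratio.
assumed_square_mm = 39   # default charucoSquareSize
actual_square_mm = 145   # what the board really measures
scale_factor = assumed_square_mm / actual_square_mm
print(f"Skeleton comes out at ~{scale_factor:.2f}x its real size")
```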
The 'leave the charuco board statically in the FOVs of the cameras' strategy is fine enough, but I think it's better to dynamically give each camera a good view of the board. Imagine you're "painting" the image of each camera one by one (and then be sure that each camera has an overlapping FOV with at least 1-2 other cameras). Kinda like what I'm doing in the first part of this video - https://youtu.be/WW_WpMcbzns
2a. In the pre-alpha, you can use the use_saved_calibration keyword argument to the RunMe to use the most recent successful calibration to process future videos, so you can take recordings where you skip the calibration stuff
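A sketch of what that might look like. The session ID is illustrative, and I'm assuming the keyword is spelled exactly use_saved_calibration as above; actually running it requires a pre-alpha freemocap install, so the call itself is left commented out:

```python
# Hypothetical invocation sketch: reuse the most recent successful calibration
# so a new recording can skip the charuco calibration stage entirely.
run_kwargs = dict(
    sessionID="sesh_2022-09-19_14_27_50",  # illustrative session name
    use_saved_calibration=True,            # pre-alpha keyword mentioned above
)

# Requires a pre-alpha freemocap install:
# import freemocap
# freemocap.RunMe(**run_kwargs)
```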
You can run the resulting npy file through these ipynb's to clean the data and convert to pandas-friendly csv's
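For reference, the conversion those notebooks do can be sketched roughly like this. Everything here is an assumption for illustration: the file name, the (frames, markers, 3) array layout, and the column naming are mine, not freemocap's actual output:

```python
import numpy as np
import pandas as pd

# Stand-in for np.load("skeleton_3d.npy") -- assumed shape: (frames, markers, xyz)
skel = np.random.default_rng(0).normal(size=(10, 33, 3))
skel[2, 5, :] = np.nan  # simulate a tracking dropout

n_frames, n_markers, _ = skel.shape
# One row per frame, columns like marker_0_x, marker_0_y, marker_0_z, ...
columns = [f"marker_{m}_{ax}" for m in range(n_markers) for ax in "xyz"]
df = pd.DataFrame(skel.reshape(n_frames, -1), columns=columns)

# Minimal "cleaning": interpolate over NaN gaps, then write a pandas-friendly CSV
df = df.interpolate(limit_direction="both")
df.to_csv("skeleton_3d_cleaned.csv", index=False)
```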
You can start trying to play with the alpha GUI now, if you like! I'd be interested to hear how it fares when run on Mac or Linux: https://github.com/freemocap/freemocap#how-to-create-a-new-freemocap-recording-session
I kinda wish we were having this conversation in a more public place, like the freemocap Discord server or GitHub repo, so other people could benefit from it... Let me know if y'all would be comfortable moving this conversation to one of those places, but no worries if not :)
@jonmatthis Thanks. I think the charuco square thing happened because we have to switch computers to process the data with blender. Two follow-up questions:
@rachel-allison Let's move this convo to the Discord. We can re-post the most recent mp4 and blend file there with Jon's comments above. Then we'll post our updates from this week there as well. I'll see you on Tuesday.
Let's do a few things this week:
Actually @jonmatthis do you have a preference between github and discord?
I don't have much preference -
I like when video clips are posted to the Discord just to show other people that folks are using it, but Github is better for conversations and whatnot (things get lost on Discord)
I think I saw you drop the video in #freemocap-clips along with a link to the GitHub Issue on the freemocap repo. I think that was a pretty good balance 👍
(Did you delete that post though? I can't find it)
Also, GitHub pro-tip - if you use checkboxes for your TODO lists, the issue will track them as they are checked:
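For anyone who hasn't used them, GitHub's task-list syntax looks like this in an issue body (items here are illustrative):

```markdown
- [x] record calibration with a moving charuco board
- [ ] re-run RunMe with charucoSquareSize set
- [ ] post the blend file to Discord
```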
EDIT - It might need to be in the main Issue 'description' to track, but checkboxes in the comments are still helpful
see if you can use 3 cameras oriented in portrait mode covering roughly a 1m x 1m x 2m capture volume :D