-
Thank you for your contributions. I have a question which I hope you can help me answer.
The `dual_channel_image.shape` I saved with sitk is (275, 386, 386, 386, 2), so why is the shape I read…
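For reference, a minimal SimpleITK round-trip sketch (filenames and the array shape are placeholders, not the actual data) showing the two usual causes of a shape mismatch: the channel axis being stored as an extra spatial dimension when `isVector=True` is not passed to `GetImageFromArray`, and `GetArrayFromImage` returning axes in (z, y, x[, c]) order while `GetSize()` reports (x, y, z):

```python
import numpy as np
import SimpleITK as sitk

# Placeholder two-channel volume with the channel axis last: (z, y, x, channels).
arr = np.zeros((275, 386, 386, 2), dtype=np.float32)

# isVector=True tells SimpleITK to treat the trailing axis as pixel components
# rather than a fourth spatial dimension.
img = sitk.GetImageFromArray(arr, isVector=True)
print(img.GetSize())                         # (386, 386, 275) -- x, y, z order
print(img.GetNumberOfComponentsPerPixel())   # 2

sitk.WriteImage(img, "dual_channel.nrrd")    # placeholder filename

# GetArrayFromImage reverses the spatial axes again, so the round-tripped
# shape matches the original numpy layout.
back = sitk.GetArrayFromImage(sitk.ReadImage("dual_channel.nrrd"))
print(back.shape)                            # (275, 386, 386, 2)
```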
-
Hello, what is the training data set? Where can I download it?
For example, the default `'../../data/landmarks/landmarks_full_val'` in the code.
-
Nice work! Thank you for open-sourcing it! 🙂
I have noticed that the face movements are quite strange in the mesh sequence output by the audio2mesh model. For your information, you can see this video to see t…
-
## Introduction
Hello. I am pleased to share some brainstorming towards advancing the state of the art with respect to educational exercises and activities, e.g., homework, quiz, and exam items, se…
-
Hi, I have downloaded the data from the source you provided, and it seems that the h5 file in the test data does not contain the `face_patch` key. I tried to open the h5 file, and it only has the key landmar…
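For reference, a minimal h5py sketch (the filename is a placeholder) that lists every group and dataset in the file, which makes it easy to check whether `face_patch` is really missing:

```python
import h5py

# Placeholder filename; point this at the downloaded test file.
with h5py.File("test_data.h5", "r") as f:
    print("top-level keys:", list(f.keys()))

    def show(name, obj):
        # Print every dataset path together with its shape and dtype.
        if isinstance(obj, h5py.Dataset):
            print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")

    f.visititems(show)
```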
-
### Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
Yes
### OS Platform and Distribution
macOS 12.6
### MediaPipe Tasks SDK version
npm pac…
-
Create datasets that can be used for consistent DeepSSM benchmarking; a possible per-subject record is sketched after the list below.
Training data must include:
- Original and groomed images
- Original and groomed shapes (meshes or binary segmentations)
- …
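As a sketch only, one such training sample might be organized like this (field names are illustrative and not an existing ShapeWorks/DeepSSM API):

```python
from dataclasses import dataclass
from pathlib import Path

# Illustrative sketch of one benchmark training sample; names are hypothetical.
@dataclass
class BenchmarkSample:
    subject_id: str
    original_image: Path   # raw volume as distributed
    groomed_image: Path    # resampled/cropped volume used for training
    original_shape: Path   # mesh or binary segmentation
    groomed_shape: Path    # aligned/groomed counterpart
```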
-
Hello,
I have been using the YOLO face detection dataset and came across an interesting observation regarding the landmark confidence scores.
I noticed that the confidence scores for landmarks are m…
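A small NumPy sketch of how the per-landmark confidence distribution could be summarized, assuming the detections have already been parsed into an (N, K) array of landmark confidences (the parsing step depends on the exact YOLO-face variant and is omitted here):

```python
import numpy as np

# Hypothetical array: N detections x K landmark confidence scores.
landmark_conf = np.load("landmark_confidences.npy")  # placeholder file

print("per-landmark mean:   ", landmark_conf.mean(axis=0))
print("per-landmark min/max:", landmark_conf.min(axis=0), landmark_conf.max(axis=0))
print("fraction below 0.5:  ", (landmark_conf < 0.5).mean(axis=0))
```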
-
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-…> in <module>()
      1 # Blend the ID and expression …
-
Sorry to bother you. Right now I am using this drift project for state estimation on my biped robot, but I have found that there are significant delays in my estimated orientation and COM position in the world frame. What can I do to…
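One generic way to quantify such a delay (this is not part of the drift API, just a diagnostic sketch) is to cross-correlate an estimated signal against a time-aligned reference such as motion capture and read off the lag:

```python
import numpy as np

def estimate_lag_seconds(reference, estimate, dt):
    """Return how far `estimate` lags behind `reference`, in seconds."""
    ref = reference - np.mean(reference)
    est = estimate - np.mean(estimate)
    corr = np.correlate(est, ref, mode="full")
    shift = np.argmax(corr) - (len(ref) - 1)  # positive => estimate lags
    return shift * dt

# Synthetic check: a copy of a 0.5 Hz sine delayed by 5 samples (50 ms).
dt = 0.01
t = np.arange(0.0, 10.0, dt)
ref = np.sin(2 * np.pi * 0.5 * t)
est = np.roll(ref, 5)
print(estimate_lag_seconds(ref, est, dt))  # ~0.05 s
```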