-
- [ ] Currently we hard-code 75 m as the maximum distance from a landmark to its respective edge when building data. On top of that, we might want to let the user decide the cut-off distance where we use l…
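A minimal sketch of how the hard-coded 75 m cut-off could become a user-configurable parameter with the current value as the default; the function and data shape are illustrative, not the project's actual code:

```python
# Hypothetical sketch: expose the hard-coded 75 m landmark cut-off as a
# parameter instead of a constant. Names and data shapes are illustrative.
DEFAULT_MAX_LANDMARK_DISTANCE_M = 75.0

def filter_landmarks(landmarks, max_distance_m=DEFAULT_MAX_LANDMARK_DISTANCE_M):
    """Keep only landmarks within max_distance_m of their edge.

    `landmarks` is assumed to be an iterable of (landmark_id, distance_m) pairs.
    """
    return [(lid, d) for lid, d in landmarks if d <= max_distance_m]

# Default behaviour matches the current hard-coded 75 m:
print(filter_landmarks([("a", 50.0), ("b", 80.0)]))         # → [('a', 50.0)]
# A user-supplied cut-off overrides it:
print(filter_landmarks([("a", 50.0), ("b", 80.0)], 100.0))
```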
-
I have two datasets: one is a landmark-labeled dataset and the other is an unlabeled video dataset. Can I use supervision-by-registration to train on my own dataset?
I try each batch containing one land…
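One way to mix the two datasets per batch (a generic semi-supervised batching sketch, not the supervision-by-registration code itself; all names are illustrative) is to draw a fixed number of labeled and unlabeled samples into every batch:

```python
import itertools

def mixed_batches(labeled, unlabeled, batch_size=4):
    """Yield batches pairing labeled samples with unlabeled video frames.

    Each batch holds batch_size labeled items and batch_size unlabeled items,
    so a supervised loss and a registration-style loss can both be computed.
    The shorter dataset is cycled so neither stream is starved.
    """
    lab = itertools.cycle(labeled)
    unl = itertools.cycle(unlabeled)
    n_batches = max(len(labeled), len(unlabeled)) // batch_size
    for _ in range(n_batches):
        yield ([next(lab) for _ in range(batch_size)],
               [next(unl) for _ in range(batch_size)])

# 8 labeled samples, 12 unlabeled frames → 12 // 4 = 3 mixed batches
batches = list(mixed_batches(list(range(8)), list(range(100, 112)), batch_size=4))
```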
-
![figure_1_68](https://cloud.githubusercontent.com/assets/122117/23169789/0fef26a2-f81b-11e6-9a0c-8b88085af734.jpg)
-
Currently, every time a user pans or zooms the map, the new position and zoom are set into the shared model and eventually saved to file, but the UI doesn't react to these changes. It seems that this …
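A minimal sketch, assuming a plain Python model object, of how the UI could subscribe to position/zoom changes so it reacts whenever the shared model is updated (all class and method names here are hypothetical):

```python
class MapModel:
    """Shared model holding map position and zoom; notifies listeners on change."""
    def __init__(self):
        self.position = (0.0, 0.0)
        self.zoom = 1.0
        self._listeners = []

    def subscribe(self, callback):
        """Register a callback invoked with (position, zoom) on every change."""
        self._listeners.append(callback)

    def set_view(self, position, zoom):
        self.position, self.zoom = position, zoom
        for cb in self._listeners:   # push the change to every subscriber
            cb(position, zoom)

events = []
model = MapModel()
model.subscribe(lambda pos, zoom: events.append((pos, zoom)))  # the UI's handler
model.set_view((48.85, 2.35), 12.0)  # e.g. triggered by a pan/zoom being loaded
```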
-
```
got prompt
'🔥 - 11 Nodes not included in prompt but is activated'
codeformer 24.0 video/h265-mp4
video h265-mp4
{'enhancer': 'codeformer', 'frame_enhancer': 'real_esrgan_x2', 'face_mask_padding_le…
```
-
```
=> loaded train set, 61161 images were found
Mean: 0.0000, 0.0000, 0.0000
Std: 0.0000, 0.0000, 0.0000
=> Epoch: 1 | LR 0.00025000
```
I am not using the whole dataset.
Why are the mean and std both 0.0000?
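For comparison, a minimal numpy sketch of computing per-channel mean and std over a batch of images; on real (non-blank) image data these should not come out as 0.0000, so zeros usually mean the stats were computed over empty or uninitialized data:

```python
import numpy as np

def channel_stats(images):
    """Per-channel mean/std over a batch of HxWxC images with values in [0, 1]."""
    data = np.stack(images).astype(np.float64)   # shape (N, H, W, C)
    mean = data.mean(axis=(0, 1, 2))
    std = data.std(axis=(0, 1, 2))
    return mean, std

# Two tiny fake 2x2 RGB "images"; values chosen by hand for the example.
imgs = [np.full((2, 2, 3), 0.5), np.full((2, 2, 3), 0.25)]
mean, std = channel_stats(imgs)
print(mean)  # [0.375 0.375 0.375]
```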
-
### Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
No
### OS Platform and Distribution
Ubuntu
### MediaPipe Tasks SDK version
Holistic
### Task nam…
-
**Description**
I am testing sending the output of one model as input to my Python backend for post-processing (I will eventually build an ensemble later).
The problem I am having is that t…
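As a generic illustration (not the serving framework's own API), post-processing a model's raw output tensor in a Python backend often amounts to something like a stable softmax followed by an argmax; a minimal numpy sketch:

```python
import numpy as np

def postprocess(logits):
    """Turn raw model logits into (predicted_class, probability)."""
    logits = np.asarray(logits, dtype=np.float64)
    exp = np.exp(logits - logits.max())   # subtract max for numerical stability
    probs = exp / exp.sum()
    idx = int(probs.argmax())
    return idx, float(probs[idx])

idx, p = postprocess([0.1, 2.0, 0.3])
# class 1 has the highest logit, so idx == 1
```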
-
Hi, my issue is the following:
I'm feeding the Neural Network node input data in NNData format:
```
import time
import depthai as dai  # imports assumed for the snippet below

body_time = time.monotonic()
frame_nn = frame_nn / 255.
nn_data = dai.NNData()
n…
```
-
Can you please give me more explanation about preparing the data for training the model from scratch?
I want to use VGGFace2 for the first step. Do I need to generate landmarks for every image and save t…
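A hedged sketch of one common way to precompute and cache landmarks per image before training; `detect_landmarks` here is a hypothetical stand-in for whatever detector the project actually uses:

```python
import os
import numpy as np

def detect_landmarks(image_path):
    # Hypothetical stand-in: a real pipeline would run a face-landmark
    # detector here and return an (N, 2) array of (x, y) points.
    return np.zeros((68, 2))

def cache_landmarks(image_paths, out_dir):
    """Save one .npy landmark file per image so training can just load them."""
    os.makedirs(out_dir, exist_ok=True)
    for path in image_paths:
        pts = detect_landmarks(path)
        name = os.path.splitext(os.path.basename(path))[0] + ".npy"
        np.save(os.path.join(out_dir, name), pts)

cache_landmarks(["img_0001.jpg"], "landmarks_cache")
```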
Ned09 updated 6 months ago