-
Could you please explain in more detail how to prepare the data for training the model from scratch?
I want to use VGGFace2 for the first step. Do I need to generate landmarks for every image and save t…
Ned09 updated
6 months ago
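A common answer to this kind of question is to precompute the landmarks once per image and cache them on disk, so training epochs only pay the detection cost the first time. A minimal sketch of that pattern — the `detect_landmarks` stub and the JSON cache format are hypothetical; a real implementation would plug in an actual detector such as dlib's 68-point shape predictor:

```python
import json
from pathlib import Path

def detect_landmarks(image_path):
    """Hypothetical stub standing in for a real detector (e.g.
    dlib's 68-point shape predictor). Returns [x, y] points."""
    return [[0.0, 0.0] for _ in range(68)]

def landmarks_for(image_path, cache_dir):
    """Compute landmarks once per image and cache them as JSON,
    keyed by the image's file stem (for brevity of the sketch)."""
    cache_dir = Path(cache_dir)
    cache_dir.mkdir(parents=True, exist_ok=True)
    cache_file = cache_dir / (Path(image_path).stem + ".json")
    if cache_file.exists():
        # Cache hit: skip detection entirely.
        return json.loads(cache_file.read_text())
    points = detect_landmarks(image_path)
    cache_file.write_text(json.dumps(points))
    return points
```

Whether the landmarks are stored per-image like this or all in one file is a design choice; per-image files make it cheap to add new images later without rewriting the whole cache.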
-
I opened this issue kishikawakatsumi/swift-power-assert#436 about some failing unit tests, and it turns out to be platform dependent. From that thread:
> iOS 17, Sonoma and Linux
>
> ```json
> …
-
I've tried to read some of the code in this repo. `MMD_SA.js` seems to be responsible for sending the VMC data, and the data is read from the VRM model. So I guess the whole pipeline of this app is:
1. G…
-
Requested to load SDXL
Loading 1 new model
loading in lowvram mode 803.7325925827026
100%|████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:26
-
I want to get landmarks that include pupil points. Can this project be trained to produce landmarks that meet my needs if I have data with additional landmarks? What should I do?
-
Hi,
Great work. I was wondering how to compare the base_footprint trajectory with the camera_link trajectory, so I can benchmark my SLAM trajectory output against the trajectory the robot actually followed.
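For a rough benchmark, once the two trajectories are timestamp-associated and expressed in the same frame (the static base_footprint→camera_link transform from TF has to be applied first), the usual metric is the Absolute Trajectory Error; tools like the `evo` package compute this, including the SE(3) alignment step. A minimal sketch of just the error computation, assuming the poses are already paired and aligned:

```python
import math

def ate_rmse(reference, estimate):
    """Absolute Trajectory Error: RMSE of the per-pose translational
    distance between two timestamp-aligned lists of (x, y, z) points.
    Assumes association and frame alignment were done beforehand."""
    assert len(reference) == len(estimate), "trajectories must be paired"
    squared = [
        (rx - ex) ** 2 + (ry - ey) ** 2 + (rz - ez) ** 2
        for (rx, ry, rz), (ex, ey, ez) in zip(reference, estimate)
    ]
    return math.sqrt(sum(squared) / len(squared))
```

For example, a single pose offset by (3, 4, 0) gives an ATE of 5.0, and identical trajectories give 0.0.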
-
Is it necessary to have landmarks when training on new data?
-
Hello fellas, I read the paper and am trying to figure out why I have to use OpenFace for landmark detection on my data. I cannot find any landmark-detection component in the DINet architecture.
…
-
Currently there are four different strategies for handling problems with the landmark file:
- File not found -> end with an error
- Landmark not found -> warn and ignore the landmark
- No landmarks in fi…
-
The first 17 points of the dlib 68 landmarks are the jaw.
The code in `lsfm/data/__init__.py`,
`LANDMARK_MASK[:18] = False`
may need to be changed to
`LANDMARK_MASK[:17] = False`
I'm not sure.
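Whether `[:18]` is intentional depends on whether index 17 (the first eyebrow point in the dlib scheme) should also be masked, but the slice semantics themselves are easy to check: `[:n]` covers indices 0 through n-1, so masking exactly the 17 jaw points means `[:17]`. A quick self-checking sketch using a plain list in place of the repo's array:

```python
# The dlib 68-point scheme assigns the jaw to indices 0-16
# (17 points). A slice [:n] covers indices 0..n-1, so masking
# exactly the jaw means assigning False to 17 entries:
LANDMARK_MASK = [True] * 68
LANDMARK_MASK[:17] = [False] * 17   # jaw: indices 0..16

assert sum(1 for m in LANDMARK_MASK if not m) == 17  # 17 points masked
assert LANDMARK_MASK[16] is False   # last jaw point: masked
assert LANDMARK_MASK[17] is True    # first eyebrow point: kept
```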