-
Thank you for your work. I have a question: when I used ORB-SLAM3 before, I could call GetTrackedMapPoints():
```
vector<MapPoint*> System::GetTrackedMapPoints()
{
    unique_lock<mutex> lock(mMutexState);
    return mTrackedMapPoints;
}
```
-
Hi guys,
I'm watching the tutorial http://www.menpo.org/menpofit/pdm.html on PDMs, and the only way to provide points to the PDM seems to be from .pts files with mio.import_landmark_files(path_t…
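To make the request concrete, this is roughly what I would like to be able to do instead of going through .pts files (a rough sketch; I'm assuming menpo.shape.PointCloud is the right in-memory container and that the PDM training step from the tutorial can consume a list of them):
```
import numpy as np
from menpo.shape import PointCloud

# Build training shapes directly from (n_points, 2) NumPy arrays
# instead of importing them with mio.import_landmark_files.
raw_shapes = [np.random.rand(68, 2) * 100 for _ in range(10)]  # stand-in data
training_shapes = [PointCloud(points) for points in raw_shapes]

# training_shapes could then be passed to the same PDM construction step
# that the tutorial feeds with the imported landmark groups.
```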
-
Hi,
It would be really helpful if one could use a priori chosen landmark points for the Nystroem approximation and other kernel methods. For example, in:
[kernel_approximation.py](https://github.co…
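Until something like that is supported, a workaround is to build the Nystroem feature map by hand from the chosen landmarks. Here is a sketch (my own helper, not part of scikit-learn's API):
```
import numpy as np
from sklearn.metrics.pairwise import pairwise_kernels

def nystroem_features(X, landmarks, kernel="rbf", gamma=None):
    """Nystroem feature map phi(x) = k(x, Z) @ K_ZZ^(-1/2) for chosen landmarks Z."""
    K_nm = pairwise_kernels(X, landmarks, metric=kernel, gamma=gamma)
    K_mm = pairwise_kernels(landmarks, metric=kernel, gamma=gamma)
    # Symmetric inverse square root of K_mm via its eigendecomposition.
    w, V = np.linalg.eigh(K_mm)
    w = np.maximum(w, 1e-12)  # guard against tiny negative eigenvalues
    K_mm_inv_sqrt = (V / np.sqrt(w)) @ V.T
    return K_nm @ K_mm_inv_sqrt
```
For example, `nystroem_features(X_train, Z)` with `Z` being the hand-picked landmark rows gives the approximate features that Nystroem would otherwise compute from randomly sampled components.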
-
Hi,
Why do you use the following points for image alignment when doing preprocessing for training with the ArcFace loss?
```
arcface_src = np.array([
    [38.2946, 51.6963],
    [73.5318, 51.5014],
    [56.0252, 71.7366],
    [41.5493, 92.3655],
    [70.7299, 92.2041]], dtype=np.float32)
```
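For background on why I'm asking: my understanding is that these are reference positions in the 112x112 training crop, and each face's 5 detected landmarks get mapped onto them with a similarity transform, roughly like this (my own sketch of the idea, not the repo's code):
```
import cv2
import numpy as np
from skimage.transform import SimilarityTransform

def align_face(img, landmarks_5x2, template, size=112):
    """Warp img so its 5 detected landmarks land on the reference template points."""
    tform = SimilarityTransform()
    tform.estimate(np.asarray(landmarks_5x2, dtype=np.float32),
                   np.asarray(template, dtype=np.float32))
    M = tform.params[:2, :]  # 2x3 matrix for cv2.warpAffine
    return cv2.warpAffine(img, M, (size, size))
```
e.g. `align_face(image, detected_landmarks, arcface_src)`.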
-
You have mentioned that you used the W300 dataset for training. But that dataset contains only 68 landmark points per face and does not provide head pose data. How did you solve this issue?
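For what it's worth, one common way to get an approximate head pose out of the 68 landmarks alone (I don't know whether this is what you did) is to solve a PnP problem against a generic 3D face model:
```
import cv2
import numpy as np

# Generic 3D reference points (arbitrary model units) for 6 of the 68 landmarks.
MODEL_3D = np.array([
    [0.0, 0.0, 0.0],           # nose tip                 (landmark 30)
    [0.0, -330.0, -65.0],      # chin                     (landmark 8)
    [-225.0, 170.0, -135.0],   # left eye, outer corner   (landmark 36)
    [225.0, 170.0, -135.0],    # right eye, outer corner  (landmark 45)
    [-150.0, -150.0, -125.0],  # left mouth corner        (landmark 48)
    [150.0, -150.0, -125.0],   # right mouth corner       (landmark 54)
], dtype=np.float64)

def head_pose_from_68(landmarks_68, img_w, img_h):
    """Estimate rotation/translation of a generic head model from 68 2D landmarks."""
    image_pts = np.asarray(landmarks_68, dtype=np.float64)[[30, 8, 36, 45, 48, 54]]
    focal = img_w  # rough pinhole approximation
    camera = np.array([[focal, 0, img_w / 2],
                       [0, focal, img_h / 2],
                       [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(MODEL_3D, image_pts, camera, None,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    return rvec, tvec
```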
-
This tool is awesome, especially for the elastic distortion function. Is it possible to add operations on given keypoints/landmarks, which means the inputs and outputs are images and corresponding poi…
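To make the request concrete, this is the pattern I have in mind: whatever geometric transform is sampled for the image is also applied to the point coordinates. A minimal sketch with a plain rotation (for elastic distortion the points would instead be moved by the sampled displacement field):
```
import cv2
import numpy as np

def rotate_image_and_keypoints(img, keypoints_xy, angle_deg):
    """Apply one shared affine transform to an image and its (N, 2) keypoints."""
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)  # 2x3 affine
    out_img = cv2.warpAffine(img, M, (w, h))
    pts = np.hstack([keypoints_xy, np.ones((len(keypoints_xy), 1))])  # homogeneous
    out_pts = pts @ M.T  # transform the keypoints with the same matrix
    return out_img, out_pts
```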
-
Building the [landmark_na rune](https://github.com/hotg-ai/test-runes/tree/master/image/landmark_na) on a MacBook Air with `rune build Runefile.yml` took `2h 21m 15s`. The build process spent most of …
-
```
arcface_dst = np.array(
    [[38.2946, 51.6963], [73.5318, 51.5014], [56.0252, 71.7366],
     [41.5493, 92.3655], [70.7299, 92.2041]],
    dtype=np.float32)
```
is provided in face_align.py for 5 ke…
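In case it is useful for others: one simple way to reuse the same reference points at a crop size other than 112x112 is to rescale them proportionally before estimating the alignment transform (my own sketch, not necessarily what face_align.py does):
```
import numpy as np

ARCFACE_DST_112 = np.array(
    [[38.2946, 51.6963], [73.5318, 51.5014], [56.0252, 71.7366],
     [41.5493, 92.3655], [70.7299, 92.2041]], dtype=np.float32)

def template_for_size(size):
    """Rescale the 112x112 reference landmarks to a size x size crop."""
    return ARCFACE_DST_112 * (size / 112.0)
```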
-
Hi,
I am planning to apply parts of the Deep Fashion project to one of my own and had some questions regarding your demo code.
When running the [landmark detection demo code](https://github.com…
-
We are currently defining it like this:
```
trial_definition_specification = dict(trial_definition=dict(name='Landmarker calibration',
                                         …
```