cedriclmenard / irislandmarks.pytorch

PyTorch implementation of Google's Mediapipe Iris Landmark model. The original code uses TFLite and their mediapipe workflow, which wouldn't work well with my codebase.
Apache License 2.0

Training code #1

Open shreyasashwath opened 3 years ago

shreyasashwath commented 3 years ago

Hi cedriclmenard, could you please provide the training code for this repo? That would be very helpful. Thanks.

cedriclmenard commented 3 years ago

Hi @shreyasashwath,

You can take a look at the class definition in irislandmarks.py, especially at the predict and the forward methods. Training can be done as with any PyTorch model if you feed it the right data. For a simpler implementation of training, I suggest looking into PyTorch-Lightning.

The goal of this repo is to reproduce the same results using the same weights given by the original authors (Google's Mediapipe), which is why no training is implemented here.
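
As a rough illustration only, here is a minimal sketch of what a training loop could look like. The dataset class, file names and loss below are hypothetical placeholders, and I'm assuming forward returns the eye-contour and iris landmark tensors; check irislandmarks.py for the exact output shapes.

```python
import torch
from torch.utils.data import DataLoader
from irislandmarks import IrisLandmarks

# Hypothetical dataset yielding (64x64 eye crop, eye-contour target, iris target);
# you have to supply your own annotated data and Dataset class here.
from my_eye_dataset import EyeLandmarkDataset  # placeholder, not part of this repo

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = IrisLandmarks().to(device)
# Optionally start from the converted weights (file name and format assumed):
# model.load_state_dict(torch.load("irislandmarks.pth", map_location=device))
model.train()

loader = DataLoader(EyeLandmarkDataset("path/to/data"), batch_size=32, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.MSELoss()  # plain coordinate regression as an example

for epoch in range(10):
    for images, eye_target, iris_target in loader:
        images = images.to(device)
        eye_target, iris_target = eye_target.to(device), iris_target.to(device)

        # forward() is assumed to return the eye-contour and iris landmark tensors;
        # see irislandmarks.py for the exact outputs.
        eye_pred, iris_pred = model(images)
        loss = criterion(eye_pred, eye_target) + criterion(iris_pred, iris_target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

With PyTorch-Lightning you would wrap the same forward/loss code in a LightningModule instead of writing the loop by hand.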

sainisanjay commented 2 years ago

> Hi @shreyasashwath,
>
> You can take a look at the class definition in irislandmarks.py, especially at the predict and the forward methods. Training can be done as with any PyTorch model if you feed it the right data. For a simpler implementation of training, I suggest looking into PyTorch-Lightning.
>
> The goal of this repo is to reproduce the same results using the same weights given by the original authors (Google's Mediapipe), which is why no training is implemented here.

@cedriclmenard Yes, I agree that training can be done as with any PyTorch model given the right data. Can we train this model with a different set of keypoints, e.g. if I want to extend it to face landmarks (98 keypoints)? And where should batch normalization layers be used?

cedriclmenard commented 2 years ago

For face landmarks, I suggest looking at the FaceMesh repo. You can definitely use this model, but you'll have to modify it according to the output size you need.
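
As a hypothetical sketch of that kind of modification (the attribute name below is a guess; look up the module that actually holds the final layer in irislandmarks.py), you would swap the output head for one that emits 98 × 3 values and retrain it:

```python
import torch.nn as nn
from irislandmarks import IrisLandmarks

model = IrisLandmarks()

# Hypothetical: "split_eye" is a guess for the head that predicts the eye contour;
# check irislandmarks.py for the real module name and its final layer.
num_keypoints = 98
old_head = model.split_eye[-1]
new_head = nn.Conv2d(
    in_channels=old_head.in_channels,
    out_channels=num_keypoints * 3,   # 98 landmarks, (x, y, z) each
    kernel_size=old_head.kernel_size,
    stride=old_head.stride,
    padding=old_head.padding,
)
model.split_eye[-1] = new_head

# The new head is randomly initialized, so it has to be trained on data annotated
# with your 98-point scheme; the backbone can be kept frozen or fine-tuned.
```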

Batch norm layers, if there were any in the original training implementation by Google's Mediapipe, have been folded into the convolution weights, so you can work out from the architecture where batch norm layers could go.
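
For reference, folding a batch norm into the preceding convolution is just a reparameterization of the conv's weight and bias; here is a minimal sketch of that arithmetic (standard PyTorch, not code from this repo):

```python
import torch
import torch.nn as nn

def fold_bn_into_conv(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Return a single Conv2d equivalent (at inference time) to conv followed by bn."""
    fused = nn.Conv2d(
        conv.in_channels, conv.out_channels, conv.kernel_size,
        stride=conv.stride, padding=conv.padding, dilation=conv.dilation,
        groups=conv.groups, bias=True,
    )
    with torch.no_grad():
        # BN computes y = gamma * (x - mean) / sqrt(var + eps) + beta
        scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
        fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
        conv_bias = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
        fused.bias.copy_((conv_bias - bn.running_mean) * scale + bn.bias)
    return fused
```

So wherever the converted checkpoint has a convolution with a bias, the original graph may have had conv → batch norm; if you retrain from scratch you could reinsert BatchNorm2d after those convolutions.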