Hello, I've been working on the LaPa part of your landmark detection model. You shared a pretrained model for LaPa, but the preprocessing section is empty, even though it is essential for training. I understand that preprocessing crops the faces out of the dataset images and resizes them to 256x256. However, unlike the other datasets, LaPa does not provide face bounding box coordinates. How did you train the model without cropping the images? Or did you select the face box coordinates manually?
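For reference, this is the kind of preprocessing I had in mind: a minimal sketch that derives a face box from the LaPa landmark annotation and then crops and resizes to 256x256. This is only my guess at the missing step, not your actual pipeline; the margin value, the file paths, and the assumption that the landmark file starts with a point count are all placeholders on my side.

```python
import cv2
import numpy as np

def crop_face_from_landmarks(image_path, landmark_path, out_size=256, margin=0.2):
    """Crop a square face region derived from landmarks and resize it.

    Note: this is my own guess at the preprocessing, not code from the repo.
    """
    img = cv2.imread(image_path)
    # Assuming the landmark file has a point count on the first line,
    # followed by one "x y" pair per line.
    pts = np.loadtxt(landmark_path, skiprows=1, dtype=np.float32)

    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)

    # Expand the tight landmark box by a margin and make it square.
    cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
    side = max(x_max - x_min, y_max - y_min) * (1 + margin)

    x1 = int(round(cx - side / 2))
    y1 = int(round(cy - side / 2))
    x2 = int(round(cx + side / 2))
    y2 = int(round(cy + side / 2))

    # Clamp to image bounds before cropping.
    h, w = img.shape[:2]
    x1, y1 = max(x1, 0), max(y1, 0)
    x2, y2 = min(x2, w), min(y2, h)

    crop = img[y1:y2, x1:x2]
    return cv2.resize(crop, (out_size, out_size))
```

Is a landmark-derived box like this what you used, or did you run a separate face detector (or pick the boxes by hand)?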