Open sljlp opened 4 years ago
Hi, I changed the backbone from VGG16 to a lighter network, MobileNet-V2, set the input image size to 112x112, and succeeded in basic training. The basic training method follows [https://github.com/guoqiangqi/PFLD]. When I use LK, I scale the image up to 256x256 again (and scale the point coordinates to match) for higher accuracy, while the detection size at the basic stage stays 112x112 (because the detector was trained with 112x112 inputs). But when I refine the model with the LK module, the result becomes worse than the last basic result. I drew the predicted points and their optical flows (back points, fback points and next points) and found nothing wrong, so I don't know where the bug is. Could anyone give me some advice on how to deal with this? Is the MobileNet-V2 backbone simply not suitable for this?
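For context, the rescaling step I mean is just a resize plus multiplying the landmark coordinates by the same scale factors. A minimal sketch (the function name and array layout are illustrative, assuming landmarks are stored as (x, y) pixel pairs on the original image):

```python
import cv2
import numpy as np

def rescale_sample(image, landmarks, out_size=256):
    """Resize an image and rescale its landmark coordinates to match.

    image     : HxWx3 array (here 112x112 from the basic stage)
    landmarks : (N, 2) array of (x, y) pixel coordinates on `image`
    """
    h, w = image.shape[:2]
    resized = cv2.resize(image, (out_size, out_size))
    # Scale x by out_size/w and y by out_size/h so points stay aligned.
    scale = np.array([out_size / w, out_size / h], dtype=np.float32)
    return resized, landmarks * scale
```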
Hi, I met a similar problem: the loss of the model with SBR can't converge, and the result is worse than the basic result. Did you solve this problem? @likethesky @Celebio @colesbury @pdollar Could you give me some advice? The test dataset is my own dataset, which has no labels, and the image size is 1920x1280. Here is the training process:
The basic model is CPM, and it took 3 days for the model with SBR to reach epoch 17... is that normal? @likethesky @Celebio @colesbury @pdollar
Hello, have you addressed the problem?