Closed: Heroism502 closed this issue 5 years ago
1. You can follow https://github.com/D-X-Y/landmark-detection/tree/master/SAN#evaluation-on-the-single-image to evaluate SAN on CPU. We did not test its speed on CPU or on edge devices. Besides, we only use ResNet-152 to extract style features during training; it is not used during inference.
2. You can replace it with ResNet-50 or ResNet-34; the performance should be similar (a sketch follows after this message).
3. Yes, we use the original 68-point annotations. We did not try to train SAN on the union of AFLW and 300-W.
Best regards
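A minimal sketch of what swapping the style-feature extractor could look like, using torchvision's pretrained ResNets; the helper name and layer cut are illustrative assumptions, not SAN's actual training code:

```python
# Hypothetical sketch: a lighter ResNet as the style-feature extractor.
# SAN's real code differs; this only illustrates the backbone swap.
import torch
import torchvision.models as models

def build_style_extractor(arch: str = "resnet50") -> torch.nn.Module:
    backbone = getattr(models, arch)(pretrained=True)
    # Drop the average pool and classifier; keep convolutional feature maps.
    extractor = torch.nn.Sequential(*list(backbone.children())[:-2])
    extractor.eval()
    return extractor

extractor = build_style_extractor("resnet50")
with torch.no_grad():
    feats = extractor(torch.randn(1, 3, 224, 224))
print(feats.shape)  # torch.Size([1, 2048, 7, 7])
```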
Hello: I followed "Evaluate on 300-W or AFLW" to run the evaluation, but the following problem came up during testing and I have not been able to resolve it. Could you help me with it? Thanks.
@xieliang11111 What problem do you mean? On my side I can only see a symbol; I cannot see the problem you described.
When I run that evaluation procedure on 300-W, it should be loading your checkpoint_49, but it crashes inside torch's serialization.py with `UnpicklingError: NEWOBJ class argument isn't a type object`. The error seems to occur during deserialization while loading the checkpoint.
What is your (Python, PyTorch) environment? If it differs from the environment in which I saved the checkpoint, this kind of error can occur.
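For reference, a minimal sketch of loading such a checkpoint defensively; the file name and the `state_dict` key are assumptions here, and if the pickle protocol itself is incompatible, only matching the original save environment (the PyTorch 0.4 era) reliably avoids the UnpicklingError:

```python
# Sketch: load a checkpoint saved under a different (Python, PyTorch) setup.
import torch

print(torch.__version__)  # compare with the version the checkpoint was saved under

# map_location='cpu' sidesteps GPU-related deserialization issues.
snapshot = torch.load("checkpoint_49.pth.tar", map_location="cpu")  # exact file name may differ

# Re-saving only the weights gives a file that is much less sensitive
# to version differences than a pickled full training snapshot.
torch.save(snapshot["state_dict"], "checkpoint_49_weights.pth")  # assumes a 'state_dict' entry
```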
Python is 3.6, but my PyTorch version is quite old; I will try upgrading it, thanks. Do you have WeChat? Could we add each other there? I am evaluating the landmark-detection methods of the last few years and still have a few points about SAN that I have not figured out and would like to ask you about. Thanks.
Hello: I have two questions and would appreciate your advice, thanks! (1) I want to add my own dataset, mainly case-management scenes; the training data is captured with case-management tablets and varies a lot. If I fine-tune with your four style datasets, how should I obtain those four styles for my own data: did you generate them with PS, or is there generation code? (2) I have done preliminary training with your code and adjusted the relevant parameters, but there is still a gap to your published results. Are there any special tuning tricks you could share? Thanks.
(1) I used PS rather than code to generate the images with different styles. (2) I can reproduce my results with the SAN code, and the full training log has been provided, in which all hyper-parameters are printed. What are your reproduced results? What is the difference?
Hello: (1) Since you generated the data with PS, if we add new data, can those four styles be found directly in PS? (2) I used your code directly and compared the parameters with a diff tool; they are basically identical. The only difference is in the ReLU part: your original prints ReLU(inplace) while mine prints ReLU(inplace=True), which should only affect memory allocation, not the test results. Yet my final results on 300-W are only common=6.08, challenge=8.89, fullset=6.63, still far from the common=3.32, challenge=6.58, fullset=3.96 in your log. What other factors could be at play?
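As an aside, the ReLU(inplace) vs. ReLU(inplace=True) difference is most likely just how different PyTorch versions print the module repr; a quick sketch to confirm that inplace does not change the computed values:

```python
# Sketch: inplace ReLU returns the same values as the out-of-place one;
# it only overwrites the input tensor's memory instead of allocating new memory.
import torch
import torch.nn as nn

x = torch.randn(4, 8)
assert torch.equal(nn.ReLU()(x.clone()), nn.ReLU(inplace=True)(x.clone()))
print(nn.ReLU(inplace=True))  # the printed repr varies across PyTorch versions
```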
(1) Sorry about that, the original PS template is lost. You can try other methods to generate images with different styles. (2) It could be caused by (a) wrong GAN training or (b) wrong detection-model training. You can compare your training logs with ours. In addition, the original code was run with PyTorch 0.4; if you are using a recent PyTorch version such as 1.3.1, many parts of the code need to be modified accordingly.
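For a sense of what those modifications look like, a short sketch of the mechanical changes 0.4-era code typically needs under PyTorch 1.x; these are common migration patterns, not the specific edits SAN requires:

```python
# Illustrative PyTorch 0.4 -> 1.x migration patterns (not SAN-specific).
import torch

model = torch.nn.Linear(10, 2)
x = torch.randn(3, 10)

# Old: loss.data[0]                  New: loss.item()
loss = model(x).sum()
value = loss.item()

# Old: Variable(x, volatile=True)    New: torch.no_grad()
with torch.no_grad():
    preds = model(x)

# Old: tensor.cuda(gpu_id)           New: tensor.to(device)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = x.to(device)
```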
Hello, a few more questions: (1) If there is no PS template, does it make sense to train a ResNet directly on my own data and cluster the features? Have you tried clustering directly on the training data, or run a related ablation? (2) Previously I trained the style-generation model with the code in your cycle-gan folder; I do not know how far my own trained model is from yours. Since it is only a 4-way classification, I think the impact should be small, so I did not retrain CycleGAN. As for the detection model, I have just started training it, still using your data and detection boxes. (3) Have you tested on other datasets? How robust is it on profile faces and on the landmarks around the facial features?
(1) In my opinion, if your generated styles are diverse, it makes sense. Yes, I tried clustering on the original images, and it still improves results a little (a sketch of the idea follows below). (3) I only tried 300-W and AFLW; I did not test its robustness on profile faces. I have to say that if you are pursuing state-of-the-art accuracy, you should try some very recent methods: this work was done two years ago, and its accuracy is inferior to the current SOTA methods.
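A minimal sketch of that clustering idea, assuming a pretrained torchvision ResNet as the feature extractor and k-means into four pseudo-styles; the feature layer, cluster count, and directory name are illustrative choices:

```python
# Sketch: cluster your own images into pseudo-styles (illustrative, not SAN code).
import glob
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.cluster import KMeans

# Pretrained ResNet up to (and including) the global average pool.
extractor = torch.nn.Sequential(
    *list(models.resnet50(pretrained=True).children())[:-1]
).eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(paths):
    feats = []
    with torch.no_grad():
        for p in paths:
            x = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
            feats.append(extractor(x).flatten(1))  # (1, 2048) per image
    return torch.cat(feats).numpy()

image_paths = sorted(glob.glob("my_faces/*.jpg"))  # hypothetical image directory
style_labels = KMeans(n_clusters=4, random_state=0).fit_predict(embed(image_paths))
```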
SAN questions: 1. Have you measured the latency of the SAN model (ResNet-152) on CPU or on mobile devices? 2. Can the residual network be replaced with a shallower network? Would the performance drop? Have you tried anything along those lines? 3. Does the augmented 300-W data still use the original 68-point annotations? Have AFLW and 300-W ever been trained together?
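On question 1, a rough CPU number is easy to measure yourself; a sketch timing a torchvision ResNet-152 forward pass, which approximates only the backbone and not the full SAN pipeline:

```python
# Sketch: rough CPU latency of a ResNet-152 forward pass (backbone only).
import time
import torch
import torchvision.models as models

model = models.resnet152(pretrained=False).eval()
x = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    for _ in range(3):  # warm-up runs
        model(x)
    runs = 10
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    print(f"avg forward: {(time.perf_counter() - start) / runs * 1000:.1f} ms")
```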