-
I have tested SpherefaceNet-04 and SpherefaceNet-06 with and without BatchNorm.
Training on WebFace,
SpherefaceNet-04 WITHOUT BN can easily achieve ~98.00% (the paper reports 98.2%),
SpherefaceNe…
-
First of all, thanks for sharing the code.
I have trained a model with 99.27% accuracy following the instructions. However, when I use it in C++ programs, I find there is a slight difference between C+…
-
https://github.com/huangyangyu/NoiseFace/blob/0b434a2c0eb664ca2af36c3bc619629fb27dcf3f/layer/noise_tolerant_fr_layer.cpp#L152
Hi, can you explain the noise_data layer?
-
Environment:
(1) RetinaFace (R50) for face detection and alignment
(2) ArcFace (R50) for face feature extraction
(3) cosine similarity
I tested LFW face pairs using ArcFace; the same face pairs ha…
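The pipeline above ends with a cosine-similarity comparison between the two extracted feature vectors. As a minimal sketch (not code from the ArcFace repo, just the standard formula), the score can be computed like this:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face feature vectors.

    Returns a value in [-1, 1]; higher means the two faces are
    more likely the same identity.
    """
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Vectors pointing the same way score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

In practice a threshold on this score (tuned on a validation set) decides whether a pair is accepted as the same person.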
-
Hi guys,
I tested A-Softmax loss on the CASIA dataset and it does well, reaching ~99% accuracy on LFW without careful tuning. But when I switch to the MS-Celeb-1M dataset for training(…
-
Hi, are you training on the ms-celeb-1m dataset now?
How long did it take you to extract the images from the ms-celeb-1m.tsv file?
-
## 0. Paper
### Title
UniformFace: Learning Deep Equidistributed Representation for Face
Recognition
### Link
http://openaccess.thecvf.com/content_CVPR_2019/papers/Duan_UniformFace_Learning_Deep_Equ…
-
As far as I know, the FN is actually Batch Normalization applied after the final feature layer. I ran such experiments about a year ago, because we usually do mean-subtraction on LFW's training se…
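The mean-subtraction mentioned above is a common post-processing step on extracted features before computing verification scores. A minimal sketch (my own illustration, not code from the repo): subtract the mean feature of the evaluation set, then L2-normalize each vector.

```python
import numpy as np

def mean_subtract_features(feats):
    """Center a batch of face features on their mean, then L2-normalize.

    feats: (N, D) array of features extracted by the network.
    Returns an (N, D) array of unit-length, mean-centered features.
    """
    feats = np.asarray(feats, dtype=np.float64)
    centered = feats - feats.mean(axis=0, keepdims=True)
    norms = np.linalg.norm(centered, axis=1, keepdims=True)
    # Guard against a zero vector (a feature equal to the mean).
    return centered / np.clip(norms, 1e-12, None)
```

Batch Normalization after the feature layer has a related effect at training time, which is why the two are sometimes conflated.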
-
## Why
The Machine Learning reading group exists so that engineers can raise the bar on what they can "solve with technology" by keeping up with the latest techniques and papers.
prev. #6
## What
If there is something you want to talk about, leave a comment here!
Even if you have only just found something interesting, at least declare that you will talk about it!