Has anyone validated the FLOPs of r100? I just got 12.1 GFLOPs.
@nttstar would it be possible to upload baselines in Dropbox? Thanks.
Hi! My training speed is about 530 samples/sec. However, the speed recorded in the log is about 700 samples/sec. How can I speed up my training?
Thank you for the interesting competition!
(1) All participants must use this dataset for training without any modification (e.g. re-alignment or changing image size are both prohibited).
I'd like to confirm the above rule. Does it mean that ALL data augmentation in training, including random crop or contrast change, is prohibited? How about normalization?
@yu4u The alignment method must be kept unchanged, but color jittering is allowed. What do you mean by normalization?
@nttstar Thank you for immediate reply!
But color jittering is allowed
Oh, I see. Sorry, by normalization I meant standardization in terms of mean and variance. This should of course be allowed, since color jittering is allowed (and it seems to be done in the reference code).
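For concreteness, a minimal sketch of what I mean by standardization (a hypothetical helper, not the reference implementation):

import numpy as np

def standardize(img):
    # Per-image standardization: zero mean, unit variance.
    # Fixed dataset-level mean/std statistics would be used the same way.
    img = img.astype(np.float32)
    return (img - img.mean()) / (img.std() + 1e-8)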
Is changing the input image size prohibited?
@PistonY Training and testing images must not be modified (no re-alignment or anything similar). At training time, data augmentation methods such as random flipping and color jittering are allowed. At test time, only the flip test is allowed.
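For concreteness, a minimal sketch of the kind of training-time augmentation this allows (a hypothetical helper; the jitter range is only an example):

import random
import numpy as np

def augment(img):
    # Allowed augmentations: random horizontal flip and mild color jitter.
    # No resizing or re-alignment of the aligned 112x112 crop.
    if random.random() < 0.5:
        img = img[:, ::-1, :]  # horizontal flip
    img = img.astype(np.float32)
    img *= np.random.uniform(0.9, 1.1, size=(1, 1, 3))  # per-channel jitter
    return np.clip(img, 0, 255).astype(np.uint8)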
@nttstar
Assuming we do distillation or domain adaptation, are the following allowed?
1) Train a teacher network on the training set, treat the test data as unlabeled, and use the teacher to distill 'train+test' into a student network (a minimal sketch of the distillation loss is included below for reference).
2) Treat the test data as an unlabeled target domain and the training set as the labeled source domain, and learn features that minimize the "feature difference" between the source and target domains.
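For reference, a minimal sketch of the distillation loss I have in mind for (1) (standard soft-target KD with temperature T; all names here are hypothetical):

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between softened teacher and student distributions.
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction='batchmean') * (T * T)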
@hengck23 The rules state: (1) All participants must use this dataset for training without any modification (e.g. re-alignment or changing the image size are both prohibited). (2) No external dataset is allowed. So I don't think it's allowed.
@hengck23 Test data is not allowed to be used in any training process.
@nttstar In this repo, FLOPs are counted with a base of 1000, not 1024, so a model reported as 1.03 GFLOPs (base 1000) is not really 1.03 G in base 1024; 1.03×10^9 FLOPs is only about 0.96 G when divided by 2^30.
@nttstar Hi, how can I get a validation dataset? Should I split it from the training dataset or use the LFW dataset?
@mariolew Not a big problem, we will validate all top submissions after the deadline.
@cbstyle LFW, CFP and AgeDB are included in the training dataset package.
@nttstar Why does it multiply by 2 in the FLOPs calculation?
@MrGF It is the same convention as OpenCV DNN: one multiply-accumulate is counted as two FLOPs.
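Not the official counter, just a back-of-the-envelope illustration of why the factor of 2 appears (one multiply-accumulate = 2 FLOPs), for a plain convolution:

def conv_flops(c_in, c_out, k, h_out, w_out):
    # MACs: one multiply-add per kernel element, per input channel,
    # per output channel, per output position.
    macs = c_in * k * k * c_out * h_out * w_out
    return 2 * macs  # each MAC = 1 multiply + 1 add = 2 FLOPs

# e.g. a 3x3 conv, 64 -> 64 channels, on a 56x56 output map:
print(conv_flops(64, 64, 3, 56, 56) / 1e9, 'GFLOPs')  # ~0.23 GFLOPs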
Hi, I have successfully submitted one result for "deepglint-light" to the server, but I still cannot see any result on the webpage. How long will it take for the score to appear?
We have created a discussion group for the competition and welcome everyone to join us.
Hi! I used my model to extract features and submitted them to the test server, but the accuracy is 0. My model's performance on LFW is normal, and when I submit the features to the official trillion-pairs website the result is also correct, so I cannot figure out the possible reason. My file is in binary format and the data is 1,862,120 × 256 in float32.
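For reference, this is roughly how I write the file (a minimal sketch with placeholder features; 'image_features.bin' is just an example name):

import numpy as np

feats = np.random.randn(1862120, 256).astype(np.float32)  # placeholder for real features
feats /= np.linalg.norm(feats, axis=1, keepdims=True)      # L2-normalize each row (commonly expected for cosine scoring)
feats.tofile('image_features.bin')  # raw little-endian float32, row-major, no header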
@jingyi1997 I'm not quite sure about your case, but many others get correct scores with 256- or 512-dimensional embeddings.
@nttstar Can I train models with the IQIYI_DIV_TRAIN for the video test?
@cham-3 no
The rules say "Modification (e.g. re-alignment or resize) on test images is not allowed." May I ask whether it is permitted to use deconvolution or upsampling to increase the resolution of the features or even the input images? Upsampling seems to be an essential part of high-resolution networks. Or are there restrictions on how these operations can be used?
@luzai Do not resize the images. You can use upsampling or deconvolution, but make sure to add the FLOPs of the deconvolution operator, which is not covered by the current FLOPs counter.
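As a rough sketch (not part of the official counter), the deconvolution cost can be estimated by hand under the same 1 MAC = 2 FLOPs convention; note that counters differ on whether they use the input or output spatial size, so confirm the expected convention:

def deconv_flops(c_in, c_out, k, h_in, w_in):
    # One common convention: every input position is multiplied by the full
    # k x k kernel for every (input channel, output channel) pair.
    macs = c_in * c_out * k * k * h_in * w_in
    return 2 * macs  # 1 multiply-add = 2 FLOPs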
For PyTorch users, you can use flops-counter.
However, note that it outputs MACs, not FLOPs, so your model needs to be under 500 MMac.
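A minimal usage sketch, assuming the linked flops-counter is the ptflops package and a 112x112 RGB input (the MobileNetV2 here is only a stand-in for your own network):

import torchvision.models as models
from ptflops import get_model_complexity_info

model = models.mobilenet_v2()  # stand-in for your network
macs, params = get_model_complexity_info(model, (3, 112, 112),
                                         as_strings=True,
                                         print_per_layer_stat=False)
print(macs, params)  # reported as MACs, not FLOPs: roughly FLOPs / 2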
@nttstar where is video_filelist.txt?
@xizi in testdata-image.zip
@nttstar ok.
We have created a WeChat group for Chinese discussion. Click for the QR code: https://github.com/deepinsight/insightface/blob/master/resources/lfr19_wechat1.jpg
Any idea where we can set up a chat group for English discussion?
@nttstar Maybe whatsapp?
Slack group for English: https://join.slack.com/t/insightface/shared_invite/enQtNjU0NDk2MjYyMTMzLTIzNDEwNmIxMjU5OGYzYzFhMjlkNjlhMTBkNWFiNjU4MTVhNTgzYjQ5ZTZiMGM3MzUyNzQ3OTBhZTg3MzM5M2I (in #lfr2019 channel)
@grib0ed0v Thanks, but it's a bit complicated for me to download the WhatsApp app.
My uploads terminate after some time with "Upload error. Please refresh and try again." Any idea why this is happening?
@sarathknv Hi, sorry for the inconvenience. Have you tried again? Where are you (country and city)?
@sarathknv You can try to logout and re-login before submitting the results.
Is there a recommended way to measure FLOPs in TensorFlow? I saw that your sample code is for MXNet. Is the following equivalent?
opts = tf.profiler.ProfileOptionBuilder.float_operation()
flops = tf.profiler.profile(graph, run_meta=run_meta, cmd='op', options=opts)
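For context, a self-contained TF 1.x sketch of the above (the conv layer is only a stand-in for an actual model; the batch size is fixed at 1 so the count is not scaled by a dynamic batch dimension):

import tensorflow as tf  # assumes the TF 1.x graph-mode API

with tf.Graph().as_default() as graph:
    x = tf.placeholder(tf.float32, shape=(1, 112, 112, 3))
    y = tf.layers.conv2d(x, 64, 3, padding='same')  # stand-in for the real model

    run_meta = tf.RunMetadata()
    opts = tf.profiler.ProfileOptionBuilder.float_operation()
    flops = tf.profiler.profile(graph, run_meta=run_meta, cmd='op', options=opts)
    print('Total FLOPs:', flops.total_float_ops)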
@nttstar Hi, I have a question. How do you make sure the participants strictly follow the rules, such as no extra data and no modification of the data? You've mentioned above that you'll make sure the leading results are reproducible, so how will you do this? Should participants upload their models and training code?
@gehaocool We will make sure about this and let you all know about it. No worries.
@nttstar Hi, is it allowed to use less training data than the full dataset? e.g., use only 50% of the training data. Thank you!
Hi @nttstar, I have several questions.
How can we obtain the raw 112x112 face images extracted by RetinaFace? I'm using TensorFlow for training and validation, so I need access to the raw images.
As @alphanerdj asked, is there a standard FLOPs calculation function for TensorFlow (similar to the current MXNet-based function)?
Thank you.
@vitoralbiero It is allowed. But you can not use external datasets/models to determine the subset selection.
@caocuong0306 Use https://github.com/deepinsight/insightface/blob/master/recognition/data/rec2image.py to unpack images from the .rec file.
@nttstar, can you provide direct access to the raw images for validation purposes?
Thank you.
@caocuong0306 You may refer to the 1:1 verification code to see how it is loaded; then you will know how to unpack it.
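A minimal sketch of what that loading looks like with MXNet's recordio API (the file names are assumed to match the packed training set):

import mxnet as mx

# Indexed RecordIO reader for the packed training images.
imgrec = mx.recordio.MXIndexedRecordIO('train.idx', 'train.rec', 'r')

s = imgrec.read_idx(1)                   # in insightface-style packs, index 0 is usually a meta header
header, img = mx.recordio.unpack_img(s)  # img is a decoded HxWxC BGR uint8 array
print(header.label, img.shape)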
@nttstar Is flops-counter officially approved for measuring model complexity, given that it reports MACs (roughly half of the FLOPs value)?
@paulcx Please refer to our FLOPs counter.
We have updated the ground truth of the Glint dataset, and all incoming submissions will be scored with the updated accuracy. We will also update the top existing submissions soon.
Challenge Information:
HomePage: please read the training and testing rules carefully.
How To Start:
Please check https://github.com/deepinsight/insightface/tree/master/iccv19-challenge for details.
Any questions can be left here.