When I run unzip, the command always reports: "zip I/O error: No such file or directory. zip error: File not found or no read permission (iQIYI-VID-FACE.zip)".
@olojuwin check sample_config.py
Thanks a lot. How can I get cfp_fp.bin or agedb_30.bin? I remember there used to be a link, but now I can't find it.
They are in the training dataset.
It's very embarrassing, ^_^
Hi~ I submitted two feature files, as shown in the figure below. The ROC curve demonstrates that the r50 256 rf model should achieve TPR@FAR=1e-4 of 61%, but the result shown in the table is the same as for the last model. May I ask whether there is any rule for naming the filename and description, or anything else I missed?
@luzai Hi, we will look into this issue soon, and we will also update a new iqiyi ground-truth.
Thank you very much for your quick response and the kind reminder about updating the ground-truth!
I am quite sorry: I just found a bug in my code. It is therefore reasonable that the performance stayed the same as the last model, since the new method was not actually applied. After fixing it and resubmitting, there is some improvement in performance now.
In a word, I guess the numbers in the table are correct and the ROC curve may have some problems.
Hello @nttstar. On the workshop webpage you mention the release of baseline models and corresponding training logs. Is there any target date for this?
Cheers
For the test image feature submission, the file size is about 3.8 GB. How long does it take to upload to the test server from overseas? It seems to be very slow.
@nttstar Hello, how can I get the FLOPs of my models?
Where can I get network y2?
@MinZhaoLin It will output FLOPs when you start training. See common/flops_counter.py for details.
@ZHAIXINGZHAIYUE See recognition/sample_config.py
@hengck23 How long did it take you, and where are you located?
@nttstar
> @MinZhaoLin It will output FLOPs when you start training. See common/flops_counter.py for details.

Does it just count conv and fc?
@ZHAIXINGZHAIYUE Is there any other operator you want to count?
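For readers wondering what "counting conv and fc" means in practice, here is a rough sketch of the usual multiply-add convention (my own illustration, not the repository's exact code; the authoritative logic lives in common/flops_counter.py):

def conv_flops(out_h, out_w, in_c, out_c, kernel, groups=1):
    # one multiply-add per kernel element, per output channel, per output position
    return out_h * out_w * out_c * (kernel * kernel * in_c // groups)

def fc_flops(in_features, out_features):
    # one multiply-add per weight in the fully connected layer
    return in_features * out_features

# e.g. a 3x3 conv from 3 to 64 channels with a 112x112 output map:
print(conv_flops(112, 112, 3, 64, 3))  # 21676032 multiply-adds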
@nttstar
> @hengck23 How long did it take you, and where are you located?

I am in Singapore. I cannot upload the test image features (feature file size is 3.8 GB) because of connection timeouts. For the test video, the feature file is about 500 MB and it takes about 1 hour for me.
@hengck23 My friend from Singapore said he had no problem submitting. I guess your network provider has some connection issue with AliCloud.
I am starting a discussion Slack team at: https://join.slack.com/t/private-5ay1415/shared_invite/enQtNjI4NDA2MjUzNzkzLThlZDIxODBlMjZiMmIzZGVhMDRiNjExZGY2YzE5ZTEyOTczZDUxZjc0OWUyNTQzNDZkMDYxNjNkYWFkODY5OTc
Join the channel "#iccv2019-low-face-rec". Please feel free to join. Objectives of the Slack team:
- share some baseline code and results
- discuss possible strategies, papers, etc. to achieve the best results

As a start, I am open-sourcing my PyTorch baseline code and trained model below. Join the Slack for more information!
Why can't I find the iQIYI-VID-FACE.z01, iQIYI-VID-FACE.z02, and iQIYI-VID-FACE.zip files on the iQIYI AI competition platform? It seems they have changed the name of the data. How can I get the video test data?
@LeonLIU08 I can find it at http://challenge.ai.iqiyi.com/data-cluster
Thanks for your quick response. The website redirects after registration are quite confusing.
@nttstar I got an upload error after reaching 100% when I tried to upload a 1.8 GB feature file from Russia. Any suggestions?
@nttstar The submission contains features only, so how do you verify the model complexity of a submission?
I suggest all teams submit an ONNX model to the submission server; then the organizers can run an automatic script to compute the FLOPs and output them on the leaderboard. It could be based on the original count_flops.py of mxnet.
@mariolew @hengck23 We will make sure the leading submissions are reproducible.
@grib0ed0v Try logging out and logging in again.
https://ibug.doc.ic.ac.uk/resources/lightweight-face-recognition-challenge-workshop/
"The upper bound of model size is 20MB."
I suppose this is the number of parameters and not storage in megabytes? (1 float = 4 bytes)
@hengck23 20 MB is the fp32 storage size, which corresponds to 5M parameters (5M parameters × 4 bytes each = 20 MB).
@nttstar The feature submission size is limited to 5 GB, which I think is not enough for video features.
@mariolew The video feature file should be smaller.
@mariolew You have to combine the image_features into a single video_feature, e.g.
video_feature = average(image_features), or
video_feature = median(image_features), or
video_feature = max(image_features), etc. A sketch of these options follows below.
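To make those options concrete, here is a minimal NumPy sketch (my own illustration; the function name and the 'average' default are assumptions, not part of the challenge code):

import numpy as np

def aggregate_video_feature(image_features, mode='average'):
    # image_features: array of shape (num_frames, feature_dim),
    # one embedding per face image extracted from the video
    if mode == 'average':
        return image_features.mean(axis=0)
    if mode == 'median':
        return np.median(image_features, axis=0)
    if mode == 'max':
        return image_features.max(axis=0)
    raise ValueError('unknown mode: %s' % mode)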
@nttstar Will you be sharing the y2 and r100fc baseline pre-trained models with the public? Thanks.
> @hengck23 My friend from Singapore said he had no problem submitting. I guess your network provider has some connection issue with AliCloud.

I have now set up an AWS EC2 cloud machine to do the upload to the submission server, but I am only getting an upload speed of about 50 to 100 kbit/s for EC2 in Singapore and Hong Kong. This is very slow. (The network limit of my EC2 instance is 5 Mbit/s. I use Google Drive to transfer the file from my PC to the cloud PC; it takes less than a minute to upload and download the 3.8 GB feature bin file that way.)
Can anyone report their upload speed and location? I would like to set up a cloud PC there.
@hengck23 How do you generate the submission bin file? Do you use this repo's gen_image_feature.py? I don't know an easier way to generate it, since I use Python 3 and Gluon... @nttstar Or could you please provide a common format for the bin file?
Yes, I use code from gen_image_feature.py. You can just use this copied code:

import struct
import numpy as np

def write_binary(binary_file, data):
    # data: float32 array of shape (num_of_face_images, feature_dim)
    rows, cols = data.shape
    with open(binary_file, 'wb') as f:
        # header: rows, cols, bytes per row, and the constant tag 5 used by the format
        f.write(struct.pack('4i', rows, cols, cols * 4, 5))
        f.write(data.astype(np.float32).tobytes())

def read_binary(binary_file):
    with open(binary_file, 'rb') as f:
        buf = f.read()
    rows, cols, d0, d1 = struct.unpack_from('4i', buf)
    assert d0 == cols * 4
    assert d1 == 5
    offset = struct.calcsize('4i')
    data = np.frombuffer(buf, dtype=np.float32, offset=offset)
    return data.reshape(rows, cols)
Here data is a NumPy array with data.shape == (num_of_face_images, feature_dim).
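A quick round-trip sanity check using the functions above (the shape (100, 256) is just an assumed example, not the required dimensions):

features = np.random.rand(100, 256).astype(np.float32)
write_binary('submit.bin', features)
assert np.allclose(read_binary('submit.bin'), features)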
And FYI, mxnet insightface is Python 3 compatible after small changes (e.g. xrange to range, and adjusting the Python 3 pickle load call to read Python 2 created pickle files; see the snippet below).
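For the pickle part, something like this usually works (a sketch; 'train.pkl' is a placeholder for whichever Python 2 pickle you need to load):

import pickle

with open('train.pkl', 'rb') as f:  # placeholder filename
    data = pickle.load(f, encoding='latin1')  # lets Python 3 read Python 2 pickles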
@hengck23 Thanks anyway.
@hengck23 Thank you.
Is L2 normalization of embedding vectors required in this challenge? L2 normalization is performed in ArcFace, but there are other methods in which the embedding vectors are not L2-normalized.
@believeitmyway You need not submit L2-normalized features. I think the task uses Euclidean distance, not cosine distance, to get similarity, so L2 normalization is not necessary.
I am sharing a Gluon/PyTorch (needs a little change) way to get the submission bin file for the image task (video version to be updated later):
https://gist.github.com/59d312ff89ddcb891a812fc001e15d62
@hengck23 If convenient, could you please test whether it produces exactly the same file as gen_image_feature.py? It should be very easy to refactor to PyTorch style.
@believeitmyway @PistonY The evaluation metric is cosine.
So it only supports margin-based losses...
No, all embedding-based approaches should be fine (Softmax, Triplet Loss, ArcFace, etc.).
@nttstar @PistonY Thanks. So distance-based losses like contrastive loss and triplet loss (without L2 normalization) are not usable in this challenge, right? If we use L2 normalization, L2 distance and cosine are equivalent, but if we don't use L2 normalization, they are different.
If the evaluation metric were simple L2 distance, both distance-based and angle-based losses would be usable.
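The equivalence mentioned above is easy to verify numerically: for unit vectors, ||a - b||^2 = 2 - 2*cos(a, b). A minimal check (the dimension 512 is an arbitrary choice):

import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(512)
b = rng.standard_normal(512)
a /= np.linalg.norm(a)  # L2-normalize both vectors
b /= np.linalg.norm(b)
# squared L2 distance equals 2 - 2 * cosine similarity on unit vectors
assert np.isclose(((a - b) ** 2).sum(), 2 - 2 * a.dot(b))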
@believeitmyway We should always compare L2-normed embeddings for all the algorithms you have mentioned: contrastive loss, triplet loss, ArcFace. So there is no problem with the cosine evaluation metric.
@nttstar Recent methods perform L2 normalization, as you say, but I do not think it is essential going forward. When targeting a metric space different from the conventional one, L2 normalization may not be essential. But since the evaluation metric has already been decided, I will follow it. Thank you.
@believeitmyway Yes, the L2 distance metric is more general. We will use cosine this time, as we already have many submissions. Thank you for your advice!
@believeitmyway You just need to set the embedding layer's bias=False and skip L2 normalization; it should be fine with cosine distance. A sketch follows below.
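For instance, in Gluon that could look like this (a hypothetical head; the 256-dim output size is an assumption):

from mxnet.gluon import nn

# embedding layer without bias; its raw output can be compared with
# cosine distance directly, since cosine normalizes scale implicitly
embedding = nn.Dense(256, use_bias=False)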
Challenge Information:
HomePage; please read carefully about the training and testing rules.
How To Start:
Please check https://github.com/deepinsight/insightface/tree/master/iccv19-challenge for details.
Any questions can be left here.