Cloud-CV / EvalAI

:cloud: :rocket: :bar_chart: :chart_with_upwards_trend: Evaluating state of the art in AI
https://eval.ai

Remove Stderr file for the successful submissions #1175

Closed deshraj closed 7 years ago

deshraj commented 7 years ago

Observed behavior

Currently, a stderr_file is created even for successful submissions. This leads to a blank file that serves no purpose and adds a small overhead to processing successful submissions.

Expected behavior

When a submission is successful, the stderr_file field should be set to None instead of a blank file being created.

the-dagger commented 7 years ago

@deshraj Can I work on this? Thanks!

the-dagger commented 7 years ago

@RishabhJain2018 @deshraj Is there any zip which I can use to mock a successful and failed submission?

taranjeet commented 7 years ago

I guess when you run the seed command, a challenge will be loaded. This challenge contains an evaluation script that checks whether a user annotation file (yours) contains the integers 1-10. You can use this to mock a successful submission. Make sure that your RabbitMQ server is running.

To mock a failed submission, we presently don't have any evaluation script that throws an error. What you can do is edit main.py of the evaluation_script in the examples folder and upload it again to create a new challenge. To manually raise an error, you can write 1/0 or some other erroneous statement that will throw an error.
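A minimal always-failing script along these lines could look like the sketch below. This assumes the evaluation script exposes an `evaluate()` entry point, as the scripts in the examples folder do; the argument names here are illustrative, not the exact signature:

```python
# Hypothetical always-failing evaluation script, for testing stderr handling.
# The 1/0 raises ZeroDivisionError, so the worker should record a stderr file.
def evaluate(test_annotation_file, user_submission_file, phase_codename):
    result = 1 / 0  # deliberate error
    return result
```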

We will update this thread if we plan to add an evaluation script that throws an error when run, so that the stderr output can be checked.

@the-dagger

guyandtheworld commented 7 years ago

@deshraj Is this solved?

RishabhJain2018 commented 7 years ago

No, this isn't solved yet.

guyandtheworld commented 7 years ago

Okay, I'll come back to this after I finish issue #864.

Ram81 commented 7 years ago

@deshraj @RishabhJain2018 Can I take this one?

deshraj commented 7 years ago

@Ram81: sure. Assigning it to you. :)

Ram81 commented 7 years ago

@deshraj Sorry for being late, I was busy with #1139. I'll start on this right away.

guyandtheworld commented 7 years ago

@Ram81 Are you working on this? :smile:

Ram81 commented 7 years ago

@isht3 yeah I will be doing this 😃

Ram81 commented 7 years ago

@deshraj Whenever a submission is made, if the final status of the submission is "submitted/accepted" (i.e. serializer.is_valid() is true in challenge_submission in jobs/views.py), then I set the stderr file to None, right? So I have to make the change in the save function of the model, right?

deshraj commented 7 years ago

I am not sure I am following you correctly here. Just to clarify the deliverables: if the submission state is finished, then no stderr file should be generated, since the submission executed successfully and there is no error.

Please let me know if I am not clear here.

Ram81 commented 7 years ago

@deshraj understood

Ram81 commented 7 years ago

@deshraj When I made a submission, it didn't generate any stderr file in the submission folder (see the attached screenshot).

deshraj commented 7 years ago

The screenshot that you shared above looks fine to me, but I am talking about the finished state. When a submission reaches the status finished, no stderr file should be generated. Basically, you have to add a check here (https://github.com/Cloud-CV/EvalAI/blob/master/scripts/workers/submission_worker.py#L401) that tests whether the submission status is finished, and create the stderr file based on that.

Please let me know if this is not clear.
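The check described above could be sketched roughly as follows. The function name, the status string, and the file-handling details are illustrative assumptions, not the actual submission_worker.py code:

```python
# Hypothetical sketch of the worker-side check: only write a stderr file
# when the submission did not finish successfully.
def maybe_write_stderr(submission_status, stderr_content, stderr_path):
    if submission_status == "finished":
        # Successful run: skip creating the (blank) stderr file entirely,
        # so the model's stderr_file field can stay None.
        return None
    with open(stderr_path, "w") as f:
        f.write(stderr_content)
    return stderr_path
```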

Ram81 commented 7 years ago

@deshraj Can you share a successful submission file for testing the changes I made?

deshraj commented 7 years ago

Sorry @Ram81, I missed this somehow. Can you please tell me the name of the challenge for which you want to do the submission?

Ram81 commented 7 years ago

@deshraj https://github.com/Cloud-CV/EvalAI-Examples/tree/master/vqa_challenge vqa challenge

deshraj commented 7 years ago

@Ram81 Please don't use the VQA challenge for debugging purposes. Please use the default challenge that is created by the python manage.py seed command, and submit any file as the submission file.

deshraj commented 7 years ago

You could also use https://github.com/Cloud-CV/EvalAI/blob/master/examples/example1/test_annotation.txt as a submission file.
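If that example file is unavailable, a dummy submission file in the format the seeded challenge reportedly expects (the integers 1-10, as mentioned earlier in this thread) could be generated with a small helper like this. The helper name and the one-integer-per-line layout are assumptions for illustration:

```python
# Hypothetical helper: write a dummy submission file containing the
# integers 1-10, one per line, for testing against the seeded challenge.
def write_dummy_submission(path):
    with open(path, "w") as f:
        f.write("\n".join(str(i) for i in range(1, 11)))
    return path
```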

guyandtheworld commented 7 years ago

@Ram81 Are you using the Jasmine challenge to debug?

deshraj commented 7 years ago

@isht3 FYI: the name of the challenge can be anything, since it is generated randomly. Anyway, the name of the challenge doesn't matter. The submission file that I shared will work with any of the challenges generated by the python manage.py seed script.

Ram81 commented 7 years ago

@deshraj I will use the test_annotation file, or try the default challenge.

guyandtheworld commented 7 years ago

@deshraj The problem is that the seed command is not creating a default challenge for me. All the migrations run, but the challenge doesn't come up.

Ram81 commented 7 years ago

@deshraj The same happens for me: when I run the seed command, no default challenge is created.

guyandtheworld commented 7 years ago

@Ram81 Are you getting a migration error?

deshraj commented 7 years ago

Interesting. Let me try on my local machine and then I will get back to you.

guyandtheworld commented 7 years ago

@Ram81 What version of Python is installed on your Virtualenv?

Ram81 commented 7 years ago

@isht3 Python 2.7, and I didn't get any migration error.

guyandtheworld commented 7 years ago

@Ram81 What do you get when you run the seed command?

Ram81 commented 7 years ago

@isht3 Let's talk on Gitter and keep this thread focused on the issue itself.