Open rizar opened 5 years ago
You are correct -- thanks for pointing this out!
For generating the official released CLEVR data, we distributed question generation across multiple machines (using these flags) and merged the results. This means that the heuristics were only applied within the batch processed by each worker, and not enforced across the entire dataset. I suspect that this is the reason for the differing distributions that you are seeing.
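For concreteness, here is a minimal sketch (with made-up per-worker counts, not real CLEVR data) of how each worker's batch can individually satisfy the cap discussed below (most popular answer at most 5x the 6th most popular) while the merged counts violate it:

```python
from collections import Counter

def max_over_sixth(counts):
    # Ratio of the most popular answer count to the 6th most popular;
    # the heuristic rejects questions that would push this ratio above 5.
    top = [c for _, c in Counter(counts).most_common()]
    return top[0] / top[5] if len(top) >= 6 else float("inf")

# Hypothetical answer counts for one counting-question family on two workers.
worker_a = {'0': 500, '1': 500, '2': 500, '3': 100, '4': 100, '5': 100}
worker_b = {'0': 500, '1': 500, '2': 500, '6': 100, '7': 100, '8': 100}

print(max_over_sixth(worker_a))  # 5.0 -- within the per-worker cap
print(max_over_sixth(worker_b))  # 5.0 -- within the per-worker cap
print(max_over_sixth(Counter(worker_a) + Counter(worker_b)))  # 10.0 -- merged data violates it
```

The "6th most popular" answer can differ between workers, so per-worker checks give no global guarantee after merging.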
Ok, thanks for confirming!
Hi Justin @jcjohnson. I am working on the CLEVR dataset, and I am not sure how to determine the correct coordinates of a given object: sometimes the pixel coordinates (pixel_coords) don't make sense, and sometimes the 3D coordinates (3d_coords) don't make sense.
Hi @yonatansverdlov, did you find out which one to use? I am also confused...
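In case it helps later readers, here is a small sketch for inspecting both fields side by side; the pixel_coords and 3d_coords field names and the file layout are from the released CLEVR v1.0 scene files as far as I remember, so adjust them if your copy differs:

```python
import json

# Load the released training scenes (adjust the path to your copy).
with open("CLEVR_v1.0/scenes/CLEVR_train_scenes.json") as f:
    scenes = json.load(f)["scenes"]

# Print both coordinate fields for every object in the first scene:
# pixel_coords should be image-space (x, y, depth), 3d_coords world-space (x, y, z).
for obj in scenes[0]["objects"]:
    print(obj["shape"], obj["pixel_coords"], obj["3d_coords"])
```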
There must be another discrepancy between generate_questions.py and the original script that was used to generate CLEVR. I have noticed that in CLEVR the answer distribution for counting questions is very skewed. For example, for one of the question families I have the following answer counts: {'1': 2658, '0': 2555, '2': 1911, '5': 52, '3': 579, '6': 17, '4': 136, '7': 2, '9': 1}
Here the 6th most popular answer is "6", with a count of 17. This could not have happened if the current version of generate_questions.py had been used, since it has a heuristic that forces every answer to occur at most 5 times as often as the 6th most popular answer: https://github.com/facebookresearch/clevr-dataset-gen/blob/master/question_generation/generate_questions.py#L322
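For anyone who wants to check their own copy of the data, here is a sketch that flags families violating the heuristic; it assumes the released question entries carry question_family_index and answer fields (as in CLEVR v1.0 train/val), so adjust names and paths if yours differ:

```python
import json
from collections import Counter

# Load the released training questions (adjust the path to your copy).
with open("CLEVR_v1.0/questions/CLEVR_train_questions.json") as f:
    questions = json.load(f)["questions"]

# Tally answers per question family.
by_family = {}
for q in questions:
    by_family.setdefault(q["question_family_index"], Counter())[q["answer"]] += 1

# Report families where the most popular answer occurs more than
# 5 times as often as the 6th most popular one.
for fam, counts in sorted(by_family.items()):
    top = counts.most_common()
    if len(top) >= 6 and top[0][1] > 5 * top[5][1]:
        print(fam, dict(counts))
```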
I have created this issue mainly for the record, since it's unclear how the problem could be addressed at this point. But people who are using the code should be made aware of it.