[Open] Wu-Xiang123 opened this issue 1 year ago
Hi there, thanks for your interest in our challenge. Sorry that it's difficult to push a submission through. We're building out a new system for the challenge this year so please surface any issues you run into and we'll try to resolve them asap.
I looked at your submission, and it looks like the issue is that the tutorial is storing the sequence_id in the context_name field here: https://github.com/waymo-research/waymo-open-dataset/blob/1eabddad78858a74b18152f61b89868a6c4ecc59/src/waymo_open_dataset/protos/camera_segmentation_metrics.proto#L25. If you modify your submission so that you store the context_name here instead: https://github.com/waymo-research/waymo-open-dataset/blob/1eabddad78858a74b18152f61b89868a6c4ecc59/src/waymo_open_dataset/dataset.proto#L141, it should hopefully resolve the issue if your submission contains the set of frames listed in https://github.com/waymo-research/waymo-open-dataset/blob/1eabddad78858a74b18152f61b89868a6c4ecc59/tutorial/2d_pvps_validation_frames.txt. We'll fix the tutorial asap. If it helps, you can also package everything into a single binproto, our system should be able to handle this now.
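For reference, here is a minimal sketch of the fix described above. It assumes the standard generated module name for camera_segmentation_metrics.proto; apart from context_name, the field and helper names are illustrative and may not match the tutorial exactly.

```python
# Minimal sketch (not the tutorial's exact code): store the run segment's
# context name in CameraSegmentationFrame.context_name instead of the camera
# label's sequence_id. Field names other than context_name are illustrative.
from waymo_open_dataset.protos import camera_segmentation_metrics_pb2 as metrics_pb2


def make_segmentation_frame(panoptic_label_proto, context_name,
                            frame_timestamp_micros, camera_name):
  """Wraps one predicted panoptic label into a CameraSegmentationFrame."""
  return metrics_pb2.CameraSegmentationFrame(
      camera_segmentation_label=panoptic_label_proto,
      # The old tutorial put the sequence_id here; use the context name from
      # dataset.proto (e.g. frame.context.name) instead.
      context_name=context_name,
      frame_timestamp_micros=frame_timestamp_micros,
      camera_name=camera_name,
  )
```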
Please note that there is a known issue where there is a divergence between the deeplab and open dataset definition of the classes, so the metrics are currently higher than they should be. We're working with the deeplab team on this, and should have a fix within 24 hours. However, you should be able to test getting your submission through before this.
Thanks for the note on the website, we will update this to be more clear.
Great! Changing sequence_id to context_name in the CameraSegmentationFrame object works for me.
I really appreciate your high-quality response. Thanks a lot!
BTW, will the updated scoring module be available for us to evaluate offline?
Yes, the updated metric is a change that needs to be made to the deeplab2 repo, and will be pushed to their repo once it's done. If you'd like to run it locally before the fix is pushed, it requires a few changes (see the sketch after this list):
- Remove the unknown and sdc classes here: https://github.com/google-research/deeplab2/blob/68b70e8352271c75b1cb4a349cd7f61c145e93f4/data/waymo_constants.py#L70
- Set num_classes=len(waymo_meta) here: https://github.com/google-research/deeplab2/blob/68b70e8352271c75b1cb4a349cd7f61c145e93f4/data/dataset.py#L354
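The sketch below only illustrates the spirit of those two edits; the actual metadata lists in deeplab2 are structured differently, so treat the names here as placeholders.

```python
# Toy illustration only (the real metadata in deeplab2's waymo_constants.py and
# dataset.py is structured differently): drop the classes that diverge from the
# Open Dataset definition, then derive num_classes from what remains.
waymo_meta = [
    {'name': 'unknown'},
    {'name': 'sdc'},
    {'name': 'car'},
    {'name': 'pedestrian'},
]

# 1) Remove the unknown and sdc entries (the waymo_constants.py change).
waymo_meta = [m for m in waymo_meta if m['name'] not in ('unknown', 'sdc')]

# 2) Compute the class count from the remaining metadata (the dataset.py change).
num_classes = len(waymo_meta)
print(num_classes)  # 2 in this toy example
```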
Hi there, doubling back here, we've made the updates to the deeplab2 repo, and the server has been updated to reflect this change. Overall, I don't actually think there are significant changes to the metrics, but submissions should be stable moving forward.
Got it, thanks a lot!
@alexzzhu Hello~ Sorry to bother you again. Recently I hit another submission error against the 2d pvps validation set; the website shows "Unknown error, please file a new issue at https://github.com/waymo-research/waymo-open-dataset/issues/new and specify your user_id=wuxiang1871217@gmail.com, submission_id=1682951329500990.". Any help? Thanks!
Hi, please feel free to bring up any issues you may have, and apologies for the vague error message. It looks like our backend failed trying to decode the CameraSegmentationSubmission protos from your submission. I can dig into this a bit more, but is there a chance that there was some kind of encoding failure or change on your end?
It looks like I'm not able to untar your submission locally; perhaps there was an issue when compressing the files?
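One way to catch this kind of problem locally before uploading is to round-trip each file through the proto parser. A sketch, assuming CameraSegmentationSubmission is available from the usual generated protos package (adjust the import for your installed version):

```python
# Sanity-check sketch: round-trip every .binproto through the proto parser so
# encoding or truncation problems surface locally before uploading.
import glob

from waymo_open_dataset.protos import camera_segmentation_submission_pb2 as submission_pb2

for path in sorted(glob.glob('my_submission_dir/*.binproto')):
  submission = submission_pb2.CameraSegmentationSubmission()
  with open(path, 'rb') as f:
    # Raises google.protobuf.message.DecodeError if the file is corrupt.
    submission.ParseFromString(f.read())
  print(f'{path}: parsed OK, {submission.ByteSize()} bytes')
```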
Thanks a lot for your patience! I checked again and found there indeed was something wrong with my submission file. I generated and uploaded it again, and now it works fine and I successfully got a score. Thanks again!
@alexzzhu Hello~ Recently I ran into a problem submitting a test result for 2d panoptic segmentation; it shows "Not all groundtruth frames were provided in the predictions." again. Generating a validation submission the same way succeeds. Any help? Thanks! The submission info is "timestamp=1684201592089604&challenge=VIDEO_PANOPTIC_SEGMENTATION_2D&emailId=91ca6f7d-ac98".
I had a look at your submission. One initial issue is that two run segments are missing from your submission:
14918167237855418464_1420_000_1440_000
10980133015080705026_780_000_800_000
You may need to sync your local repo as we updated 2d_pvps_test_frames.txt last month.
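A quick local check for this kind of gap (a sketch, not an official tool; it assumes each line of 2d_pvps_test_frames.txt begins with the context name followed by comma-separated frame fields, so adjust the parsing if your copy differs):

```python
# Quick completeness check: compare the run segments (context names) covered by
# your predictions against the ones required by tutorial/2d_pvps_test_frames.txt.
required_contexts = set()
with open('2d_pvps_test_frames.txt') as f:
  for line in f:
    line = line.strip()
    if line:
      required_contexts.add(line.split(',')[0])

# Fill this with the context_name values stored in your CameraSegmentationFrames.
predicted_contexts = set()

missing = sorted(required_contexts - predicted_contexts)
print('Missing run segments:', missing)
```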
For the 2d panoptic segmentation task (2023 challenge), I submitted the whole validation result following tutorial_2d_pvps.ipynb, but the Waymo website tells me "Not all groundtruth frames were provided in the predictions".
My submission is a zip file, which includes 20 .binproto files generated in the format shown in tutorial_2d_pvps.ipynb.
Specifically, each .binproto file includes a context related to a .parquet dataset; each context includes one sequence containing the timestamps specified by 2d_pvps_validation_frames.txt. In total, there are 1930 frames, and each frame includes 5 results for the different camera views.
However, after submission, the Waymo website tells me there are not enough frames: "Not all groundtruth frames were provided in the predictions".
Then I checked again and, surprisingly, found that the Waymo website shows strange guidance on this page: https://waymo.com/open/challenges/2023/2d-video-panoptic-segmentation/. It tells us to submit a file that includes results for all TEST data (but we are submitting for the VALIDATION set).
More info: user_id=wuxiang1871217@gmail.com, submission_id=1681704875850301. Note: I used a trick to get my submission_id; it equals the number shown as a timestamp in the submission page URL.
Any help? It's not easy to make a submission, and I have been confused for weeks! It seems the dataset codebase and the Waymo challenge website are still being updated; is Waymo not ready for the 2023 challenge?