Thanks for making this great work publicly available.
I'm about to try out your code, but I have a few questions before running it.
Is the vignetting mask calculated for each image, or do you just use the one mask under the assets/ folder?
Should this vignetting mask also be used for the EgoExo4D dataset?
Is the dataloader for the EgoExo4D dataset similar to those for ADT and AEA? In other words, can I simply download the EgoExo4D dataset, run similar preprocessing, and then run this code to get the results?
We only used one set of vignetting masks for all experiments; it was calculated as the average of many images captured by Aria glasses. In doing so, we assumed that all Aria glasses more or less share the same vignetting effect, though it's possible that individual Aria glasses have different vignetting effects. If you need a more accurate vignetting mask for your device, you can use it to capture images of a fully-lit white wall, or take the average of many natural images.
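To illustrate the averaging approach described above, here is a minimal sketch (not code from this repo) of estimating a vignetting mask by averaging many frames and normalizing by the brightest value; the function name and the synthetic demo frames are my own illustrative assumptions:

```python
import numpy as np

def estimate_vignetting_mask(images):
    """Estimate a per-pixel vignetting mask by averaging many frames.

    Assumes scene brightness variations average out across frames,
    leaving the lens's radial falloff. Normalizes so the brightest
    region of the averaged image maps to 1.0.
    """
    mean = np.mean(np.stack(images).astype(np.float64), axis=0)
    return mean / mean.max()

# Demo on synthetic frames with radial falloff plus per-frame exposure and noise.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
r2 = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / ((h / 2) ** 2 + (w / 2) ** 2)
true_falloff = 1.0 - 0.4 * r2  # brighter at center, dimmer toward corners
rng = np.random.default_rng(0)
frames = [true_falloff * rng.uniform(0.5, 1.0) + rng.normal(0, 0.01, (h, w))
          for _ in range(200)]
mask = estimate_vignetting_mask(frames)
```

With enough frames the recovered mask is close to 1.0 at the image center and falls off toward the corners, mirroring the true vignetting profile.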
Yes.
Yes, the pre-processing and data loading should be similar to those for AEA. Please see process_project_aria_3dgs.py and rectify_aria.py for details; they take the VRS file and the MPS SLAM results as input.
Hi @georgegu1997 !
Thanks!