facebookresearch / egolifter

This is the official repository for "EgoLifter: Open-world 3D Segmentation for Egocentric Perception" (ECCV 2024).
https://egolifter.github.io/
Apache License 2.0

Questions regarding vignetting mask and EgoExo4D #4

Closed: SunghwanHong closed this issue 1 month ago

SunghwanHong commented 1 month ago

Hi @georgegu1997 !

Thanks for making this great work publicly available. I'm about to try out your code, but I have a few questions before running it.

  1. Is the vignetting mask calculated for each image, or do you just use the single mask under the assets/ folder?
  2. Should this vignetting mask also be used for the EgoExo4D dataset?
  3. Is the dataloader for the EgoExo4D dataset similar to the ones for ADT and AEA? In other words, can I simply download the EgoExo4D dataset, run the same kind of preprocessing, and then run this code to get results?

Thanks!

georgegu1997 commented 1 month ago

Thanks for your interest in our work!

  1. We only used one set of vignetting masks for all experiments, computed as the average of many images captured by Aria glasses. This assumes that all Aria glasses share more or less the same vignetting effect; individual devices may still differ. If you need a more accurate vignetting mask for your device, you can use it to capture images of a fully-lit white wall, or take an average of many natural images it captures (see the sketch after this list).
  2. Yes.
  3. Yes, the preprocessing and dataloading should be similar to those of AEA. Please see process_project_aria_3dgs.py and rectify_aria.py for details. They take the VRS file and the MPS SLAM results as input.
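
For illustration, here is a minimal sketch of how such a mask could be estimated and applied; it is not the repository's implementation, and the function names, PNG directory layout, and max-normalization below are assumptions.

```python
# Minimal sketch (not the repository's implementation): estimate a vignetting
# mask by averaging many frames, then divide it out of a new image.
# The directory layout, PNG format, and max-normalization are assumptions.
import glob

import numpy as np
from PIL import Image


def estimate_vignetting_mask(image_dir: str) -> np.ndarray:
    """Average many frames (ideally of a uniformly lit white wall) into a per-pixel gain."""
    paths = sorted(glob.glob(f"{image_dir}/*.png"))
    acc = np.zeros_like(np.asarray(Image.open(paths[0]), dtype=np.float64))
    for p in paths:
        acc += np.asarray(Image.open(p), dtype=np.float64)
    mean = acc / len(paths)
    # Normalize so the brightest pixel has gain 1.0; darker corners get gains < 1.0.
    return mean / mean.max()


def undo_vignetting(image: np.ndarray, mask: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Divide out the vignetting gain and clip back to the 8-bit range."""
    corrected = image.astype(np.float64) / np.clip(mask, eps, None)
    return np.clip(corrected, 0, 255).astype(np.uint8)
```

In practice, you would load the provided mask from the assets/ folder rather than re-estimating it, unless your device's vignetting differs noticeably from the glasses used to produce that mask.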