Some users have already asked for much deeper versions of the mock catalogues, so I'm wondering whether the best route would be to generate the deepest possible mock catalogue and then create the subsurveys from there?
Even if we don't want to make the deepest possible version of the mock catalogue, it would probably be a good strategy to generate the full WAVES-Deep cut over the entire volume and then build the DEEP, WIDE, and DDF fields from that. It would make sense to scrape the hdf5 files once and then build the catalogues we need using pandas.
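A minimal sketch of what that "scrape once, cut many" pipeline could look like, assuming the hdf5 files expose flat per-galaxy datasets; the column names (`ra`, `dec`, `mag_Z`), field footprints, and magnitude limits below are all placeholders for illustration, not the real WAVES survey definitions:

```python
import glob

import h5py
import pandas as pd

COLUMNS = ["ra", "dec", "mag_Z"]  # hypothetical dataset names

def scrape_hdf5(pattern):
    """Read every mock hdf5 file once into a single parent DataFrame."""
    frames = []
    for path in sorted(glob.glob(pattern)):
        with h5py.File(path, "r") as f:
            frames.append(pd.DataFrame({c: f[c][:] for c in COLUMNS}))
    return pd.concat(frames, ignore_index=True)

# Illustrative subsurvey definitions: a magnitude limit plus an
# RA/Dec footprint. These numbers are placeholders only.
SUBSURVEYS = {
    "WIDE": {"mag_limit": 21.1, "ra": (150.0, 240.0), "dec": (-4.0, 4.0)},
    "DEEP": {"mag_limit": 21.25, "ra": (330.0, 360.0), "dec": (-35.0, -27.0)},
}

def build_subsurvey(parent, spec):
    """Select one field from the parent catalogue with boolean masks."""
    mask = (
        (parent["mag_Z"] < spec["mag_limit"])
        & parent["ra"].between(*spec["ra"])
        & parent["dec"].between(*spec["dec"])
    )
    return parent[mask]

parent = scrape_hdf5("mocks/*.hdf5")  # deepest cut over the entire volume
catalogues = {name: build_subsurvey(parent, spec)
              for name, spec in SUBSURVEYS.items()}
```

The appeal of this layout is that the expensive hdf5 read happens once, and each new field or deeper cut someone asks for is just another cheap boolean mask over the parent DataFrame.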
Thoughts would be appreciated.