FanglinBao / HADAR

This is an LWIR stereo-hyperspectral database for developing HADAR algorithms for thermal navigation. Based on this database, one can develop algorithms for TeX decomposition to generate TeX vision. One can also develop algorithms for object detection, semantic or scene segmentation, optical or scene flow, stereo depth estimation, etc., based on TeX vision instead of traditional RGB or thermal vision.
MIT License

About data content #11

Closed HollyYa closed 1 year ago

HollyYa commented 1 year ago

Hello, I am a student. After reading your paper, I have a few questions. First, the paper mentions that "The channels in the real-world scene correspond to the 5th ∼ 53rd channels of the synthetic scenes." What content is stored in the 1st–4th channels of the synthetic scenes? Second, regarding "For the real-world experimental scene, HADAR sensor is a pushbroom hyperspectral imager that can produce 256 spectral bands. The heat cubes have been interpolated into 49 channels to match the channels in synthetic scenes" — what is the spectral range of those 256 spectral bands?

FanglinBao commented 1 year ago

Hello HollyYa,

I've added a readme file to give a broad description of the database. Please check the new OneDrive link in the HADAR Database section. There you can find the channel information. The wavelengths for the 256 spectral bands of the raw experimental data are roughly within 8-14 um (not equally distributed). The exact wavelength values can be found in the TeX code package where I've provided a test example with data and codes.
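The resampling step described above (256 raw bands interpolated onto 49 channels) can be sketched as follows. This is a minimal illustration, not the authors' code: the raw band grid below is a uniform placeholder, whereas the real instrument grid is not equally spaced and its exact wavelengths ship with the TeX code package.

```python
# Hypothetical sketch: resample a 256-band raw spectrum onto the 49
# synthetic-scene wavenumbers (760:10:1240 cm^-1).

def linear_interp(x, xs, ys):
    """Piecewise-linear interpolation of (xs, ys) at x; xs must be ascending."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Placeholder raw grid: 256 wavenumbers spanning roughly 8-14 um
# (~714-1250 cm^-1). The real HADAR grid is NOT equally spaced.
raw_wn = [714.0 + k * (1250.0 - 714.0) / 255 for k in range(256)]
raw_spectrum = [1.0 for _ in raw_wn]           # dummy radiance values

target_wn = [760 + 10 * k for k in range(49)]  # 760:10:1240 cm^-1
resampled = [linear_interp(w, raw_wn, raw_spectrum) for w in target_wn]
assert len(resampled) == 49
```

In practice one would replace `raw_wn` with the measured wavelengths from the TeX code package and apply the interpolation per pixel of the heat cube.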

HollyYa commented 1 year ago

Hello FanglinBao, thank you for your answer to the previous question. I would also like to ask why the real dataset of Scene 11 does not differentiate between left and right views, e.g. `Scene11/GroundTruth/tMap/tMap_0002.mat`, whereas in the code the Scene 11 data are read per view, e.g. `Scene11/GroundTruth/tMap/tMap_L_0002.mat`.

May I ask if the L and R data for Scene 11 are the same and equal to the provided data?

FanglinBao commented 1 year ago

Yes, the real-world scene was collected with only one view. We duplicated it into L and R, for consistency with the other scenes and for ease of post-processing.
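The duplication step could be reproduced locally with a short script like the one below. This is a hypothetical sketch, not part of the released tooling; the directory layout and `tMap_*.mat` naming follow this thread, so adjust them to your local copy of the database.

```python
# Hypothetical helper: mirror a single-view ground truth into L/R copies
# so downstream code can treat all scenes uniformly.
import shutil
from pathlib import Path

def duplicate_views(tmap_dir):
    """Copy each single-view tMap_XXXX.mat into tMap_L_XXXX.mat / tMap_R_XXXX.mat."""
    tmap_dir = Path(tmap_dir)
    created = []
    # "[0-9]" keeps the glob from re-matching the L/R copies on later runs.
    for src in sorted(tmap_dir.glob("tMap_[0-9]*.mat")):
        frame = src.name[len("tMap_"):]          # e.g. "0002.mat"
        for view in ("L", "R"):
            dst = tmap_dir / f"tMap_{view}_{frame}"
            if not dst.exists():
                shutil.copyfile(src, dst)
                created.append(dst.name)
    return created

# Example: duplicate_views("Scene11/GroundTruth/tMap")
```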

HollyYa commented 1 year ago

Hello FanglinBao, regarding "matLib_FullDatabase has 30 materials but 49 bands, while matLib_SyntheticScenes has 28 materials but 54 bands" mentioned in the material notes, I would like to ask why the band numbers of the synthetic data and the experimental data are inconsistent. "All synthetic datasets share the same wavenumbers, 720:10:1250, in unit of 1/cm. The experimental dataset has wavenumbers 760:10:1240, corresponding to the 5th:1:53rd bands of the synthetic scenes." Why do the bands of these two datasets need to be different?

HollyYa commented 1 year ago

And what data are stored in Depth and ResMap, respectively, in the GroundTruth folder?

FanglinBao commented 1 year ago

We synthesized Scenes 1-10 before we got the experimental Scene 11, and we did not know the experimental spectral response range beforehand. Therefore, we used the common LWIR spectral range in synthetic scenes, i.e., 720-1250 /cm, which corresponds to 8-14 um in wavelength. It turned out later that the experimental spectral response range only covers from 760 to 1240 /cm.
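The band alignment described above can be checked numerically. The sketch below uses only the values stated in this thread (720:10:1250 cm⁻¹ synthetic, 760:10:1240 cm⁻¹ experimental) and the standard wavenumber-to-wavelength conversion λ [um] = 10,000 / ν [cm⁻¹]:

```python
# Band alignment between synthetic and experimental wavenumber grids.
synthetic_wn = [720 + 10 * k for k in range(54)]     # 720:10:1250 cm^-1, 54 bands
experimental_wn = [760 + 10 * k for k in range(49)]  # 760:10:1240 cm^-1, 49 bands

# The experimental bands coincide with synthetic bands 5..53 (1-indexed).
start = synthetic_wn.index(experimental_wn[0])       # 0-based index 4
assert synthetic_wn[start:start + 49] == experimental_wn
print(f"experimental bands = synthetic bands {start + 1}..{start + 49}")
# prints: experimental bands = synthetic bands 5..53

# Wavenumber (cm^-1) to wavelength (um): lambda = 10_000 / nu.
lam_short = 10_000 / synthetic_wn[-1]   # 1250 cm^-1 -> 8.0 um
lam_long = 10_000 / synthetic_wn[0]     # 720 cm^-1 -> ~13.9 um
```

This confirms why the first four and the last synthetic channels have no experimental counterpart: the instrument's response simply does not extend to 720-750 or 1250 cm⁻¹.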

FanglinBao commented 1 year ago

Ground-truth Depth is the distance Z in units of meters. Ground-truth ResMap is the residue value defined in Fig. S1a of the HADAR paper, i.e., the minimum physics cost based on the ground-truth e and T.