EtPan / MSF-Diff

From the abundance perspective: Multi-modal scene fusion-based hyperspectral image synthesis
MIT License

fusion process #1

Closed BoYangXDU closed 4 months ago

BoYangXDU commented 4 months ago

Hi erting, this is great work and a really interesting idea! I'm curious about the fusion process in your CVPR 2024 work (whose code has not been released yet, and which is similar to this one): how do you generate one synthesized abundance map from two input abundance maps (HSI and RGB)? If possible, could you describe the workflow? Many thanks. :)

EtPan commented 4 months ago

Thanks for your attention and interest!

After unmixing, we can acquire abundance maps from both the HSI and the RGB image. Notably, because the two modalities are assumed to share the same endmembers, their abundance maps have the same shape, which means there is a common low-dimensional abundance space in which these multi-source data can be fused. Hence, during training of the generative models, we simply feed all of these abundance maps in together. Afterwards, we generate a series of synthetic abundance maps using the trained generative models.
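
For illustration only, here is a minimal sketch of that pooling idea (not the authors' code): it assumes both sources have already been unmixed against a shared endmember set of size `P`, so the HSI and RGB abundance tensors have the same shape and can be concatenated into one training set. The tiny convolutional autoencoder below is just a placeholder for the actual generative model; all shapes, names, and hyperparameters are hypothetical.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Assumed shapes: P shared endmembers, H x W abundance patches.
P, H, W = 8, 64, 64
abund_hsi = torch.rand(100, P, H, W)  # placeholder abundances unmixed from HSI
abund_rgb = torch.rand(100, P, H, W)  # placeholder abundances unmixed from RGB

# Pool both sources: thanks to shared endmembers they live in the same
# low-dimensional abundance space, so one generative model covers both.
train_set = TensorDataset(torch.cat([abund_hsi, abund_rgb], dim=0))
loader = DataLoader(train_set, batch_size=16, shuffle=True)

# Stand-in generative model over P-channel abundance maps; the paper uses a
# diffusion model instead of this toy autoencoder.
model = torch.nn.Sequential(
    torch.nn.Conv2d(P, 32, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(32, P, 3, padding=1),
    torch.nn.Softmax(dim=1),  # abundances are non-negative and sum to one
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for (batch,) in loader:
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(batch), batch)
    loss.backward()
    opt.step()

# After training, draw synthetic abundance maps (here simply by pushing noise
# through the model, as a stand-in for the real sampling procedure).
synthetic_abund = model(torch.rand(4, P, H, W))
```

The synthetic abundance maps can then be projected back to pixel space with the shared endmembers to obtain synthesized HSI, which is the overall idea described above.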

I hope this clarifies the workflow for you. The code of UBF (CVPR 2024) will be released soon. If you have any further questions or need more details, feel free to ask!

BoYangXDU commented 4 months ago

Thanks, erting, that helps a lot. Many thanks.