Xiangtaokong / MiOIR

Towards Effective Multiple-in-One Image Restoration: A Sequential and Prompt Learning Strategy

How are the explicit prompts stored in "data/type7.npy" learned? #3

Closed Yaziwel closed 6 months ago

Yaziwel commented 8 months ago

This is great work. However, I am confused about how the explicit prompts are learned.

Xiangtaokong commented 8 months ago

Thank you for your attention! In practice, data/type7.npy contains randomly generated vectors. During training they are fixed, so each one is equivalent to a sign that represents a kind of degradation. Using an H*W vector here is really just to align with the extractor; using a fixed flag should have the same effect.

"explicit prompt" depends on degradation type which is given by the classifier (or directly given by user). Depend on that type, pick a H*W vector of type7.npy as the prompt.

Yaziwel commented 8 months ago

Thanks for your reply. If the explicit prompts stored in "data/type7.npy" are fixed, then at test time the 3-layer CNNs and the corresponding FC layers can be discarded after transforming the explicit prompts into "latent prompts". I mean that at test time we only need to switch among the latent codes after the FC layers accordingly.
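A hedged sketch of this test-time simplification, not MiOIR's actual code: `Fext` below stands in for the 3-layer CNN + FC prompt extractor, and its layer sizes, latent dimension, and prompt shape are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

# Stand-in for the prompt extractor (3-layer CNN + FC).
Fext = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 64),  # FC layer that produces the latent prompt
)

explicit_prompts = torch.from_numpy(np.load("data/type7.npy")).float()  # fixed (7, H, W)

# Because the explicit prompts never change, the extractor can be run once
# offline; afterwards the CNN/FC layers can be dropped from the test-time model.
with torch.no_grad():
    latent_prompts = [Fext(p.unsqueeze(0).unsqueeze(0)) for p in explicit_prompts]

def get_latent_prompt(type_idx: int) -> torch.Tensor:
    # At inference we only switch among the cached latent codes.
    return latent_prompts[type_idx]
```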

Xiangtaokong commented 8 months ago

Yes, the detailed code is in lines 939 to 960 of Ir_model.py. I switch the input directly; switching the results of Fext() is also fine.