Stable-X / StableNormal

[SIGGRAPH Asia 2024 (Journal Track)] StableNormal: Reducing Diffusion Variance for Stable and Sharp Normal
Apache License 2.0

Is it possible to fine-tune StableNormal for different effects? #21

Open FrankEscobar opened 1 month ago

FrankEscobar commented 1 month ago

Hello! I’m curious to know if StableNormal can be fine-tuned or re-trained for other types of effects beyond monocular normal estimation. Similar to how you’ve adapted StableDelight, would it be possible to tailor StableNormal for different tasks or applications? If so, could you provide some guidance or recommendations on the fine-tuning process?

Thank you!

lingtengqiu commented 1 month ago

We find that the dataset is what matters most when repurposing SD for pix2pix tasks or applications. So you need to collect your own training data pairs, such as (image–normal, image–depth, etc.).
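As a concrete illustration of "collecting training data pairs": a minimal sketch that matches input images to supervision maps (normal, depth, specular, ...) by filename stem. The directory layout and function name are assumptions for illustration, not StableNormal's actual training data format.

```python
from pathlib import Path

def pair_by_stem(img_dir, target_dir, exts=(".png", ".jpg")):
    """Pair each input image with its supervision target (normal map,
    depth map, specular map, ...) that shares the same filename stem.
    Hypothetical helper; adapt to your own dataset layout."""
    targets = {p.stem: p for p in Path(target_dir).iterdir()
               if p.suffix.lower() in exts}
    pairs = []
    for img in sorted(Path(img_dir).iterdir()):
        if img.suffix.lower() in exts and img.stem in targets:
            pairs.append((img, targets[img.stem]))
    return pairs
```

Inputs without a matching target are silently skipped, so a quick `len(pairs)` check against your expected dataset size is a cheap sanity test before training.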

FrankEscobar commented 1 month ago

Yeah, for sure! I've made my own attempt using pix2pixHD, and the quality is far from what you've achieved!

lingtengqiu commented 1 month ago

Could you give me some examples of your training data pairs?

FrankEscobar commented 1 month ago

Unfortunately I cannot share the data, since it belongs to the company I work for, but it was more than a thousand image <-> specular pairs.

Ideally it would be something like training a LoRA on your base model with our data.
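For reference, the core LoRA idea is small enough to sketch: instead of updating a frozen weight matrix W, you learn a low-rank update BA alongside it. A minimal NumPy sketch over a plain linear layer (purely illustrative; StableNormal's actual modules and any real LoRA tooling such as PEFT are not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight of a hypothetical linear layer.
d_in, d_out, rank = 64, 64, 4
W = rng.standard_normal((d_out, d_in))  # frozen during fine-tuning

# LoRA: learn a low-rank update dW = B @ A instead of touching W.
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable
B = np.zeros((d_out, rank))                   # trainable, zero-init

def forward(x, scale=1.0):
    # Base path plus the low-rank adapter path; with B == 0 at init,
    # the output matches the frozen model exactly.
    return x @ W.T + scale * (x @ A.T @ B.T)

x = rng.standard_normal((2, d_in))
assert np.allclose(forward(x), x @ W.T)  # identity at init

# Trainable parameters: full fine-tuning vs. LoRA.
full = W.size              # 64 * 64 = 4096
lora = A.size + B.size     # 4 * 64 + 64 * 4 = 512
```

The appeal for a use case like this one: only A and B are trained on the ~1000 image <-> specular pairs, which cuts the trainable parameter count sharply and lowers the risk of catastrophically overwriting the pretrained prior.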

lingtengqiu commented 1 month ago

Do you mean using LoRA to fine-tune our model on your dataset?

FrankEscobar commented 1 month ago

I was just referring to fine-tuning your model; it could be training a new model or some kind of LoRA on top of your base model. Either option would be great.

My question was just about using a different dataset, in my case image <-> specular, to somehow retrain your system as you did in StableDelight.

By the way, how many pairs would be necessary?

FrankEscobar commented 1 month ago

@lingtengqiu do you plan to share a way to retrain that on the future?

lingtengqiu commented 1 month ago

We plan to clean up our training code and release it ASAP. Please wait patiently :)

FrankEscobar commented 1 month ago

> We plan to clean up our training code and release it ASAP. Please wait patiently :)

Great, thank you! If possible, could you tell us whether it will be full training or fine-tuning?