**Open** · Ultraman6 opened 1 month ago
Hi, I ran through your program. Why are only the LoRA weights saved at the end of fine-tuning? The prompt encoder and mask decoder downstream of SAM are trainable ("hot") parameters, so don't they need to be saved as well? In your inference_eval.py, only the LoRA parameters are loaded, and the parameters downstream of SAM are not considered at all.

Hi, for the training checkpoints, see the "limitation" section of the README; it explains why I chose to save only at the end. As for loading weights, I freeze all of SAM's weights, so loading a stock SAM model is enough. Therefore I only need to load the LoRA checkpoint.
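To make the freeze-and-save logic concrete, here is a minimal sketch (not this repo's actual code) of how a LoRA wrapper can freeze the base layer so that only the low-rank tensors need to be checkpointed and reloaded. `LoRALinear`, `save_lora`, and `load_lora` are hypothetical names introduced for illustration:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank update (hypothetical sketch)."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the original SAM weight; it is never saved
        # Standard LoRA init: A is small random, B is zero, so the wrapper
        # initially reproduces the frozen layer exactly.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.lora_A.T @ self.lora_B.T

def save_lora(model: nn.Module, path: str) -> None:
    """Save only the LoRA tensors; the frozen SAM weights come from the original checkpoint."""
    lora_state = {k: v for k, v in model.state_dict().items() if "lora_" in k}
    torch.save(lora_state, path)

def load_lora(model: nn.Module, path: str) -> None:
    """Load the LoRA-only checkpoint on top of an already-built (frozen) model."""
    model.load_state_dict(torch.load(path), strict=False)  # only LoRA keys present

if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(16, 16), rank=4)
    _ = layer(torch.randn(2, 16))
    save_lora(layer, "lora_only.pt")  # file contains just lora_A / lora_B
    load_lora(layer, "lora_only.pt")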