Hello, I am currently working on the SAN algorithm in mmsegmentation.
I don't want to freeze the parameters of CLIPTextEncoder; I want them to take part in the gradient calculation. So I removed self._freeze() and the @torch.no_grad() decorator from mmseg/models/text_encoder/clip_text_encoder.py, but at runtime I get this error: "RuntimeError: Trying to backward through the graph a second time (or directly access saved variables after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved variables after calling backward."
Are the text module's parameters also set somewhere else in the underlying code?
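For reference, I can reproduce the same error with a minimal PyTorch snippet that is unrelated to SAN itself. It shows the general pattern I suspect is happening: a tensor produced by a trainable module is computed once outside the training loop and then reused in backward() on a later iteration, so the second backward walks a graph whose saved values were already freed. (The names here are illustrative, not from mmsegmentation.)

```python
import torch

# Stand-in for a trainable text encoder (hypothetical, not CLIPTextEncoder).
encoder = torch.nn.Linear(4, 4)

# Computed once, so the autograd graph is built once and shared by all steps.
text_feats = encoder(torch.randn(2, 4))

for step in range(2):
    loss = text_feats.sum()       # reuses the same saved graph every step
    try:
        loss.backward()           # second call raises the RuntimeError above
    except RuntimeError as err:
        print(f"step {step}: {err}")
```

If this is what happens internally, the fix would be to recompute the text embeddings inside each forward pass once the encoder is trainable, rather than caching them (caching is only safe when the encoder runs under @torch.no_grad()).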