Hi @JJGO,
Thanks a lot for sharing the code.
In Section 4 (Output Encoding) of the paper, it is mentioned that the method introduces a set of learnable parameters \theta_0 and uses the hypernetwork predictions as additive changes to them.
Does \theta_0 correspond to the bias of the output layer of the FCN, which is also called the independent weights in the code's classes?
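To make sure I'm reading the paper correctly, here is a minimal numpy sketch of the additive scheme as I understand it. All names and shapes here are my own placeholders for illustration, not the repo's actual classes or parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

dim_h, dim_out = 8, 3

# theta_0: learnable base parameters, shared across inputs.
# My reading is that this is the bias-like "independent weights"
# of the FCN output layer (this is the part I want to confirm).
theta_0 = rng.standard_normal(dim_out)

# The hypernetwork predicts an additive change delta_theta from
# its input h (W_hyper stands in for the hypernetwork here).
h = rng.standard_normal(dim_h)
W_hyper = rng.standard_normal((dim_out, dim_h)) * 0.01
delta_theta = W_hyper @ h

# Effective parameters are the sum of the two terms.
theta = theta_0 + delta_theta
```

So the output-layer parameters would be theta_0 plus the hypernetwork's prediction, rather than the hypernetwork predicting them from scratch. Is that the right picture?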
Thank you!