Added a function `convert_to_lora_model`, which converts an already existing `tf.keras.Model` to its LoRA counterpart.
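A minimal sketch of the conversion idea, using stand-in classes rather than the real tfimm/Keras API (the `Dense`/`LoRADense` classes, the `from_layer` constructor, and the registry below are all hypothetical illustrations, not the actual implementation):

```python
# Hypothetical sketch: swap each layer for its LoRA counterpart, if one
# is registered. Stand-in classes only; not the real tfimm API.
class Dense:
    def __init__(self, units):
        self.units = units

class LoRADense(Dense):
    @classmethod
    def from_layer(cls, layer):
        # Copy the configuration of the existing layer (assumed helper).
        return cls(layer.units)

# Registry mapping base layer classes to LoRA counterparts.
LORA_LAYER_REGISTRY = {Dense: LoRADense}

def convert_to_lora_model(layers):
    """Replace each registered layer with its LoRA counterpart."""
    return [
        LORA_LAYER_REGISTRY[type(layer)].from_layer(layer)
        if type(layer) in LORA_LAYER_REGISTRY
        else layer
        for layer in layers
    ]

model = [Dense(16), Dense(8)]
lora_model = convert_to_lora_model(model)
print([type(layer).__name__ for layer in lora_model])
# -> ['LoRADense', 'LoRADense']
```

The real function operates on a `tf.keras.Model` rather than a plain list, but the registry-lookup-and-replace pattern is the same.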
Renamed `mark_..._as_trainable` to `set_..._as_trainable`. TF uses `set_...`, as does tfimm already in various places. The two seem synonymous, so we should standardise terminology as much as possible.
The three `..._as_trainable` functions are now called:

- `set_only_lora_layers_trainable` is the function in `factory.py`. It operates at layer level.
- `set_only_lora_weights_trainable(train_bias: bool)` is the function in LoRA layers. It operates at weight level.
- `_set_bias_weights_trainable` is the auxiliary function. There is no `only` in its name, since it simply changes the behaviour of the bias weights.
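The layer-level/weight-level split could be sketched as follows; the `FakeWeight` class and the weight names are hypothetical stand-ins for Keras weights, not the actual tfimm code:

```python
# Hypothetical sketch of the layer-level vs weight-level split.
class FakeWeight:
    """Stand-in for a Keras weight with a name and a trainable flag."""
    def __init__(self, name, trainable=True):
        self.name = name
        self.trainable = trainable

class LoRADense:
    is_lora_layer = True

    def __init__(self):
        self.weights = [FakeWeight("kernel"), FakeWeight("lora_a"),
                        FakeWeight("lora_b"), FakeWeight("bias")]

    def set_only_lora_weights_trainable(self, train_bias: bool):
        # Weight level: only the LoRA matrices stay trainable ...
        for w in self.weights:
            w.trainable = w.name.startswith("lora_")
        # ... while bias handling is delegated to the auxiliary function.
        self._set_bias_weights_trainable(train_bias)

    def _set_bias_weights_trainable(self, train_bias: bool):
        # Auxiliary: only changes the behaviour of bias weights.
        for w in self.weights:
            if w.name == "bias":
                w.trainable = train_bias

def set_only_lora_layers_trainable(layers, train_bias: bool):
    # Layer level: delegate to the weight-level method on LoRA layers.
    # (The real version would also freeze non-LoRA layers.)
    for layer in layers:
        if getattr(layer, "is_lora_layer", False):
            layer.set_only_lora_weights_trainable(train_bias)

layer = LoRADense()
set_only_lora_layers_trainable([layer], train_bias=False)
print({w.name: w.trainable for w in layer.weights})
# -> {'kernel': False, 'lora_a': True, 'lora_b': True, 'bias': False}
```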
Changed the keys in the LoRA model registry from the name of the class to the class itself, i.e., `ConvNeXt` instead of `"ConvNeXt"`. Turns out types are hashable...
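In other words (with hypothetical placeholder classes; `LoRAConvNeXt` is invented for illustration), since classes are hashable objects, they can be dictionary keys directly:

```python
# Illustration only: types are hashable, so class objects can key a dict.
class ConvNeXt:
    pass

class LoRAConvNeXt:
    pass

# Before: keyed by the class *name* -> {"ConvNeXt": LoRAConvNeXt}
# Now: keyed by the class itself.
lora_model_registry = {ConvNeXt: LoRAConvNeXt}

print(lora_model_registry[ConvNeXt].__name__)
# -> LoRAConvNeXt
```

Keying by the class avoids a string lookup and keeps the registry robust under renames caught by refactoring tools.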
Added a class attribute `is_lora_layer: bool = True` to the `LoRADense` layer. Now we can query any layer via `getattr(layer, "is_lora_layer", False)` to determine whether it is a LoRA layer. This provides more uniform querying, regardless of whether we then use the `merge_lora_weights` or `set_only_lora_weights_trainable` methods, or anything else.
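The querying pattern in miniature (stand-in classes, not the real layers):

```python
# Stand-in classes to illustrate the marker-attribute pattern.
class LoRADense:
    is_lora_layer = True  # class attribute marking LoRA layers

class Dense:
    pass  # regular layer: no marker attribute

layers = [Dense(), LoRADense(), Dense()]

# getattr with a default avoids isinstance checks against every LoRA class.
lora_layers = [l for l in layers if getattr(l, "is_lora_layer", False)]
print(len(lora_layers))
# -> 1
```

The `getattr(..., False)` default means plain layers need no changes at all; only LoRA layers opt in by defining the attribute.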