This PR adds a train parameter, `lora_target_modules`, that allows users to configure LoRA target modules via the API. The defaults remain the target modules we've been using. However, we do not currently restrict target modules in any way, so it is up to the user to specify correct module names and to anticipate whether their inference engine is compatible with the modules they've specified.
As such, this feature provides no safety checks: users are expected to know what they're doing. Future implementations should probably build in some guardrails, but for now I think this is sufficient for experimentation.
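As a rough sketch of the pass-through behavior described above (the function name, payload shape, and default module list here are all illustrative assumptions; only the `lora_target_modules` parameter comes from this PR):

```python
# Hypothetical sketch: only `lora_target_modules` is from this PR;
# the builder function and the default list below are assumptions.

DEFAULT_TARGET_MODULES = ["q_proj", "k_proj", "v_proj", "o_proj"]  # assumed default

def build_train_config(lora_target_modules=None):
    """Fall back to the existing defaults when the caller omits the parameter."""
    return {
        "lora_target_modules": lora_target_modules or DEFAULT_TARGET_MODULES,
    }

# The override is passed through unvalidated, so a typo like "q_porj"
# would reach the training backend unchanged.
cfg = build_train_config(["q_proj", "v_proj"])
print(cfg["lora_target_modules"])
```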
General thought: now that this functionality exists, what would it look like to add a few quick tests for it? No need to do it now (i.e., not blocking), but I think we should start doing something like that soon.
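Something like the following could be a starting point for those quick tests. This is a hedged sketch, not the repo's real API: the `build_train_config` helper and default module names are assumptions standing in for whatever the actual config-building code path is.

```python
# Hypothetical quick tests; `build_train_config` is a stand-in for the
# real code path that consumes `lora_target_modules`.

def build_train_config(lora_target_modules=None):
    # stand-in implementation, assumed defaults
    return {"lora_target_modules": lora_target_modules or ["q_proj", "v_proj"]}

def test_default_target_modules_unchanged():
    # Omitting the parameter should preserve the existing defaults.
    assert build_train_config()["lora_target_modules"] == ["q_proj", "v_proj"]

def test_custom_target_modules_passed_through():
    # An explicit value should be forwarded as-is, with no filtering.
    assert build_train_config(["gate_proj"])["lora_target_modules"] == ["gate_proj"]

test_default_target_modules_unchanged()
test_custom_target_modules_passed_through()
```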