Some minor comments but core logic looks great! I'll try training a JumpReLU SAE to make sure this works.
It feels odd that the `l1_coefficient` is used for the JumpReLU sparsity loss coefficient, and that it gets logged as `l1_loss`. Maybe in a follow-up PR we should rename `l1_coefficient` to `sparsity_coefficient` instead? I added PR #357, which lets the forward pass log any losses it wants, so if that gets merged, it would probably make sense to call the loss `sparsity_loss`, since there's not technically any L1 loss in JumpReLU.
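For illustration, here's a rough sketch of the kind of interface that change could enable; the function and parameter names here are assumptions made for the example, not the actual SAELens API:

```python
from typing import Dict
import torch

# Hypothetical sketch, not the actual PR #357 implementation: the forward
# pass returns its losses under descriptive keys, so JumpReLU can log
# "sparsity_loss" instead of "l1_loss".
def forward_losses(
    sae_out: torch.Tensor,
    x: torch.Tensor,
    feature_acts: torch.Tensor,
    sparsity_coefficient: float,
) -> Dict[str, torch.Tensor]:
    mse_loss = (sae_out - x).pow(2).mean()
    # JumpReLU penalizes L0 (the number of active features), not L1:
    l0 = (feature_acts > 0).float().sum(dim=-1).mean()
    return {"mse_loss": mse_loss, "sparsity_loss": sparsity_coefficient * l0}
```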
Thanks! Let me know if there's anything else you need from me.
And I'm guessing your suggestion makes sense, unless you think we should keep the `l1_coefficient` field for the other architecture(s) and add a `sparsity_coefficient` that's just used for JumpReLU.
Trying a test run now, but one more thing: the typing for `architecture` in `LanguageModelSAERunnerConfig` needs to be updated as well. It's currently set to `"standard" | "gated"`, so it throws a typing error when setting `"jumprelu"`.
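A minimal sketch of the fix, assuming the field is declared with a `Literal` type (the exact declaration in the config class may differ):

```python
from typing import Literal

# Widen the accepted architectures so "jumprelu" type-checks:
architecture: Literal["standard", "gated", "jumprelu"] = "standard"
```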
It seems like the general paradigm of most of the recent SAE architectures is having some sort of sparsity-inducing loss, either L1 in standard SAEs or L0 in JumpReLU (or nothing for TopK), and then an optional auxiliary loss. I'd be for just calling it `sparsity_coefficient` to handle the architecture differences, but this should probably be discussed in a follow-up issue.
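To make that paradigm concrete, here's a minimal sketch of how a single `sparsity_coefficient` could cover the different architectures; this is illustrative, not the repo's actual training code, and a real JumpReLU L0 term needs straight-through estimators to be differentiable:

```python
import torch

def sparsity_loss(
    feature_acts: torch.Tensor,  # [batch, d_sae]
    architecture: str,
    sparsity_coefficient: float,
) -> torch.Tensor:
    if architecture == "standard":
        # L1 penalty on feature activations
        return sparsity_coefficient * feature_acts.abs().sum(dim=-1).mean()
    if architecture == "jumprelu":
        # L0 penalty: mean number of active features (non-differentiable
        # as written; training uses a straight-through estimator)
        return sparsity_coefficient * (feature_acts > 0).float().sum(dim=-1).mean()
    # TopK enforces sparsity structurally, so there's no loss term
    return torch.tensor(0.0)
```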
I tried training for 1B tokens with a sparsity coeff of 1e-3 (which I think is reasonable based on the Gemma Scope paper). Looks like L0 is still coming down by the end of training (Gemma Scope trains on 4B tokens), so I'm assuming this would plateau at a reasonable loss if I kept going.
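Roughly, the run described above would look like this; `architecture` and `l1_coefficient` come from this thread, while the remaining field names are assumptions about the config and may differ from the actual API:

```python
from sae_lens import LanguageModelSAERunnerConfig

cfg = LanguageModelSAERunnerConfig(
    architecture="jumprelu",
    l1_coefficient=1e-3,  # reused as the JumpReLU L0 sparsity coefficient
    training_tokens=1_000_000_000,  # assumed field name for the 1B-token run
)
```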
LGTM! Great work with this!
Thanks!
Description
Adds logic to train JumpReLU SAEs. JumpReLU is a state-of-the-art architecture, so users should be able to train JumpReLU SAEs. Currently, they can only load pre-trained JumpReLU SAEs and perform inference with them.
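For reference, a minimal sketch of the JumpReLU activation from the Gemma Scope paper, where a feature passes through only if it exceeds a learned per-feature threshold:

```python
import torch

def jumprelu(z: torch.Tensor, threshold: torch.Tensor) -> torch.Tensor:
    # JumpReLU(z) = z * H(z - threshold), with H the Heaviside step function;
    # training the threshold requires straight-through estimators (omitted here).
    return z * (z > threshold).float()
```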
Fixes #330
Checklist:

- [ ] You have tested formatting, typing and unit tests (acceptance tests not currently in use). Run `make check-ci` to check format and linting. (You can run `make format` to format code if needed.)

Performance Check
If you have implemented a training change, please indicate precisely how performance changes with respect to the following metrics:
Please link to wandb dashboards with a control and test group.