jbloomAus / SAELens

Training Sparse Autoencoders on Language Models
https://jbloomaus.github.io/SAELens/
MIT License

Add logic to train JumpReLU SAEs #352

Closed · anthonyduong9 closed 2 weeks ago

anthonyduong9 commented 3 weeks ago

Description

Adds logic to train JumpReLU SAEs. JumpReLU is a state-of-the-art architecture, so users should be able to train JumpReLU SAEs; currently, they can only load pre-trained JumpReLU SAEs and run inference with them.

Fixes #330

Type of change


Checklist:

You have run formatting, typing, and unit tests (acceptance tests not currently in use)

Performance Check.

If you have implemented a training change, please indicate precisely how performance changes with respect to the following metrics:

Please link to wandb dashboards with a control and a test group.

anthonyduong9 commented 2 weeks ago

> Some minor comments, but the core logic looks great! I'll try training a JumpReLU SAE to make sure this works.
>
> It feels odd that l1_coefficient is used for the JumpReLU sparsity loss coefficient, and that it gets logged as l1_loss. Maybe in a follow-up PR we should rename l1_coefficient to sparsity_coefficient instead? I added PR #357, which lets the forward pass log any losses it wants, so if that gets merged it would probably make sense to call the loss sparsity_loss, since there's technically no L1 loss in JumpReLU.

Thanks! Let me know if there's anything else you need from me.

And unless you think we should keep the l1_coefficient field for the other architecture(s) and add a sparsity_coefficient that's used only for JumpReLU, your suggestion makes sense.
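Purely for illustration, a minimal sketch of how a single renamed coefficient could cover the architecture differences; the helper name and structure are hypothetical, not the SAELens implementation:

```python
import torch

def sparsity_penalty(feature_acts: torch.Tensor,
                     architecture: str,
                     sparsity_coefficient: float) -> torch.Tensor:
    # Hypothetical helper for illustration only.
    if architecture == "standard":
        # L1 penalty on the feature activations.
        return sparsity_coefficient * feature_acts.abs().sum(dim=-1).mean()
    if architecture == "jumprelu":
        # L0 penalty (count of active features); the real training loss makes
        # this differentiable with a straight-through estimator on the threshold.
        return sparsity_coefficient * (feature_acts > 0).float().sum(dim=-1).mean()
    # e.g. topk: sparsity is enforced by construction, so no penalty term.
    return feature_acts.new_zeros(())
```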

chanind commented 2 weeks ago

Trying a test-run now, but one more thing: the typing of architecture on LanguageModelSAERunnerConfig needs to be updated as well. It's currently "standard" | "gated", so it throws a typing error when setting "jumprelu".
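Something along these lines (a sketch only; the actual field, defaults, and surrounding config in the repo differ):

```python
from dataclasses import dataclass
from typing import Literal

# Sketch of the kind of change meant here: widen the accepted architectures so
# that "jumprelu" type-checks. The real LanguageModelSAERunnerConfig has many
# more fields; only the relevant one is shown.
@dataclass
class LanguageModelSAERunnerConfig:
    architecture: Literal["standard", "gated", "jumprelu"] = "standard"
```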

chanind commented 2 weeks ago

It seems like the general paradigm across most of the recent SAE architectures is some sort of sparsity-inducing loss (L1 for standard SAEs, L0 for JumpReLU, nothing for TopK), plus an optional auxiliary loss. I'd be in favor of just calling it sparsity_coefficient to handle the architecture differences, but that should probably be discussed in a separate follow-up issue.
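For concreteness, a rough sketch of how the JumpReLU L0 term is typically made trainable, following the Gemma Scope straight-through-estimator recipe with a rectangle kernel; this is illustrative and not necessarily how SAELens implements it (shapes, bandwidth, and init values here are arbitrary):

```python
import torch

class StepSTE(torch.autograd.Function):
    """Heaviside step with a straight-through gradient for the threshold."""

    @staticmethod
    def forward(ctx, z, threshold, bandwidth: float):
        ctx.save_for_backward(z, threshold)
        ctx.bandwidth = bandwidth
        return (z > threshold).to(z.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        z, threshold = ctx.saved_tensors
        eps = ctx.bandwidth
        # Pseudo-derivative w.r.t. the threshold: nonzero only within a window
        # of width eps around the threshold.
        in_window = ((z - threshold).abs() < eps / 2).to(z.dtype)
        grad_threshold = -(1.0 / eps) * in_window * grad_output
        # Sum out batch dims that were broadcast against the per-feature threshold.
        while grad_threshold.dim() > threshold.dim():
            grad_threshold = grad_threshold.sum(dim=0)
        # No gradient to z through the hard step; none for the bandwidth constant.
        return None, grad_threshold, None


# Usage sketch: per-feature thresholds, a batch of pre-activations, and the
# L0 term that the sparsity coefficient multiplies.
threshold = torch.nn.Parameter(torch.full((16,), 0.001))
z = torch.randn(8, 16)
l0_per_example = StepSTE.apply(z, threshold, 0.001).sum(dim=-1)
sparsity_loss = 1e-3 * l0_per_example.mean()
sparsity_loss.backward()  # gradients reach the threshold via the STE
```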

chanind commented 2 weeks ago
[Screenshot, 2024-11-03: training curves for the JumpReLU run, showing L0 over training]

I tried training for 1B tokens with a sparsity coefficient of 1e-3 (which I think is reasonable based on the Gemma Scope paper). It looks like L0 is still coming down by the end of training (Gemma Scope trains on 4B tokens), so I'm assuming this would plateau at a reasonable loss if I kept going.
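For reference, a rough sketch of how such a run might be configured. The field names follow the SAELens config around the time of this PR but are assumptions that may differ by version, and the model, hook point, and dataset values are placeholders rather than the run actually used:

```python
from sae_lens import LanguageModelSAERunnerConfig, SAETrainingRunner

# Rough sketch of a run like the one described above (1B tokens, sparsity
# coefficient 1e-3). Field names are assumptions and may differ by version.
cfg = LanguageModelSAERunnerConfig(
    architecture="jumprelu",
    model_name="gemma-2-2b",                # placeholder model
    hook_name="blocks.12.hook_resid_post",  # placeholder hook point
    d_in=2304,                              # width at the placeholder hook
    expansion_factor=16,
    l1_coefficient=1e-3,       # currently doubles as the JumpReLU sparsity coefficient
    training_tokens=1_000_000_000,          # 1B tokens, as in the run above
    lr=3e-4,
    log_to_wandb=True,
)
SAETrainingRunner(cfg).run()
```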

anthonyduong9 commented 2 weeks ago

> LGTM! Great work with this!

Thanks!