Closed: chanind closed 2 weeks ago
Attention: Patch coverage is 90.90909% with 2 lines in your changes missing coverage. Please review. Project coverage is 66.26%. Comparing base (8e09458) to head (beb2193). Report is 2 commits behind head on main.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| sae_lens/training/sae_trainer.py | 80.00% | 2 Missing :warning: |
discussed / approved offline
Description
This PR refactors `TrainStepOutput` in `TrainingSAE.training_forward_pass()` so that `losses` is passed as a dict rather than a hardcoded set of specific losses. The current implementation requires always providing values for ghost grad loss and auxiliary loss, even when they don't make sense for the training method. As SAELens supports more architectures, there will likely be more losses that only apply to certain architectures, so it makes sense to let the training forward pass define the losses that fit the circumstances. This should also make it easier for researchers to hack on SAELens SAEs, since they can define any losses they want to track.
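As a rough illustration of the idea, a simplified sketch of what the refactored output could look like (field names other than `losses` are hypothetical here, not SAELens's actual API):

```python
# Hypothetical, simplified sketch of a TrainStepOutput with a losses dict.
# Only `losses` as dict[str, torch.Tensor] reflects the PR; other fields
# are illustrative.
from dataclasses import dataclass, field

import torch


@dataclass
class TrainStepOutput:
    sae_out: torch.Tensor
    loss: torch.Tensor  # total loss used for the backward pass
    # Each architecture's forward pass registers only the losses that
    # apply to it, instead of always filling ghost-grad / aux slots.
    losses: dict[str, torch.Tensor] = field(default_factory=dict)


# A standard SAE forward pass might record only MSE and L1 terms:
mse = torch.tensor(0.5)
l1 = torch.tensor(0.1)
out = TrainStepOutput(
    sae_out=torch.zeros(4),
    loss=mse + l1,
    losses={"mse_loss": mse, "l1_loss": l1},
)
```

Because `losses` is an open dict, a custom architecture can add e.g. an `"aux_loss"` entry without every other training method having to supply one.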
This PR also allows the losses to remain as tensors rather than being converted to floats. The current implementation requires calling `.item()` on each loss, which forces a synchronization with the GPU and likely slows down training slightly. We only ever use the losses for logging, which happens by default once every 100 steps, so this synchronization is unnecessary.
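The deferred-`.item()` pattern described above can be sketched as follows (the helper name and logging interval constant are hypothetical, used only to illustrate the point):

```python
# Hypothetical sketch: losses stay as tensors, and `.item()` (a CPU-GPU
# sync point) runs only on steps where we actually log.
import torch

LOG_EVERY = 100  # illustrative; mirrors the "once every 100 steps" default


def maybe_log(step: int, losses: dict[str, torch.Tensor]) -> dict[str, float]:
    """Convert losses to floats only when this step is a logging step."""
    if step % LOG_EVERY != 0:
        return {}  # no .item() calls, so no GPU synchronization
    return {name: loss.item() for name, loss in losses.items()}


losses = {"mse_loss": torch.tensor(0.5), "l1_loss": torch.tensor(0.1)}
assert maybe_log(1, losses) == {}  # non-log step: no sync at all
logged = maybe_log(100, losses)    # log step: one .item() per loss
```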
Checklist:
- You have tested formatting, typing and unit tests (acceptance tests not currently in use)
- You have run `make check-ci` to check format and linting. (You can run `make format` to format code if needed.)