jbloomAus / SAELens

Training Sparse Autoencoders on Language Models
https://jbloomaus.github.io/SAELens/
MIT License
481 stars 127 forks

fix: use the correct layer for new gemma scope SAE sparsities #348

Closed: hijohnnylin closed this 1 month ago

hijohnnylin commented 1 month ago

Description

Fixes the sparsity lookup for the new Gemma Scope SAEs: it previously used the incorrect layer 5, and now uses the correct layer 12.
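For context, the "sparsities" attached to Gemma Scope SAEs are per-SAE activation statistics, so loading them from the wrong layer silently pairs one layer's SAE with another layer's statistics. As a loose illustration of what an L0-style sparsity measures (this is not SAELens's implementation; the function name and data are hypothetical), a minimal NumPy sketch:

```python
import numpy as np

def average_l0(feature_acts: np.ndarray) -> float:
    """Average number of non-zero SAE features per token (L0 sparsity)."""
    return float((feature_acts != 0).sum(axis=-1).mean())

# Two tokens, three features each; exactly one feature fires per token.
acts = np.array([[0.0, 1.2, 0.0],
                 [0.5, 0.0, 0.0]])
print(average_l0(acts))  # prints 1.0
```

Because each layer's SAE has its own activation distribution, an L0 computed on layer 12 activations is only meaningful alongside the layer 12 SAE, which is what this fix restores.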


codecov[bot] commented 1 month ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 65.84%. Comparing base (ae345b6) to head (a78b93e). Report is 3 commits behind head on main.

Additional details and impacted files

```diff
@@           Coverage Diff           @@
##             main     #348   +/-   ##
=======================================
  Coverage   65.84%   65.84%
=======================================
  Files          25       25
  Lines        3288     3288
  Branches      421      421
=======================================
  Hits         2165     2165
  Misses       1009     1009
  Partials      114      114
```

:umbrella: View full report in Codecov by Sentry.