Closed · adamkarvonen closed this 5 months ago
Hey Adam,
Thanks for raising this. We're moving pretty quickly so it's easy for things to get out of sync. We did catch this and will be implementing integration tests between SAE Vis and SAE Lens shortly.
If you update `sae_vis`, this should be fixed (https://github.com/callummcdougall/sae_vis/blob/d759ef0237089e72cc9cad7edc4eceb4e8cfdd00/sae_vis/data_storing_fns.py#L1026).
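For context, the failure comes from a strict equality check on the state-dict keys, which trips as soon as SAE Lens adds a new key such as `scaling_factor`. A minimal sketch of the pattern (the key names here are illustrative assumptions, not sae_vis's actual list):

```python
# Hypothetical key sets illustrating the failure mode; the real lists
# live in sae_vis's loading code.
expected_keys = {"W_enc", "b_enc", "W_dec", "b_dec"}
loaded_keys = {"W_enc", "b_enc", "W_dec", "b_dec", "scaling_factor"}

# A strict equality assert breaks on the extra key:
try:
    assert loaded_keys == expected_keys
except AssertionError:
    print("unexpected keys:", loaded_keys - expected_keys)

# A forward-compatible check only requires the keys it actually uses:
assert expected_keys <= loaded_keys
```

Relaxing the equality to a subset check is one way to stay compatible with checkpoints that carry extra entries.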
A bit more detail: SAE Vis currently makes strong assumptions about the forward pass of the autoencoder which won't in general be true (e.g. subtracting the decoder bias before encoding, a ReLU activation, etc.). So I think we'll need to find a solution for this (likely creating some spec which allows SAE Vis to treat the autoencoder as a black box).
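Concretely, the assumed forward pass is the standard sparse-autoencoder formulation, which can be sketched in numpy (variable names are illustrative, not sae_vis internals):

```python
import numpy as np

def sae_forward(x, W_enc, b_enc, W_dec, b_dec):
    """Standard SAE forward pass of the kind SAE Vis assumes.

    Any autoencoder that deviates from this (different centering,
    a non-ReLU activation, extra scaling terms) breaks the assumption.
    """
    x_cent = x - b_dec  # pre-encoder subtraction of the decoder bias
    acts = np.maximum(x_cent @ W_enc + b_enc, 0.0)  # ReLU feature activations
    recon = acts @ W_dec + b_dec  # linear decode, bias added back
    return acts, recon

# Tiny example: 2 inputs of dimension 3, 4 SAE features.
rng = np.random.default_rng(0)
x = rng.normal(size=(2, 3))
W_enc = rng.normal(size=(3, 4))
b_enc = np.zeros(4)
W_dec = rng.normal(size=(4, 3))
b_dec = np.zeros(3)

acts, recon = sae_forward(x, W_enc, b_enc, W_dec, b_dec)
assert acts.shape == (2, 4) and recon.shape == (2, 3)
assert (acts >= 0).all()  # ReLU guarantees non-negative activations
```

A black-box spec would replace these hard-coded steps with an interface the SAE implements, so SAE Vis never has to know the forward pass.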
**Describe the bug**
I'm using this notebook on an SAE I created: `basic_loading_and_analysing.ipynb`. I get an error that appears to be because the `scaling_factor` was added to the `SparseAutoencoder` class, which `sae_vis` is not expecting.

**Code example**
When running this cell:
This print statement shows a `scaling_factor` key that isn't in the above assert:

**System Info**
Describe the characteristics of your environment:
- `transformer_lens` was installed (pip, docker, source, ...): pip

**Checklist**