Checklist
[x] I have checked that there is no similar issue in the repo (required)
Proposal
I think you can pass stop_at_layer to this run_with_cache call for a free speed-up: https://github.com/jbloomAus/SAELens/blob/2c1cbc4d0a6c446bf62ac9f84760e3f041bc021e/sae_lens/evals.py#L216
See activation_store for reference: https://github.com/jbloomAus/SAELens/blob/main/sae_lens/training/activations_store.py#L430-L437
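As a rough sketch of the idea (the helper name here is hypothetical, not part of SAELens): since TransformerLens's stop_at_layer runs blocks up to but not including the given index, the value can be derived from the SAE's hook point name, e.g. a hook in block N needs stop_at_layer=N+1.

```python
def stop_layer_for_hook(hook_name: str):
    """Hypothetical helper: derive a stop_at_layer value from a
    TransformerLens hook point name like 'blocks.5.hook_resid_pre'.

    stop_at_layer=N+1 runs blocks 0..N, which is enough to populate
    every hook inside block N. Returns None for hooks outside the
    block stack (e.g. 'ln_final.hook_normalized'), where the full
    forward pass is needed.
    """
    parts = hook_name.split(".")
    if len(parts) >= 2 and parts[0] == "blocks" and parts[1].isdigit():
        return int(parts[1]) + 1
    return None

print(stop_layer_for_hook("blocks.5.hook_resid_pre"))  # 6
print(stop_layer_for_hook("ln_final.hook_normalized"))  # None
```

The result would then be passed as stop_at_layer to run_with_cache, mirroring what activations_store already does, so the remaining layers are skipped entirely.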
Motivation
Model forward passes are more compute-intensive than SAEs. Although I expect collecting activations to be a bigger bottleneck than evals, this change seems like low-hanging fruit unless I'm missing something.