EleutherAI / sae-auto-interp

https://blog.eleuther.ai/autointerp/
Apache License 2.0

Save cached latents as caching progresses #38

Open SrGonao opened 2 weeks ago

kernel-loophole commented 1 week ago

@SrGonao I would love to work on this. Can you provide more details about it?

SrGonao commented 1 week ago

Currently, we do feature caching by keeping all activations in memory before saving them (https://github.com/EleutherAI/sae-auto-interp/blob/v0.2/sae_auto_interp/features/cache.py#L208-L242). We could instead save them to disk every X tokens and merge the partial files at the end. This would allow people to do longer runs where the feature activations don't all fit in memory.
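
A minimal sketch of the idea, not the actual `cache.py` API: buffer activations in memory, flush them to a numbered chunk file once a token threshold is reached, and concatenate the chunks when caching finishes. All names here (`ChunkedActivationCache`, `flush_every`, `save_dir`) are hypothetical.

```python
from pathlib import Path

import torch


class ChunkedActivationCache:
    def __init__(self, save_dir: str, flush_every: int = 1_000_000):
        self.save_dir = Path(save_dir)
        self.save_dir.mkdir(parents=True, exist_ok=True)
        self.flush_every = flush_every  # flush to disk after this many tokens
        self.buffer: list[torch.Tensor] = []
        self.tokens_in_buffer = 0
        self.num_chunks = 0

    def add(self, activations: torch.Tensor) -> None:
        """Accumulate a batch of activations with shape (n_tokens, n_features)."""
        self.buffer.append(activations.cpu())
        self.tokens_in_buffer += activations.shape[0]
        if self.tokens_in_buffer >= self.flush_every:
            self.flush()

    def flush(self) -> None:
        """Write the current buffer to a numbered chunk file and clear it."""
        if not self.buffer:
            return
        chunk = torch.cat(self.buffer, dim=0)
        torch.save(chunk, self.save_dir / f"chunk_{self.num_chunks:05d}.pt")
        self.num_chunks += 1
        self.buffer.clear()
        self.tokens_in_buffer = 0

    def merge(self) -> torch.Tensor:
        """Flush any remainder, then concatenate all chunks into one file."""
        self.flush()
        paths = sorted(self.save_dir.glob("chunk_*.pt"))
        merged = torch.cat([torch.load(p) for p in paths], dim=0)
        torch.save(merged, self.save_dir / "activations.pt")
        return merged
```

Note that this simple `merge` still materializes the full tensor in memory; if even the merged result is too large, the chunks could instead be left on disk and read lazily, or merged in a streaming fashion.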

kernel-loophole commented 1 week ago

Okay, great. I'll look into that. How can I test this approach?