Open SrGonao opened 2 weeks ago
Currently, we do feature caching by keeping all the activations in memory before saving them (https://github.com/EleutherAI/sae-auto-interp/blob/v0.2/sae_auto_interp/features/cache.py#L208-L242). We could instead flush the activations to disk every X tokens and merge the partial files at the end. This would allow people to do longer runs where the feature activations don't all fit in memory.
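A minimal sketch of the idea, assuming a hypothetical `ChunkedActivationCache` class (the name, API, and `torch.save`-based file format are illustrative, not the actual sae-auto-interp interface): buffer activations, flush to a numbered chunk file once the buffer crosses a token threshold, and concatenate the chunks at the end.

```python
import os
import torch


class ChunkedActivationCache:
    """Hypothetical sketch: instead of holding a full run's activations in
    memory, flush the buffer to disk every `flush_every` tokens and merge
    the partial chunk files at the end."""

    def __init__(self, save_dir: str, flush_every: int = 100_000):
        self.save_dir = save_dir
        self.flush_every = flush_every
        self.buffer = []          # list of (n_tokens, d) activation tensors
        self.buffered_tokens = 0
        self.n_chunks = 0

    def add(self, activations: torch.Tensor):
        self.buffer.append(activations)
        self.buffered_tokens += activations.shape[0]
        if self.buffered_tokens >= self.flush_every:
            self._flush()

    def _flush(self):
        # Write the current buffer out as one numbered chunk file.
        if not self.buffer:
            return
        chunk = torch.cat(self.buffer, dim=0)
        torch.save(chunk, os.path.join(self.save_dir, f"chunk_{self.n_chunks}.pt"))
        self.n_chunks += 1
        self.buffer, self.buffered_tokens = [], 0

    def finalize(self) -> torch.Tensor:
        # Flush any remainder, then merge all chunks back into one tensor.
        self._flush()
        chunks = [
            torch.load(os.path.join(self.save_dir, f"chunk_{i}.pt"))
            for i in range(self.n_chunks)
        ]
        return torch.cat(chunks, dim=0)
```

For testing, one option is to run the same small cache job twice, once with in-memory caching and once with a small `flush_every`, and check the merged activations match.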
Okay, great. I'll look into that. How can I test this approach?
@SrGonao I would love to work on this. Can you provide more details about it?