jbloomAus / SAELens

Training Sparse Autoencoders on Language Models
https://jbloomaus.github.io/SAELens/

How to train SAEs on my own model? #192

Open likangk opened 1 week ago

likangk commented 1 week ago

Hi, thank you for your excellent work on the SAE package. I am interested in training Sparse Autoencoders (SAEs) on my own model (an LLM for genomics), which is not included in TransformerLens. Could you please provide some guidance on how to get started with this? Any advice or suggestions would be greatly appreciated.
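
For context, my understanding from the docs is that training on a model TransformerLens already supports looks roughly like the sketch below. The config class, runner, and field names are my assumptions from the tutorial and may differ between SAELens versions, so please treat this as illustrative rather than exact:

```python
# Rough sketch of the existing SAELens workflow for a TransformerLens-supported
# model (class/field names assumed from the docs; they may differ by version).
from sae_lens import LanguageModelSAERunnerConfig, SAETrainingRunner

cfg = LanguageModelSAERunnerConfig(
    model_name="gpt2",                       # any TransformerLens-supported model
    hook_name="blocks.8.hook_resid_pre",     # activation site the SAE trains on
    hook_layer=8,
    d_in=768,                                # width of the hooked activation
    dataset_path="Skylion007/openwebtext",   # text dataset for activations
    is_dataset_tokenized=False,
    expansion_factor=16,                     # SAE hidden size = expansion_factor * d_in
    training_tokens=100_000_000,
    l1_coefficient=5.0,
    lr=4e-4,
)

sae = SAETrainingRunner(cfg).run()
```

My question is how to adapt this when the underlying model cannot be loaded through TransformerLens at all.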

Thank you for your time and assistance.

Best regards,

jbloomAus commented 3 days ago

This isn't currently supported, but it could be soon. I think the key requirements are: