DavidUdell / sparse_circuit_discovery

Circuit discovery in GPT-2 small, using sparse autoencoding
MIT License

Allow activation collection at MLP-out and attention-out #3

Closed DavidUdell closed 10 months ago

DavidUdell commented 11 months ago

Currently, I collect activations from the residual stream only. It shouldn't be that hard to also take activations of the same shape from MLP-out and attention-out. This would give some insight into what the MLP and attention sublayers are doing during model inference.
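One way to grab same-shape activations at MLP-out and attention-out is a PyTorch forward hook on each sublayer. A minimal sketch below, using a toy stand-in block rather than the repo's actual GPT-2 wrapper (the `Block`, `attn`, and `mlp` names here are illustrative, not the repo's classes):

```python
import torch
from torch import nn

# Toy stand-in for one transformer block: attention and MLP sublayers
# feeding the residual stream (illustrative, not the repo's code).
class Block(nn.Module):
    def __init__(self, d_model=8):
        super().__init__()
        self.attn = nn.Linear(d_model, d_model)  # placeholder for attention
        self.mlp = nn.Linear(d_model, d_model)   # placeholder for MLP

    def forward(self, x):
        x = x + self.attn(x)  # residual add after attention
        x = x + self.mlp(x)   # residual add after MLP
        return x

cache = {}

def make_hook(name):
    # Forward hook: record the sublayer's output before the residual add.
    def hook(module, inputs, output):
        cache[name] = output.detach()
    return hook

block = Block()
block.attn.register_forward_hook(make_hook("attn_out"))
block.mlp.register_forward_hook(make_hook("mlp_out"))

x = torch.randn(2, 4, 8)  # (batch, seq, d_model)
out = block(x)

# Both sublayer activations share the residual-stream shape,
# so the same autoencoder pipeline applies unchanged.
assert cache["attn_out"].shape == x.shape
assert cache["mlp_out"].shape == x.shape
```

Since both cached tensors match the residual-stream shape, the existing autoencoder training code could in principle be pointed at either hook site without modification.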

DavidUdell commented 10 months ago

Deprioritized for now.