Issues
DavidUdell / sparse_circuit_discovery
Circuit discovery in GPT-2 small, using sparse autoencoding
MIT License · 6 stars · 1 fork
- #20 · `interp_tools/utils/graphs` should stdout "nodes dropped {}; fraction of acts diff explained {}" · DavidUdell · closed 8 months ago · 0 comments
- #19 · Batching in `feature_web_webtext`. · DavidUdell · closed 8 months ago · 1 comment
- #18 · Through-looping building on prior graphed nodes, not on independently set nodes. · DavidUdell · closed 8 months ago · 5 comments
- #17 · Sweeps over hyperparameters, once subnetwork probing metrics are up. · DavidUdell · closed 8 months ago · 2 comments
- #16 · Implement Hoyer-Square along with sqrt L^0.5 regularization · DavidUdell · closed 8 months ago · 1 comment
- #15 · Feature/datasets · DavidUdell · closed 9 months ago · 0 comments
- #13 · Feature/web · DavidUdell · closed 9 months ago · 0 comments
- #12 · Semantically meaningful graphs? · DavidUdell · closed 7 months ago · 3 comments
- #11 · Full-scale `feature_web` uses the _ablation layer_ encoder and biases for the _downstream layer_ caching · DavidUdell · closed 9 months ago · 0 comments
- #10 · Loading bars at all the inference bottlenecks · DavidUdell · closed 9 months ago · 0 comments
- #9 · Support subnetwork probing · DavidUdell · closed 8 months ago · 3 comments
- #8 · Validation for top-k input token labels · DavidUdell · closed 9 months ago · 1 comment
- #7 · Have `feature_web` use the cached neuron labels · DavidUdell · closed 9 months ago · 0 comments
- #6 · Add `feature_web` support for arbitrary HF models · DavidUdell · closed 9 months ago · 4 comments
- #5 · Fix colon-only slices from `central_config.yaml` · DavidUdell · closed 9 months ago · 0 comments
- #4 · Refactor `modal_tensor_acceleration` · DavidUdell · closed 9 months ago · 0 comments
- #3 · Allow activation collection at MLP-out and attention-out · DavidUdell · closed 8 months ago · 1 comment
- #2 · Deal with autoencoder training "dead neurons" · DavidUdell · closed 8 months ago · 2 comments
- #1 · Support activation collection from The Pile · DavidUdell · closed 9 months ago · 1 comment