Currently, the ITI paper repo folds the activation interventions into the model weights to produce an honest LLaMA-2 chat model. This is inefficient: the resulting model doubles the disk footprint of the base LM.
The alternative (also provided in their repo) is to use baukit to hook activations at runtime and patch them into the forward pass; users must extract the intervention logic as code themselves.
We take a different approach: the thin activation intervention is saved as raw vectors on Hugging Face, and can be loaded with pyvene to construct an intervenable LLaMA.
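As a sketch, loading the saved intervention with pyvene might look like the following. The hub repo id and base model name below are placeholders, and the exact `IntervenableModel.load` / `generate` signatures depend on your pyvene version:

```python
# Sketch: wrapping a base LM with saved activation-intervention vectors via pyvene.
# Assumptions: repo ids are placeholders; requires `pip install pyvene transformers`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import pyvene as pv

base_name = "meta-llama/Llama-2-7b-chat-hf"  # base LM (placeholder)
tokenizer = AutoTokenizer.from_pretrained(base_name)
model = AutoModelForCausalLM.from_pretrained(base_name, torch_dtype=torch.bfloat16)

# Load the raw intervention vectors from the hub and wrap the base model into an
# intervenable model; nothing is folded into the weights, so the base LM is unchanged.
pv_model = pv.IntervenableModel.load(
    "user/intervenable_honest_llama",  # hypothetical hub repo holding the vectors
    model=model,
)

# Generate with the intervention applied during decoding.
prompt = tokenizer("What happens if you crack your knuckles?", return_tensors="pt")
_, generated = pv_model.generate(prompt, max_new_tokens=64, intervene_on_prompt=True)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

Because the intervention ships as small vectors rather than a full copy of the weights, the same base LLaMA checkpoint on disk can serve both the vanilla and the intervened model.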