chris-aeviator opened this issue 6 months ago
@chris-aeviator thanks for the question. Could you elaborate more on the memorization example? The current setup is to train an intervention to memorize the whole sequence. Could you explain the part about "attending"? Thanks!
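For context, a minimal sketch of what that setup looks like, adapted from the pyreft README. The base model, layer, rank, prompt, and training hyperparameters below are placeholder assumptions, not values from this thread:

```python
import torch, transformers, pyreft

# Rough sketch: train a single low-rank intervention so the model reproduces
# one target sequence (placeholder model/layer/hyperparameters).
model_name = "meta-llama/Llama-2-7b-chat-hf"
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="cuda")
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.unk_token

# One LoReFT intervention on a single layer's residual stream.
reft_config = pyreft.ReftConfig(representations={
    "layer": 15, "component": "block_output", "low_rank_dimension": 4,
    "intervention": pyreft.LoreftIntervention(
        embed_dim=model.config.hidden_size, low_rank_dimension=4)})
reft_model = pyreft.get_reft_model(model, reft_config)
reft_model.set_device("cuda")

# A single (prompt, completion) pair: the completion is the sequence to memorize.
data_module = pyreft.make_last_position_supervised_data_module(
    tokenizer=tokenizer, model=model,
    inputs=["Recall the stored note:"],
    outputs=["<the full sequence to memorize>"])

trainer = pyreft.ReftTrainerForCausalLM(
    model=reft_model, tokenizer=tokenizer,
    args=transformers.TrainingArguments(
        num_train_epochs=100.0, per_device_train_batch_size=1,
        learning_rate=4e-3, output_dir="./tmp", report_to=[]),
    **data_module)
trainer.train()
```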
Let's view the memorized sequence as an 'LLM memory' for a second. If one could selectively write to (and query) these memorized sequences and attend to them, it would be possible to build a long-term memory. The projects mentioned below more or less do a top-k nearest-neighbour search over keys to perform that query, but that is not the point of my issue; I'm looking for controllable queries/writes to 'memory'.
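To be clear about the retrieval style I mean, here is a toy illustration in PyTorch (my own sketch, not the API of any of those projects):

```python
import torch

class AssociativeMemory:
    """Toy key-value memory with top-k nearest-neighbour lookup over keys."""

    def __init__(self, dim: int):
        self.keys = torch.empty(0, dim)
        self.values: list[str] = []

    def write(self, key: torch.Tensor, value: str):
        # Append a (key, value) pair; keys are unit-normalised for cosine search.
        key = torch.nn.functional.normalize(key.reshape(1, -1), dim=-1)
        self.keys = torch.cat([self.keys, key], dim=0)
        self.values.append(value)

    def query(self, query: torch.Tensor, k: int = 3):
        # Cosine similarity against all stored keys, return the top-k values.
        query = torch.nn.functional.normalize(query.reshape(-1), dim=-1)
        scores = self.keys @ query
        topk = torch.topk(scores, k=min(k, len(self.values)))
        return [(self.values[int(i)], scores[int(i)].item()) for i in topk.indices]
```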
Using a LoReFT intervention allows me to attend to the trained material, but it does not seem suited to training on a single piece of text (I might be wrong).
What I mean by attending to the memory: there is no way to interact with a 1D memorized sequence. I cannot mix a user query with the memorized-sequence query; it's either looking up the memorized sequence or not having access to it at all (when it's mixed with a user query).
When memorizing a sequence (1D intervention), is it possible to attend to it, as in "where is GO-> located" (Stanford)?
I'd be interested in using pyreft for 'online learning', similar to the associative-memory approaches proposed in Larimar/MemoryLLM/CameLoT/Memory of Amortized Contexts. Those projects lack implementations, usable interfaces, and the possibility to transfer/load learned behavior that pyreft comes with.
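What attracts me about pyreft here is exactly that transfer/load story: if I understand the README correctly, a trained intervention can be saved and later re-attached to the base model, roughly like this (directory name is just a placeholder, and I may be misusing the API):

```python
# Persist the trained intervention weights (not the base model).
reft_model.set_device("cpu")
reft_model.save(save_directory="./memorized_note_reft")

# Later, e.g. in an "online" setting, re-attach the saved intervention
# to a freshly loaded base model.
reft_model = pyreft.ReftModel.load("./memorized_note_reft", model)
reft_model.set_device("cuda")
```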
As an alternative, would I train and load (hundreds of) partitioned SubLorefts to achieve the same?
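Concretely, I imagine something like the following (purely hypothetical names and paths), where each partition is its own saved ReFT that gets loaded on demand:

```python
# Hypothetical registry of saved interventions, one per memory partition.
memory_slots = {
    "project_notes": "./reft_project_notes",
    "user_profile": "./reft_user_profile",
}

def load_slot(name: str):
    # Attach the saved intervention for one memory partition to the base model.
    return pyreft.ReftModel.load(memory_slots[name], model)

reft_model = load_slot("project_notes")
```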