NarsGPT currently lacks some features of gptONA that would also be compatible with it:
as in gptONA, use sentence embeddings to judge which information is relevant to the question and should be pulled from long-term memory
for forward inference, do not additionally consider items of high use count; only recent and relevant items. As in ONA, usefulness should be a deciding factor in forgetting (if long-term memory storage is configured to be limited, say to 1 million items), not in attention.
once these two aspects are implemented, copy the evaluation suite over from gptONA and check how NarsGPT compares against it.
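The first two items above could be sketched as follows. This is a hypothetical illustration, not NarsGPT's or gptONA's actual implementation: the class name, the capacity parameter, and the toy bag-of-words "embedding" (standing in for a real sentence-embedding model such as SBERT) are all assumptions made for the sketch.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy stand-in for a sentence-embedding model; a real system would
    # call an actual embedding model here (this is an assumption for
    # illustration, not what gptONA uses).
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse vectors.
    dot = sum(v * b.get(k, 0) for k, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class LongTermMemory:
    """Hypothetical LTM sketch: embedding-based relevance for attention,
    use count (usefulness) only for forgetting when capacity is exceeded."""

    def __init__(self, capacity=1_000_000):
        self.capacity = capacity
        self.items = []  # each item: [text, embedding, use_count, last_used]
        self.time = 0

    def add(self, text):
        self.time += 1
        if len(self.items) >= self.capacity:
            # Forgetting: evict the least useful item (lowest use count,
            # as in ONA) -- usefulness decides forgetting, not attention.
            self.items.sort(key=lambda it: it[2])
            self.items.pop(0)
        self.items.append([text, embed(text), 0, self.time])

    def retrieve(self, question, k=3):
        # Attention: pull only items relevant to the question, judged by
        # embedding similarity; use count plays no role here.
        self.time += 1
        q = embed(question)
        scored = sorted(self.items, key=lambda it: cosine(it[1], q),
                        reverse=True)
        for it in scored[:k]:
            it[2] += 1          # bump use count (feeds forgetting only)
            it[3] = self.time
        return [it[0] for it in scored[:k]]
```

With this split, retrieval stays purely relevance-driven while a bounded memory still prefers to forget items that were never useful, matching the division of labor the notes above ask for.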