garyfeng closed this issue 1 year ago.
Let's follow up on `memory.add()`. This lives in the `memory` folder; we'll look at `local.py` first. It takes `memory_to_add` as a string `text`, computes its embedding via `get_ada_embedding(text)`, and concatenates the resulting vector onto the matrix of previous embeddings.
```python
def add(self, text: str):
    """
    Add text to our list of texts, add embedding as row to our
    embeddings-matrix

    Args:
        text: str

    Returns: None
    """
    if 'Command Error:' in text:
        return ""
    self.data.texts.append(text)

    embedding = get_ada_embedding(text)

    vector = np.array(embedding).astype(np.float32)
    vector = vector[np.newaxis, :]
    self.data.embeddings = np.concatenate(
        [
            self.data.embeddings,
            vector,
        ],
        axis=0,
    )

    with open(self.filename, 'wb') as f:
        out = orjson.dumps(
            self.data,
            option=SAVE_OPTIONS
        )
        f.write(out)
    return text
```
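To see how the row-concatenation step works, here is a minimal self-contained sketch. It substitutes a stubbed `fake_embedding()` for `get_ada_embedding()` (which calls the OpenAI API) and uses a small dimension instead of ada's 1536:

```python
import numpy as np

EMBED_DIM = 5  # the real ada embeddings have 1536 dimensions

def fake_embedding(text: str) -> list:
    # Stand-in for get_ada_embedding(): deterministic per run, not meaningful.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(EMBED_DIM).tolist()

# Start with an empty (0, EMBED_DIM) matrix, as LocalCache does on first use.
embeddings = np.zeros((0, EMBED_DIM), dtype=np.float32)
texts = []

for text in ["first memory", "second memory"]:
    texts.append(text)
    vector = np.array(fake_embedding(text)).astype(np.float32)
    vector = vector[np.newaxis, :]  # shape (EMBED_DIM,) -> (1, EMBED_DIM)
    embeddings = np.concatenate([embeddings, vector], axis=0)

print(embeddings.shape)  # (2, 5): one row per stored text
```

The `np.newaxis` step is what makes the 1-D embedding a one-row matrix, so `np.concatenate(..., axis=0)` can stack it under the existing rows.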
The same function in `pinecone.py` is even simpler:
```python
def add(self, data):
    vector = get_ada_embedding(data)
    # no metadata here. We may wish to change that long term.
    resp = self.index.upsert([(str(self.vec_num), vector, {"raw_text": data})])
    _text = f"Inserting data into memory at index: {self.vec_num}:\n data: {data}"
    self.vec_num += 1
    return _text
```
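The upsert call takes a list of `(id, vector, metadata)` tuples, with `vec_num` serving as a running ID. A toy stand-in for the Pinecone index (just a dict, where the real client makes an API call) makes the bookkeeping visible; `FakeIndex` and `ToyMemory` are invented names for this sketch:

```python
class FakeIndex:
    """Minimal stand-in for a Pinecone index; stores vectors in a dict."""
    def __init__(self):
        self.store = {}

    def upsert(self, items):
        # Each item is an (id, vector, metadata) tuple, as in the real client.
        for vec_id, vector, metadata in items:
            self.store[vec_id] = (vector, metadata)
        return {"upserted_count": len(items)}

class ToyMemory:
    def __init__(self):
        self.index = FakeIndex()
        self.vec_num = 0

    def add(self, data):
        vector = [0.0, 1.0, 2.0]  # pretend embedding; real code calls get_ada_embedding(data)
        self.index.upsert([(str(self.vec_num), vector, {"raw_text": data})])
        _text = f"Inserting data into memory at index: {self.vec_num}:\n data: {data}"
        self.vec_num += 1
        return _text

mem = ToyMemory()
mem.add("hello")
mem.add("world")
print(mem.vec_num)              # 2
print(mem.index.store["0"][1])  # {'raw_text': 'hello'}
```

Note that despite the "no metadata here" comment, the code does attach `{"raw_text": data}` as metadata, which is how the original text can later be recovered from a vector match.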
You can use the human-feedback prompt to essentially chat with GPT. For example, in one task where I asked GPT to do research and write a business plan, it kept 'researching' endlessly. When it stopped for human feedback, I said "great work. Now write me the business plan", and it did. You can then say "revise the business plan to provide more details", and so on.
AutoGPT by default stops after each step for human input. You can simply type `y` to go on; internally that response is translated into an instruction to proceed. But if you actually type something else, your response is incorporated into the memory. Of course, if your input is `EXIT`, you kill the agent.

The question is: what does AutoGPT do with your 'human feedback'? It gets added to the memory. Then what? Does AutoGPT then send `memory_to_add` to GPT as context for the next action?
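The interaction behavior described above can be sketched roughly as follows. This is a simplified toy, not AutoGPT's actual loop; `interaction_step` and the return labels are invented for illustration:

```python
def interaction_step(user_input: str, memory: list) -> str:
    """Toy version of the per-step handling of console input."""
    if user_input.strip().lower() == "y":
        # A bare 'y' is translated into a canned 'proceed' instruction.
        return "CONTINUE"
    if user_input.strip().upper() == "EXIT":
        return "EXIT"
    # Anything else is treated as human feedback and stored in memory,
    # where it becomes retrievable context for later steps.
    memory.append(f"Human feedback: {user_input}")
    return "FEEDBACK"

memory = []
print(interaction_step("y", memory))                            # CONTINUE
print(interaction_step("now write the business plan", memory))  # FEEDBACK
print(memory)  # ['Human feedback: now write the business plan']
```

Whether that stored feedback actually reaches GPT depends on the retrieval step: the memory backend returns the most relevant stored texts, and those are what get packed into the next prompt's context.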