alexl83 closed this issue 1 month ago
This error is caused by missing OpenAI configuration.
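For reference, a minimal sketch of the OpenAI-related lines in babyagi's .env (variable names assumed from the project's .env.example; check your copy, since they have changed between versions):

```
# OpenAI settings babyagi reads from .env -- names assumed from .env.example
OPENAI_API_KEY=sk-...             # your OpenAI API key
OPENAI_API_MODEL=gpt-3.5-turbo    # or gpt-4, or a llama option if you run local models
```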
Did you manage to see the results of the tasks?
Nope, it seems that due to LLaMA bugs, the prioritization agent drops babyagi out of the loop. Hope it gets sorted soon so I can experiment :)
Is there any way to extract the prompt flow from an instance? That is, the content that was displayed in the prompt: is there any way to see it after a session is closed?
Is there any progress on this? Seeing "Done" is all well and good, but I'd really like to know what it's done.
Any way to get results out of Pinecone? I'm completely new to vector DBs.
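A minimal sketch of how one might read task results back out of the Pinecone index babyagi writes to. Assumptions (not confirmed by this thread): the older pinecone-client (<3.0) and openai (<1.0) packages from babyagi's era, an ada-002 (1536-dim) index named by TABLE_NAME, and "task"/"result" metadata fields as in tools/results.py. Adjust names to your setup.

```python
# Sketch: query babyagi's Pinecone index and print stored task/result metadata.
import os

import openai
import pinecone

openai.api_key = os.environ["OPENAI_API_KEY"]
pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_ENVIRONMENT"],
)
index = pinecone.Index(os.environ["TABLE_NAME"])  # the index babyagi created

# Pinecone is queried by vector similarity, so embed a query string first.
query_embedding = openai.Embedding.create(
    input="What has been accomplished so far?",
    model="text-embedding-ada-002",
)["data"][0]["embedding"]

response = index.query(query_embedding, top_k=50, include_metadata=True)
for match in sorted(response.matches, key=lambda m: m.score, reverse=True):
    meta = match.metadata or {}
    print(f"TASK:   {meta.get('task', match.id)}")
    print(f"RESULT: {meta.get('result', '')}\n")
```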
Indeed, seeing the results and direction of the tasks it proposes, accomplishes, and completes is needed to the point of seeming obvious. I have it perform 'heuristics' and apply its own rules of thumb to find the best change for solving problems without understanding them, using available resources. As it goes through this, what it discovers, generates, and proposes reveals a hoard of data needed for further exploration and understanding.
@xnoubis @letsgoduke @vmajor @elpreneurAbdo On Linux you can launch it with a quick and dirty fix like: ./babyagi.py | tee -a ./dump.txt
to create a dump file that catches everything it outputs after every completed step. I created a ~20K-line dump file today generating story snippets to use as LLM training/finetuning data. It never managed to write me an entire book, but it did write me hundreds of awesome scenes xD
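If you'd rather not rely on the shell, a minimal sketch of the same idea from inside Python (illustrative only, babyagi has no built-in hook for this):

```python
# Sketch: mirror everything printed to stdout into a log file, the same
# effect as `./babyagi.py | tee -a ./dump.txt` but done in Python.
import sys

class Tee:
    """Write to the real stdout and to a log file at the same time."""
    def __init__(self, path):
        self.file = open(path, "a", encoding="utf-8")
        self.stdout = sys.stdout

    def write(self, data):
        self.stdout.write(data)
        self.file.write(data)

    def flush(self):
        self.stdout.flush()
        self.file.flush()

sys.stdout = Tee("./dump.txt")  # place near the top of babyagi.py
print("everything printed after this point also lands in dump.txt")
```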
Hi! I got a task supposedly completed by babyagi using a llama LLM. Is there a way I can visualize task results? I tried tools/results.py and tools/results_browser.py, all to no avail.
error 1 (fixed by declaring the variable TABLE_NAME in .env; see the sketch at the end of the thread):
error 2:
Thank you! Ale
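For anyone hitting the same thing as error 1 above: the fix is just one line in .env. A minimal sketch, where the value is only an example; use whatever Pinecone index/table name your run should use:

```
# .env -- the variable missing in error 1; the value here is just an example
TABLE_NAME=babyagi-test-table
```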