yoheinakajima / babyagi

MIT License
19.9k stars 2.61k forks

Verbose mode #232

Open DanielusG opened 1 year ago

DanielusG commented 1 year ago

I am running babyAGI with llama.cpp, and since it is slower than other backends, I end up waiting a long time without seeing what the AI is doing. I can only see that my CPU is at 100%.

I would like to see something like a token stream, so I can watch what it is writing and "thinking" about.

francip commented 1 year ago

If the llama-cpp Python bindings have an option to control the visibility of the processing, you can play with that. It shouldn't be that hard to add an .env variable to control it.
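A minimal sketch of what that .env toggle could look like. The variable name `LLAMA_VERBOSE` is hypothetical (babyagi's .env does not define it); the `verbose` keyword on the `Llama` constructor is a real llama-cpp-python flag that controls whether llama.cpp prints load and timing info to stderr.

```python
import os

def env_flag(name: str, default: bool = False) -> bool:
    """Parse a boolean-ish .env-style variable ("1"/"true"/"yes"/"on")."""
    val = os.environ.get(name)
    if val is None:
        return default
    return val.strip().lower() in ("1", "true", "yes", "on")

# Hypothetical variable name for illustration only.
LLAMA_VERBOSE = env_flag("LLAMA_VERBOSE", default=False)

# Guarded import so the snippet also runs where llama-cpp-python
# is not installed; the model path is a placeholder.
try:
    from llama_cpp import Llama
    llm = Llama(model_path="models/7B/ggml-model.bin", verbose=LLAMA_VERBOSE)
except ImportError:
    llm = None
```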

francip commented 1 year ago

I'll take a PR (hint, hint :-p )

DanielusG commented 1 year ago

> If llama-cpp python bindings have an option to control the visibility of the processing, you can play with that. Shouldn't be that hard to add an .env variable to control those.

Yes, I tried to have a look by debugging during inference. However, I saw that the verbose variable was already set to true, and at some point Python made the call into the C library and debugging was no longer possible :(
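Separately from the verbose flag, llama-cpp-python can stream the completion token by token (`stream=True` on the call), which is closer to what the issue asks for: watching the model write instead of waiting for the full answer. A sketch, with the chunk-printing part factored out so it is testable without a model; the model path in the commented usage is a placeholder.

```python
import sys
from typing import Any, Dict, Iterable

def stream_print(chunks: Iterable[Dict[str, Any]], out=sys.stdout) -> str:
    """Print completion chunks as they arrive; return the full text."""
    text = ""
    for chunk in chunks:
        # Each streamed chunk carries one token's worth of text in the
        # OpenAI-style shape llama-cpp-python uses.
        piece = chunk["choices"][0]["text"]
        text += piece
        out.write(piece)
        out.flush()
    out.write("\n")
    return text

# Usage with llama-cpp-python (placeholder model path):
# from llama_cpp import Llama
# llm = Llama(model_path="models/7B/ggml-model.bin")
# stream_print(llm("Q: Name the planets. A:", max_tokens=64, stream=True))
```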

DanielusG commented 1 year ago

Is there any update on this?