Open DanielusG opened 1 year ago
If llama-cpp python bindings have an option to control the visibility of the processing, you can play with that. Shouldn't be that hard to add an .env variable to control those.
I'll take a PR (hint, hint :-p )
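The suggestion above could be sketched roughly like this. The `MODEL_VERBOSE` variable name is hypothetical (not from this thread); the `verbose` keyword on `llama_cpp.Llama` is real and gates llama.cpp's stderr logging:

```python
import os

def env_flag(name: str, default: bool = False) -> bool:
    """Parse a boolean-ish .env variable ("1"/"true"/"yes"/"on" -> True)."""
    val = os.environ.get(name)
    if val is None:
        return default
    return val.strip().lower() in ("1", "true", "yes", "on")

# Hypothetical wiring (variable name is an assumption, not the project's):
#   from llama_cpp import Llama
#   llm = Llama(model_path="model.gguf", verbose=env_flag("MODEL_VERBOSE"))
verbose = env_flag("MODEL_VERBOSE", default=False)
```

With `python-dotenv` already loading the `.env` file, this would make the processing output togglable without touching code.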
> If llama-cpp python bindings have an option to control the visibility of the processing, you can play with that. Shouldn't be that hard to add an .env variable to control those.
Yes, I tried to have a look by debugging during inference. However, I saw that the `verbose` variable was already set to `True`, and at some point Python called into the C library, where debugging was no longer possible :(
Is there any update on this?
I am running babyAGI with llama.cpp, and since it is slower than other models, I wait a long time without any visibility into what the AI is doing. All I can see is that my CPU is at 100%.
I would like something like a write stream, so I can watch what it is writing and reasoning about as it generates.
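For what it's worth, llama-cpp-python can already yield completion chunks incrementally when called with `stream=True`. A minimal sketch of the consumption loop is below; the stub chunks stand in for a real model so the loop itself can be exercised, and the commented-out `Llama` usage is an assumption about how it would be wired up, not code from this project:

```python
import sys
from typing import Iterable, Iterator

def stream_tokens(chunks: Iterable[dict]) -> Iterator[str]:
    """Yield text pieces from completion chunks shaped like
    llama-cpp-python's streaming output: {"choices": [{"text": ...}]}."""
    for chunk in chunks:
        piece = chunk["choices"][0]["text"]
        sys.stdout.write(piece)   # show each token as soon as it arrives
        sys.stdout.flush()
        yield piece

# Hypothetical usage with the real library (untested sketch):
#   from llama_cpp import Llama
#   llm = Llama(model_path="model.gguf")
#   for _ in stream_tokens(llm("Q: why is the sky blue?", stream=True)):
#       pass

# Stub chunks so the loop runs without a model loaded:
fake = [{"choices": [{"text": t}]} for t in ("Hello", ", ", "world")]
text = "".join(stream_tokens(fake))
```

This would give the "see what it is thinking" behavior while the completion is still running, instead of waiting for the full result.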