pgalko / BambooAI

A lightweight library that leverages Language Models (LLMs) to enable natural language interactions, allowing you to source and converse with data.
MIT License
437 stars 47 forks source link

No feedback for the user while correcting code #12

Open pnmartinez opened 4 months ago

pnmartinez commented 4 months ago

Problem

When code corrections are triggered, the user is left waiting without any feedback in the CLI about the current status of the process (image below).

Solution

Streaming some output from the LLM between corrections would prevent the user from thinking the process has halted or crashed (green region in the image below).

This is particularly troublesome when running inference on slow setups (such as local LLMs on laptops, e.g. Llama 3 8B).

[image: CLI screenshot showing the silent gap during code correction]
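One way to address this, sketched below under the assumption that the correction loop can be wrapped: print a short status line before each slow LLM call, so the user sees progress even when the model itself produces no visible output. The function and callable names here (`correct_code_with_feedback`, `run_attempt`) are hypothetical and not part of BambooAI's actual API.

```python
import sys

def correct_code_with_feedback(run_attempt, max_retries=3):
    """Run repeated code-correction attempts, printing a status line
    before each one so the user knows the process is still alive.

    `run_attempt` is a hypothetical callable standing in for one
    execute-and-fix cycle; it takes the current code (or None on the
    first pass) and returns a (code, error) tuple.
    """
    code, error = run_attempt(None)
    attempt = 0
    while error and attempt < max_retries:
        attempt += 1
        # The feedback the issue asks for: emitted *before* the slow
        # LLM call, so the user is never staring at a silent terminal.
        print(
            f"[bambooai] Attempt {attempt}/{max_retries}: "
            "asking the LLM to fix the error...",
            file=sys.stderr,
        )
        code, error = run_attempt(code)
    return code, error
```

Writing the status lines to stderr keeps them out of any piped stdout output.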

pgalko commented 4 months ago

Good point. During that gap it is developing a new version of the code that incorporates the fix. We can easily enable streaming to the terminal just by changing line 510 of the bambooai.py module to `llm_response = self.llm_stream(self.log_and_call_manager, code_messages, agent=agent, chain_id=self.chain_id)`, but that would make the terminal window really busy/cluttered. I will try to think of something.