fafrd / aquarium

AI-controlled Linux Containers
GNU General Public License v3.0

Reduce API cost #6

Closed fafrd closed 1 year ago

fafrd commented 1 year ago

Running this program can be expensive due to OpenAI API costs. Part of the reason is that we send the entire output of previous commands back to the model, and for very long outputs this adds up quickly.

We can probably fix this by only sending the last ~10 or 20 output lines to OpenAI. We should implement that as a flag or something; if it works well, I'll make it the default behavior.
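For reference, here is a minimal sketch of the truncation idea, assuming the project's Go codebase. `lastLines` and the cutoff of 10 are illustrative only, not the actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// lastLines keeps only the trailing n lines of a command's output,
// so only that tail gets sent back to OpenAI instead of the full transcript.
func lastLines(output string, n int) string {
	trimmed := strings.TrimRight(output, "\n")
	lines := strings.Split(trimmed, "\n")
	if len(lines) <= n {
		return trimmed
	}
	return strings.Join(lines[len(lines)-n:], "\n")
}

func main() {
	long := strings.Repeat("Get:1 http://deb.debian.org ... [1234 kB]\n", 50) +
		"Setting up curl (7.88.1) ...\n"
	fmt.Println(lastLines(long, 10))
}
```

The assumption is that the tail of a command's output (exit status, final errors, last progress lines) carries most of the signal the model needs to decide its next step.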

fafrd commented 1 year ago

Implemented in 7e124015da83aba78bc752a05e058023d2c983c3: Add context-mode arg to reduce API usage

The behavior until now has been to send the ENTIRE output of the previous command back to OpenAI. If it was too long, we would split it into chunks, ask OpenAI to process each chunk, then get a summary of all the chunk summaries...
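Roughly, that chunk-and-summarize path looks like the sketch below. This is a hypothetical Go outline, not the repo's actual code; the `summarize` callback stands in for a real OpenAI request:

```go
package main

import (
	"fmt"
	"strings"
)

// summarizeInChunks outlines the "full" path: split a long output into
// fixed-size chunks, summarize each chunk, then summarize the joined
// summaries. Every call to summarize stands in for one OpenAI request.
func summarizeInChunks(output string, chunkSize int, summarize func(string) string) string {
	var summaries []string
	for start := 0; start < len(output); start += chunkSize {
		end := start + chunkSize
		if end > len(output) {
			end = len(output)
		}
		summaries = append(summaries, summarize(output[start:end]))
	}
	if len(summaries) <= 1 {
		return strings.Join(summaries, "")
	}
	// One more request to merge the per-chunk summaries into a single answer.
	return summarize(strings.Join(summaries, "\n"))
}

func main() {
	// Stub summarizer so the sketch runs without an API key.
	stub := func(s string) string { return fmt.Sprintf("<summary of %d bytes>", len(s)) }
	fmt.Println(summarizeInChunks(strings.Repeat("x", 9000), 4000, stub))
}
```

The cost problem is visible in the structure: a long output means one request per chunk plus a final merge request, each carrying a chunk-sized prompt.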

Turns out that logic was overcomplicating things. We can just send the last 10 lines, and that seems to work well.

This is controlled by a new argument, --context-mode, which must be either "full" (the previous summarization behavior) or "partial" (the new truncated behavior). "partial" is the new default.
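A hedged sketch of how the flag could be wired up with Go's standard `flag` package follows; the actual argument parsing in the repo may differ, and `prepareContext` is a hypothetical name:

```go
package main

import (
	"flag"
	"fmt"
	"os"
	"strings"
)

// Hypothetical wiring for --context-mode; defaults to "partial".
var contextMode = flag.String("context-mode", "partial",
	`how much command output to send back: "full" (chunk + summarize) or "partial" (last 10 lines)`)

// prepareContext decides what portion of a command's output goes into the
// next prompt, based on --context-mode.
func prepareContext(output string) string {
	switch *contextMode {
	case "partial":
		lines := strings.Split(strings.TrimRight(output, "\n"), "\n")
		if len(lines) > 10 {
			lines = lines[len(lines)-10:]
		}
		return strings.Join(lines, "\n")
	case "full":
		// Old behavior: the full output would be chunked and summarized here.
		return output
	default:
		fmt.Fprintln(os.Stderr, `--context-mode must be "full" or "partial"`)
		os.Exit(1)
		return ""
	}
}

func main() {
	flag.Parse()
	fmt.Println(prepareContext("line 1\nline 2\nline 3\n"))
}
```

Keeping "full" around as an opt-in means the summarization path is still available for cases where the important information appears early in a long output.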

This should significantly reduce the cost of using this tool... long apt-get outputs are accurately understood at only ~10% of the previous cost.