Josh-XT / AGiXT

AGiXT is a dynamic AI Agent Automation Platform that seamlessly orchestrates instruction management and complex task execution across diverse AI providers. Combining adaptive memory, smart features, and a versatile plugin system, AGiXT delivers efficient and comprehensive AI solutions.
https://AGiXT.com
MIT License

A debug log mechanism for prompts and an example log of a good run using the various commands #190

Closed DIGist closed 1 year ago

DIGist commented 1 year ago

Problem Description

I'm trying to get AgentLLM to work with Vicuna and WizardLM through Oobabooga, and it's very tough to debug without working examples of the exchanges. Having such examples would also help enable development of support for other models.

Proposed Solution

Creation of a debug log button that logs/exports the prompts sent to the model and the model's responses. A default set of examples of the model using the various commands would also help.

Alternatives Considered

No response

Additional Context

No response

Acknowledgements

  • [x] My issue title is concise, descriptive, and in title casing.
  • [x] I have searched the existing issues to make sure this feature has not been requested yet.
  • [x] I have provided enough information for the maintainers to understand and evaluate this request.

Josh-XT commented 1 year ago

It is set up to work well with OpenAI, but it will require some prompt engineering for the smaller models to work well. I had the huggingchat OpenAssistant almost working okay, but a 2000 max-token limit really hurts for this kind of thing.
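To illustrate the max-token pain point, here is a minimal sketch of trimming conversation context to fit a small token budget. All names here (`trim_to_budget`, the whitespace "tokenizer") are hypothetical, not part of AGiXT; a real implementation would use the model's actual tokenizer.

```python
# Hedged sketch: keep only the most recent messages that fit a token
# budget (e.g. 2000). A naive whitespace split stands in for a real
# tokenizer, which would give different counts.
def trim_to_budget(messages, max_tokens: int):
    """Return the newest messages whose combined token count fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):        # walk from newest to oldest
        cost = len(msg.split())           # crude token estimate
        if used + cost > max_tokens:
            break                         # budget exhausted; drop older messages
        kept.append(msg)
        used += cost
    return list(reversed(kept))           # restore chronological order

history = ["first long message here", "second message", "third"]
print(trim_to_budget(history, 3))  # → ['second message', 'third']
```

With a 2000-token ceiling, most of the budget is consumed by the command descriptions and memory context AGiXT injects, which is why small-context models struggle here.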

Josh-XT commented 1 year ago

As far as a debug log, you can add a print to the AgentLLM.py file in the run function.

Right before the instruct call happens, put this there:

print(f"PROMPT: {prompt}")
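Building on that one-line print, a slightly fuller sketch of what a debug-log hook could look like is below. The `run_with_debug` wrapper and the `instruct` callable are illustrative stand-ins, not AGiXT's actual API; the idea is simply to capture both sides of the exchange so runs can be compared across models.

```python
# Hedged sketch: log both the prompt sent to the model and its response,
# rather than only printing the prompt. `instruct` is a placeholder for
# whatever callable actually queries the model.
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
logger = logging.getLogger("agent_debug")

def run_with_debug(instruct, prompt: str) -> str:
    logger.debug("PROMPT: %s", prompt)      # what was sent to the model
    response = instruct(prompt)
    logger.debug("RESPONSE: %s", response)  # what the model sent back
    return response

# Usage with a dummy model call standing in for a real provider:
if __name__ == "__main__":
    echo_model = lambda p: f"echo: {p}"
    run_with_debug(echo_model, "Write a haiku")
```

Pointing the logger at a file handler instead of stderr would give the exportable log the issue asks for.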