Wladastic / mini_autogpt

🤖 Mini-AutoGPT: A compact, Telegram-connected AI demonstrating the capabilities of local LLMs. An autonomous, experimental proof of concept.

add json dumb extractor #4

Closed ketsapiwiq closed 2 months ago

ketsapiwiq commented 2 months ago

Llama3 will often prefix the JSON with additional explanation text, or append explanations after it. A grammar constraint could solve this, but explanations preceding the JSON might actually help the model produce accurate responses (chain of thought).
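For reference, a minimal sketch of such a "dumb" extractor (the name `extract_json` is illustrative, not the helper from this repo): it scans for the outermost balanced braces and, if that span fails to parse, retries from the next opening brace.

```python
import json


def extract_json(text: str):
    """Pull the first balanced JSON object out of an LLM reply.

    The model may wrap the JSON in explanation text before and/or
    after it, so scan for matching braces and try to parse that span.
    """
    start = text.find("{")
    while start != -1:
        depth = 0
        for end in range(start, len(text)):
            if text[end] == "{":
                depth += 1
            elif text[end] == "}":
                depth -= 1
                if depth == 0:
                    candidate = text[start : end + 1]
                    try:
                        return json.loads(candidate)
                    except json.JSONDecodeError:
                        break  # not valid JSON, try the next opening brace
        start = text.find("{", start + 1)
    return None


# Example: explanation before and after the JSON payload
reply = 'Sure! Here is my plan:\n{"command": "send_message", "text": "hi"}\nLet me know.'
print(extract_json(reply))  # {'command': 'send_message', 'text': 'hi'}
```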

ketsapiwiq commented 2 months ago

Just saw the same code...

Wladastic commented 2 months ago

Using a grammar is a good idea, but I never got it to run. The LLMs often start generating wrong content in the right format and make up new params that never existed. Did you get them to accept the grammar constraints?
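As a middle ground without grammar support, one could validate the parsed dict against the expected command schema and retry when the model invents parameters. A rough sketch (the command names and keys below are illustrative, not the repo's actual definitions):

```python
# Illustrative command schema, not mini_autogpt's actual command list
EXPECTED_PARAMS = {
    "send_message": {"text"},
    "web_search": {"query"},
}


def validate_command(parsed: dict) -> bool:
    """Reject replies that invent commands or parameters."""
    command = parsed.get("command")
    if command not in EXPECTED_PARAMS:
        return False
    args = parsed.get("args", {})
    # any argument key the model made up fails validation
    return set(args) <= EXPECTED_PARAMS[command]
```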