TransformerOptimus / SuperAGI

<⚡️> SuperAGI - A dev-first open source autonomous AI agent framework. Enabling developers to build, manage & run useful autonomous agents quickly and reliably.
https://superagi.com/
MIT License

Should LMQL/Microsoft Guidance be used in the prompt architecture of SuperAGI and be part of the repository? #58

Open · ResoluteStoic opened 1 year ago

ResoluteStoic commented 1 year ago

The potential benefits are listed below:

  1. By enforcing syntactic constraints on model output (e.g., requiring that a given format is followed or that certain words or phrases are avoided), restructuring SuperAGI's prompts with LMQL should greatly reduce the JSON errors that pop up (this will become even more relevant as capabilities are added); see the sketch after the links below.
  2. LMQL maintains or improves accuracy on various downstream tasks while substantially reducing computation, which translates into cost savings of more than 10% on pay-per-use APIs.

Introductory research paper: https://arxiv.org/pdf/2212.06094.pdf
Website: https://lmql.ai/
Docs: https://docs.lmql.ai/
Example: https://github.com/rumpfmax/Multi-GPT/tree/master/multigpt/lmql_utils
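
For concreteness, here is a minimal sketch of what a constrained query could look like. This is not SuperAGI code: the query name, goal text, and command names are hypothetical, and the syntax follows the LMQL docs (it may differ between LMQL versions).

```python
import lmql

# Minimal sketch: constrain the model so the "command" field can only be
# one of a fixed set of values, eliminating unparseable tool names.
@lmql.query
def next_action(goal):
    '''lmql
    "Goal: {goal}\n"
    "Next command: [COMMAND]" where COMMAND in ["google_search", "write_to_file", "browse_website"]
    "Argument (single line): [ARG]" where STOPS_AT(ARG, "\n")
    # Return a dict that is structured by construction, no JSON parsing needed.
    return {"command": COMMAND, "argument": ARG.strip()}
    '''
```

Because the constraint is enforced during decoding, the result is well-formed regardless of how capable the underlying model is.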

Cptsnowcrasher commented 1 year ago

Currently it is not on the roadmap, but can you give specific use cases where it would help?

Cptsnowcrasher commented 1 year ago

The challenge with these standards is that they are currently in flux as their technical contours are still being defined, so we don't want to add one as a core library feature until they mature.

SlistInc commented 1 year ago

Given some time has passed I would also highly recommend LMQL which is actively developed and quite mature.

Use case / reason for these libraries

In simple terms, rather than returning a long unstructured text, these libraries restrict an LLM to "filling out" elements of a given template according to rules. In practice this allows the model to, for instance, output well-formed JSON even when the underlying LLM is not that smart. A particularly helpful feature I personally use often in LMQL is the ability to constrain a specific output field to a set of given options, which would immensely help tool selection; the sketch below illustrates the same idea.
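
To illustrate the "fill out a template" idea, here is a minimal sketch using Guidance's 2023-era handlebars-style API (the library has since been redesigned). The tool names and fields are hypothetical, not SuperAGI's actual schema.

```python
import guidance

# 2023-era Guidance API: set a default LLM, then define a template program.
guidance.llm = guidance.llms.OpenAI("text-davinci-003")

# The model can only fill in the {{...}} slots; the surrounding JSON
# skeleton is fixed text, so the output always parses as JSON.
program = guidance('''{
    "tool": "{{select 'tool' options=tools}}",
    "reason": "{{gen 'reason' stop='"'}}"
}''')

result = program(tools=["google_search", "write_to_file", "browse_website"])
print(result["tool"])  # always one of the three allowed tools
```

The key design point is that structure lives in the template, not in the model's goodwill: only the variable slots are generated, and `select` restricts a slot to an explicit option list.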

TL;DR: These libraries make LLM outputs much more stable and predictable, even when the underlying model is not that smart. This is great for running small local models.