Closed Pwuts closed 11 months ago
I've read into it and I'm more than happy to get started on it. Where do you see it having the most benefit to the project as of now?
@AbTrax by restructuring the prompts using LMQL we might be able to greatly reduce the number of JSON errors.
Most if not all of the needed changes can be done in autogpt/prompts/. Start there, try to minimize changes to files outside that directory, and ping us on Discord when you have an update :)
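To illustrate the idea, here is a rough sketch of how an LMQL query could constrain the model's output so that malformed JSON becomes impossible by construction. This is an untested illustration of LMQL's constraint syntax as of mid-2023, and the command names and prompt text are hypothetical, not Auto-GPT's actual prompts:

```
# Hypothetical sketch: constrain the next command to a fixed set
# instead of asking the model to emit free-form JSON and hoping it parses.
argmax
    "You are Auto-GPT. Choose your next command.\n"
    "Command: [COMMAND]\n"
    "Argument: [ARG]\n"
from
    "openai/text-davinci-003"
where
    COMMAND in ["google", "browse_website", "write_to_file"] and
    STOPS_AT(ARG, "\n")
```

Because the `COMMAND` variable can only decode to one of the listed values, the usual "invalid JSON in LLM response" failure mode disappears for that field.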
The question is: can this also be used to identify truly relevant data points that could serve as memory metadata?
Hello, is the current architecture implementing LMQL?
@khongminhtn no, see the description under Current Status
This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.
This issue was closed automatically because it has been stale for 10 days with no activity.
Current status
While we consider LMQL to be a powerful tool to supercharge interaction with text-only LLMs, it currently doesn't suit all needs of the project.
The landscape of LLMs is rapidly evolving, with OpenAI announcing only yesterday that their GPT-3.5 and GPT-4 models are being enhanced with a function calling interface. This feature has the potential to solve multiple current issues in Auto-GPT. LMQL doesn't currently have support for this feature, and adding support may be a challenge as other LLMs do not provide a similar interface.
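For context, the function calling interface has the model return a command name plus arguments as a JSON object conforming to a caller-supplied schema, rather than free-form text. A minimal sketch of what consuming such a response could look like, assuming a hypothetical `write_to_file` command (the schema shape follows OpenAI's June 2023 chat API; the command and helper names here are illustrative, not Auto-GPT's actual code):

```python
import json

# Hypothetical schema for one Auto-GPT command. The model is instructed to
# fill in arguments matching this JSON Schema instead of emitting free text.
COMMAND_SCHEMA = {
    "name": "write_to_file",  # hypothetical command name
    "description": "Write text to a file on disk.",
    "parameters": {
        "type": "object",
        "properties": {
            "filename": {"type": "string", "description": "Target file path"},
            "text": {"type": "string", "description": "Content to write"},
        },
        "required": ["filename", "text"],
    },
}

def parse_function_call(message: dict) -> tuple[str, dict]:
    """Extract the command name and arguments from an assistant message
    containing a function call (shape per the June 2023 chat API)."""
    call = message["function_call"]
    return call["name"], json.loads(call["arguments"])

# Simulated assistant reply, in the shape the API returns:
reply = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "write_to_file",
        "arguments": '{"filename": "notes.txt", "text": "hello"}',
    },
}

name, args = parse_function_call(reply)
print(name, args)  # write_to_file {'filename': 'notes.txt', 'text': 'hello'}
```

The `arguments` field is still a JSON string and can occasionally be malformed, but in practice the model is fine-tuned to match the schema, which addresses much of the JSON-error problem LMQL was being considered for.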
We need high observability of interaction with the LLM, which LMQL currently does not offer.
Resources