hudson-ai / minml

Remove the yammering from LLM outputs.
MIT License

Add support for dependent types. #3

Open notarealdeveloper opened 7 months ago

notarealdeveloper commented 7 months ago

Thanks to your help with #2, the gpts library now has a PR that adds a typed ask_for method using guidance and minml.

However, when @rskottap and I added tests for that method, we found it producing outputs that never appeared in the (untyped) free-response outputs.

Specifically, when asking the models directly, without the guidance of guidance, we never got a negative value as the answer to any of the questions.

Here's what we get with the minml-based methods:

[screenshot: negative-numbers]

Any ideas on how to add a "positive" constraint to the int or float types, or more generally how to add other dependent types like "strings that match a regex"?

hudson-ai commented 7 months ago

Annotated types!

e.g.

from typing import Annotated
from pydantic import StringConstraints
from minml import gen_type

# Named to avoid shadowing the builtin `type`
code_str = Annotated[str, StringConstraints(pattern=r'[A-Z]\d')]

Then pass that type to gen_type.

Note that

  1. I only implemented string constraints -- numeric constraints (e.g. a positive bound on ints or floats) are left as an exercise for the reader in the short term. In the long term, I may implement them myself.
  2. guidance currently has very limited regex support, e.g. see the issue here: https://github.com/guidance-ai/guidance/issues/530
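
For the numeric case in point 1, minml doesn't implement those constraints yet, but the underlying Annotated mechanism is easy to sketch with the standard library alone. The `Gt` and `Regex` markers below are hypothetical stand-ins (annotated-types' `Gt` and pydantic's `StringConstraints` play these roles for real), not minml's API:

```python
# Standard-library-only sketch of carrying constraints in Annotated metadata.
# `Gt` and `Regex` are hypothetical markers, not part of minml or pydantic.
import re
from dataclasses import dataclass
from typing import Annotated, get_args, get_origin

@dataclass(frozen=True)
class Gt:
    gt: float  # value must be strictly greater than this

@dataclass(frozen=True)
class Regex:
    pattern: str  # value must fully match this pattern

PositiveInt = Annotated[int, Gt(0)]
CodeStr = Annotated[str, Regex(r'[A-Z]\d')]

def validate(tp, value):
    """Check `value` against the base type and any constraint markers in `tp`."""
    if get_origin(tp) is Annotated:
        base, *metadata = get_args(tp)
    else:
        base, metadata = tp, []
    if not isinstance(value, base):
        raise TypeError(f"expected {base.__name__}, got {type(value).__name__}")
    for marker in metadata:
        if isinstance(marker, Gt) and not value > marker.gt:
            raise ValueError(f"{value} is not > {marker.gt}")
        if isinstance(marker, Regex) and not re.fullmatch(marker.pattern, value):
            raise ValueError(f"{value!r} does not match {marker.pattern!r}")
    return value

validate(PositiveInt, 7)      # ok
validate(CodeStr, "A1")       # ok
# validate(PositiveInt, -3)   # would raise ValueError
```

A constrained-generation library would enforce markers like these at decode time rather than after the fact, but the metadata plumbing is the same.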
notarealdeveloper commented 7 months ago

Fantastic! We'll look into it and see what we can do.

hudson-ai commented 7 months ago

I find it somewhat concerning that you're seeing negative numbers in these contexts after adding guidance/minml to your workflow. Can you post a minimal example of a prompt that gives an erroneous negative? I would like to understand what's going on here :)