instructor-ai / instructor

structured outputs for llms
https://python.useinstructor.com/
MIT License

Instructor, The Most Popular Library for Simple Structured Outputs

Instructor is the most popular Python library for working with structured outputs from large language models (LLMs), boasting over 600,000 monthly downloads. Built on top of Pydantic, it provides a simple, transparent, and user-friendly API to manage validation, retries, and streaming responses. Get ready to supercharge your LLM workflows with the community's top choice!


Want your logo on our website?

If your company uses Instructor heavily, we'd love to have your logo on our website! Please fill out this form.

Key Features

  * Response Models: specify Pydantic models to define the structure of your LLM outputs
  * Retry Management: easily configure the number of retry attempts for your requests
  * Validation: ensure LLM responses conform to your expectations with Pydantic validation
  * Streaming Support: work with partial objects and iterables of models effortlessly
  * Multiple Providers: use the same API across OpenAI, Anthropic, Cohere, Gemini, litellm, and more

Get Started in Minutes

Install Instructor with a single command:

pip install -U instructor

Now, let's see Instructor in action with a simple example:

import instructor
from pydantic import BaseModel
from openai import OpenAI

# Define your desired output structure
class UserInfo(BaseModel):
    name: str
    age: int

# Patch the OpenAI client
client = instructor.from_openai(OpenAI())

# Extract structured data from natural language
user_info = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=UserInfo,
    messages=[{"role": "user", "content": "John Doe is 30 years old."}],
)

print(user_info.name)
#> John Doe
print(user_info.age)
#> 30
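
Because response models are plain Pydantic models, validators come along for free, and Instructor re-asks the model when validation fails. A minimal sketch reusing the client from above; the validator and its rule are illustrative:

from pydantic import BaseModel, field_validator

class ValidatedUser(BaseModel):
    name: str
    age: int

    @field_validator("age")
    @classmethod
    def age_is_reasonable(cls, v: int) -> int:
        # Reject obviously bad extractions; the error message is fed
        # back to the model on the next attempt.
        if v < 0:
            raise ValueError("age must be non-negative")
        return v

user = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=ValidatedUser,
    max_retries=3,  # re-ask the model up to 3 times if validation fails
    messages=[{"role": "user", "content": "John Doe is 30 years old."}],
)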

Using Hooks

Instructor provides a powerful hooks system that allows you to intercept and log various stages of the LLM interaction process. Here's a simple example demonstrating how to use hooks:

import instructor
from openai import OpenAI
from pydantic import BaseModel

class UserInfo(BaseModel):
    name: str
    age: int

# Initialize the OpenAI client with Instructor
client = instructor.from_openai(OpenAI())

# Define hook functions
def log_kwargs(**kwargs):
    print(f"Function called with kwargs: {kwargs}")

def log_exception(exception: Exception):
    print(f"An exception occurred: {str(exception)}")

client.on("completion:kwargs", log_kwargs)
client.on("completion:error", log_exception)

user_info = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=UserInfo,
    messages=[
        {"role": "user", "content": "Extract the user name: 'John is 20 years old'"}
    ],
)

"""
{
        'args': (),
        'kwargs': {
            'messages': [
                {
                    'role': 'user',
                    'content': "Extract the user name: 'John is 20 years old'",
                }
            ],
            'model': 'gpt-4o-mini',
            'tools': [
                {
                    'type': 'function',
                    'function': {
                        'name': 'UserInfo',
                        'description': 'Correctly extracted `UserInfo` with all the required parameters with correct types',
                        'parameters': {
                            'properties': {
                                'name': {'title': 'Name', 'type': 'string'},
                                'age': {'title': 'Age', 'type': 'integer'},
                            },
                            'required': ['age', 'name'],
                            'type': 'object',
                        },
                    },
                }
            ],
            'tool_choice': {'type': 'function', 'function': {'name': 'UserInfo'}},
        },
    }
"""

print(f"Name: {user_info.name}, Age: {user_info.age}")
#> Name: John, Age: 20

This example demonstrates:

  1. A pre-execution hook that logs all kwargs passed to the function.
  2. An exception hook that logs any exceptions that occur during execution.

The hooks provide valuable insights into the function's inputs and any errors, enhancing debugging and monitoring capabilities.
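
Handlers can also be detached once you no longer need them. A minimal sketch, assuming the off() and clear() methods from the hooks API (check the hooks documentation for your version):

# Detach a single handler from an event (assumes the hooks API exposes off())
client.off("completion:kwargs", log_kwargs)

# Remove every handler registered for an event (assumes clear())
client.clear("completion:error")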

Using Anthropic Models

import instructor
from anthropic import Anthropic
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

client = instructor.from_anthropic(Anthropic())

# note that client.chat.completions.create will also work
resp = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    system="You are a world class AI that excels at extracting user data from a sentence",
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
    response_model=User,
)

assert isinstance(resp, User)
assert resp.name == "Jason"
assert resp.age == 25

Using Cohere Models

Make sure to install cohere and set the CO_API_KEY environment variable: export CO_API_KEY=<YOUR_COHERE_API_KEY>.

pip install cohere

import instructor
import cohere
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

client = instructor.from_cohere(cohere.Client())

resp = client.chat.completions.create(
    model="command-r-plus",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
    response_model=User,
)

assert isinstance(resp, User)
assert resp.name == "Jason"
assert resp.age == 25

Using Gemini Models

Make sure you install the Google AI Python SDK and set the GOOGLE_API_KEY environment variable with your API key. Gemini tool calling also requires jsonref to be installed.

pip install google-generativeai jsonref

import instructor
import google.generativeai as genai
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

# genai.configure(api_key=os.environ["API_KEY"]) # alternative API key configuration
client = instructor.from_gemini(
    client=genai.GenerativeModel(
        model_name="models/gemini-1.5-flash-latest",  # model defaults to "gemini-pro"
    ),
    mode=instructor.Mode.GEMINI_JSON,
)
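
The patched client is then used like the other providers. A minimal sketch, assuming the same messages.create interface shown in the Anthropic example; note that the model is already bound to the GenerativeModel, so it is not passed again:

resp = client.messages.create(
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
    response_model=User,
)

assert isinstance(resp, User)
assert resp.name == "Jason"
assert resp.age == 25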

Alternatively, you can call Gemini through the OpenAI client. You'll have to set up gcloud, get set up on Vertex AI, and install the Google Auth library.

pip install google-auth

import google.auth
import google.auth.transport.requests
import instructor
from openai import OpenAI
from pydantic import BaseModel

creds, project = google.auth.default()
auth_req = google.auth.transport.requests.Request()
creds.refresh(auth_req)

# Pass the Vertex endpoint and authentication to the OpenAI SDK
PROJECT = 'PROJECT_ID'
LOCATION = (
    'LOCATION'  # https://cloud.google.com/vertex-ai/generative-ai/docs/learn/locations
)
base_url = f'https://{LOCATION}-aiplatform.googleapis.com/v1beta1/projects/{PROJECT}/locations/{LOCATION}/endpoints/openapi'

client = instructor.from_openai(
    OpenAI(base_url=base_url, api_key=creds.token), mode=instructor.Mode.JSON
)

# JSON mode is required when calling Gemini through the OpenAI SDK
class User(BaseModel):
    name: str
    age: int

resp = client.chat.completions.create(
    model="google/gemini-1.5-flash-001",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
    response_model=User,
)

assert isinstance(resp, User)
assert resp.name == "Jason"
assert resp.age == 25

Using LiteLLM

import instructor
from litellm import completion
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

client = instructor.from_litellm(completion)

resp = client.chat.completions.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
    response_model=User,
)

assert isinstance(resp, User)
assert resp.name == "Jason"
assert resp.age == 25

Types are inferred correctly

This was the dream of Instructor, but because we patched the OpenAI client, it wasn't possible to get typing to work well. Now, with the new client, typing works well! We've also added a few create_* methods to make it easier to create iterables and partials, and to access the original completion.

Calling create

import openai
import instructor
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

client = instructor.from_openai(openai.OpenAI())

user = client.chat.completions.create(
    model="gpt-4-turbo-preview",
    messages=[
        {"role": "user", "content": "Create a user"},
    ],
    response_model=User,
)

Now if you use an IDE, you can see the type is correctly inferred.


Handling async: await create

This will also work correctly with asynchronous clients.

import openai
import instructor
from pydantic import BaseModel

client = instructor.from_openai(openai.AsyncOpenAI())

class User(BaseModel):
    name: str
    age: int

async def extract():
    return await client.chat.completions.create(
        model="gpt-4-turbo-preview",
        messages=[
            {"role": "user", "content": "Create a user"},
        ],
        response_model=User,
    )

Notice that because we simply return the result of the create call, the extract() function is correctly typed as returning a User.
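
To actually run it, drive the coroutine with asyncio as usual:

import asyncio

user = asyncio.run(extract())
assert isinstance(user, User)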


Returning the original completion: create_with_completion

You can also return the original completion object alongside the parsed model:

import openai
import instructor
from pydantic import BaseModel

client = instructor.from_openai(openai.OpenAI())

class User(BaseModel):
    name: str
    age: int

user, completion = client.chat.completions.create_with_completion(
    model="gpt-4-turbo-preview",
    messages=[
        {"role": "user", "content": "Create a user"},
    ],
    response_model=User,
)

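The second element is the raw ChatCompletion object from the OpenAI SDK, so the usual response fields are available alongside the parsed model, for example:

assert isinstance(user, User)
print(completion.usage.total_tokens)  # token accounting from the raw response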

Streaming Partial Objects: create_partial

To handle streams, we still support Iterable[T] and Partial[T], but to simplify type inference, we've also added create_iterable and create_partial methods!

import openai
import instructor
from pydantic import BaseModel

client = instructor.from_openai(openai.OpenAI())

class User(BaseModel):
    name: str
    age: int

user_stream = client.chat.completions.create_partial(
    model="gpt-4-turbo-preview",
    messages=[
        {"role": "user", "content": "Create a user"},
    ],
    response_model=User,
)

for user in user_stream:
    print(user)
    #> name=None age=None
    #> name=None age=None
    #> name=None age=None
    #> name=None age=None
    #> name=None age=None
    #> name=None age=None
    #> name='John Doe' age=None
    #> name='John Doe' age=None
    #> name='John Doe' age=None
    #> name='John Doe' age=30
    #> name='John Doe' age=30

Notice that the inferred type is now Generator[User, None]


Streaming Iterables: create_iterable

When we want to extract multiple objects, create_iterable gives us an iterable of models.

import openai
import instructor
from pydantic import BaseModel

client = instructor.from_openai(openai.OpenAI())

class User(BaseModel):
    name: str
    age: int

users = client.chat.completions.create_iterable(
    model="gpt-4-turbo-preview",
    messages=[
        {"role": "user", "content": "Create 2 users"},
    ],
    response_model=User,
)

for user in users:
    print(user)
    #> name='John Doe' age=30
    #> name='Jane Doe' age=28


Evals

We invite you to contribute evals in pytest as a way to monitor the quality of the OpenAI models and the instructor library. To get started, check out the evals for Anthropic and OpenAI and contribute your own evals in the form of pytest tests, as sketched below. These evals will be run once a week and the results will be posted.
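
For illustration, an eval in this style might look like the following sketch (the model, prompts, and expected values are hypothetical):

import instructor
import openai
import pytest
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

client = instructor.from_openai(openai.OpenAI())

@pytest.mark.parametrize(
    "text, expected_name, expected_age",
    [
        ("Jason is 25 years old.", "Jason", 25),
        ("Sarah turned 30 last week.", "Sarah", 30),
    ],
)
def test_user_extraction(text: str, expected_name: str, expected_age: int):
    # Each case checks that extraction returns the expected structured values
    user = client.chat.completions.create(
        model="gpt-4o-mini",
        response_model=User,
        messages=[{"role": "user", "content": text}],
    )
    assert user.name == expected_name
    assert user.age == expected_age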

Contributing

If you want to help, check out some of the issues marked as good-first-issue or help-wanted found here. They could be anything from code improvements, a guest blog post, or a new cookbook.

CLI

We also provide CLI functionality for added convenience; see the documentation for details.

License

This project is licensed under the terms of the MIT License.

Contributors