masci / banks

LLM prompt language based on Jinja. Banks provides tools and functions to build prompt texts and chat messages from generic blueprints. It allows attaching metadata to prompts to ease their management, and versioning is a first-class citizen. Banks also provides ways to store prompts on disk along with their metadata.
MIT License

feat: Add function calling from `{% completion %}` #20

Closed: masci closed this 1 month ago

masci commented 1 month ago

Working example:

import json

from banks import Prompt

def get_current_weather(location: str):
    """Get the current weather in a given location"""
    if "tokyo" in location.lower():
        return json.dumps({"location": "Tokyo", "temperature": "10", "unit": "celsius"})
    elif "san francisco" in location.lower():
        return json.dumps({"location": "San Francisco", "temperature": "72", "unit": "fahrenheit"})
    elif "paris" in location.lower():
        return json.dumps({"location": "Paris", "temperature": "22", "unit": "celsius"})
    else:
        return json.dumps({"location": location, "temperature": "unknown"})

p = Prompt("""
{% set response %}
{% completion model="gpt-3.5-turbo-0125" %}
    {% chat role="user" %}
           What's the weather like in {{ query }}?
    {% endchat %}
    {{ get_current_weather | tool }}
{% endcompletion %}
{% endset %}

{# the variable 'response' contains the result #}

{{ response }}
""")

print(p.text({"query": "Paris", "get_current_weather": get_current_weather}))
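For context, the `tool` filter exposes the local function to the model as a callable tool; when the model answers with a tool call, the matching Python function gets invoked with the JSON-decoded arguments. A minimal pure-Python sketch of that dispatch step (the `tool_call` payload shape and the `registry` dict are illustrative assumptions, not banks internals):

```python
import json

def get_current_weather(location: str):
    """Get the current weather in a given location (stubbed)."""
    if "paris" in location.lower():
        return json.dumps({"location": "Paris", "temperature": "22", "unit": "celsius"})
    return json.dumps({"location": location, "temperature": "unknown"})

# Hypothetical tool-call payload, shaped like an OpenAI-style function call
tool_call = {
    "name": "get_current_weather",
    "arguments": json.dumps({"location": "Paris"}),
}

# Dispatch: look up the requested function and call it with decoded arguments
registry = {"get_current_weather": get_current_weather}
result = registry[tool_call["name"]](**json.loads(tool_call["arguments"]))
print(result)
```

The real implementation also has to feed `result` back to the model as a tool message so the final completion can use it, which is what the `{% completion %}` tag handles end to end.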
coveralls commented 1 month ago


coverage: 95.748% (+1.2%) from 94.541% when pulling 2cdbace275eb682c2031dae2773c63d1165b1827 on massi/function-calling into 05194bb309577884933f36f81532818edcae2dbf on main.