chatlas

https://posit-dev.github.io/chatlas/

The goal of chatlas is to provide a user-friendly wrapper around the APIs for large language model (LLM) providers. chatlas is intentionally minimal -- making it easy to get started, while also supporting table-stakes features like streaming output, structured data extraction, function (tool) calling, images, async, and more.

(Looking for something similar to chatlas, but in R? Check out elmer!)

Install

chatlas isn't yet on PyPI, but you can install it from GitHub:

pip install git+https://github.com/posit-dev/chatlas

Model providers

chatlas supports a variety of model providers. See the API reference for more details on each provider (like how to manage credentials).

It also supports enterprise cloud providers, such as Azure OpenAI (ChatAzureOpenAI()) and AWS Bedrock (ChatBedrockAnthropic()).

Model choice

If you're using chatlas inside your organisation, you'll be limited to what your org allows, which is likely to be one provided by a big cloud provider (e.g., ChatAzureOpenAI() and ChatBedrockAnthropic()). If you're using chatlas for your own personal exploration, you have a lot more freedom, so a good place to start is a first-party provider like ChatOpenAI() or ChatAnthropic().

Using chatlas

You can chat via chatlas in several different ways, depending on whether you are working interactively or programmatically. They all start with creating a new chat object:

from chatlas import ChatOpenAI

chat = ChatOpenAI(
    model="gpt-4o-mini",
    system_prompt="You are a friendly but terse assistant.",
)

Chat objects are stateful: they retain the context of the conversation, so each new query can build on the previous ones. This is true regardless of which of the various ways of chatting you use.
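
For example, a follow-up question can lean on context established by an earlier one (a minimal sketch; the questions are illustrative):

chat.chat("What's the tallest mountain in the world?")
# Mount Everest ...
chat.chat("How tall is it?")
# "it" resolves to Mount Everest, thanks to the retained context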

Interactive console

From a chat instance, you can start an interactive, multi-turn conversation in the console (via .console()) or in a browser (via .app()).

chat.console()
Entering chat console. Press Ctrl+C to quit.

?> Who created Python?

Python was created by Guido van Rossum. He began development in the late 1980s and released the first     
version in 1991. 

?> Where did he develop it?

Guido van Rossum developed Python while working at Centrum Wiskunde & Informatica (CWI) in the            
Netherlands.     

The chat console is useful for quickly exploring the capabilities of the model, especially when you've customized the chat object with tool integrations (covered later).

The chat app is similar to the chat console, but it runs in your browser. It's useful if you need more interactive capabilities like easy copy-paste.

chat.app()
(Screenshot: a web app for chatting with an LLM via chatlas.)

Again, keep in mind that the chat object retains state, so when you enter the chat console, any previous interactions with that chat object are still part of the conversation, and any interactions you have in the chat console will persist even after you exit back to the Python prompt.

The .chat() method

For a more programmatic approach, you can use the .chat() method to ask a question and get a response. By default, the response prints to a rich console as it streams in:

chat.chat("What preceding languages most influenced Python?")
Python was primarily influenced by ABC, with additional inspiration from C,
Modula-3, and various other languages.

To get the full response as a string, use the built-in str() function. Optionally, you can also suppress the rich console output by setting echo="none":

response = chat.chat("Who is Posit?", echo="none")
print(str(response))

As we'll cover in later articles, echo="all" can also be useful for debugging, as it shows additional information, such as tool calls.
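
For example, to see everything that's sent to and from the model (a quick sketch; the prompt is illustrative):

chat.chat("What's the weather like today?", echo="all")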

The .stream() method

If you want to do something with the response in real-time (i.e., as it arrives in chunks), use the .stream() method. This method returns an iterator that yields each chunk of the response as it arrives:

response = chat.stream("Who is Posit?")
for chunk in response:
    print(chunk, end="")

The .stream() method can also be useful if you're building a chatbot or other interactive application that needs to display responses as they arrive.
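
Since async is among chatlas' supported features, streaming also composes with asyncio. The sketch below assumes a .stream_async() counterpart to .stream() that yields chunks as an async iterator:

import asyncio

async def main():
    # Assumption: .stream_async() mirrors .stream(), yielding chunks asynchronously
    response = await chat.stream_async("Who is Posit?")
    async for chunk in response:
        print(chunk, end="")

asyncio.run(main())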

Vision (Image Input)

Ask questions about image(s) with content_image_file() and/or content_image_url():

from chatlas import content_image_url

chat.chat(
    content_image_url("https://www.python.org/static/img/python-logo.png"),
    "Can you explain this logo?"
)
The Python logo features two intertwined snakes in yellow and blue,
representing the Python programming language. The design symbolizes...

The content_image_url() function takes a URL to an image file and sends that URL directly to the API. The content_image_file() function takes a path to a local image file and encodes it as a base64 string to send to the API. Note that by default, content_image_file() automatically resizes the image to fit within 512x512 pixels; set the resize parameter to "high" if higher resolution is needed.
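
For local files, the pattern is the same (a sketch; the file path is hypothetical):

from chatlas import content_image_file

chat.chat(
    content_image_file("python-logo.png", resize="high"),  # hypothetical local file
    "Can you explain this logo?"
)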

Conversation history

Remember that regardless of how we interact with the model, the chat instance retains the conversation history, which you can access at any time:

chat.turns()

Each turn represents either a user's input or a model's response. It holds all the available information about the content and metadata of that turn, which can be useful for debugging, logging, or building more complex conversational interfaces.
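
For instance, a simple way to log a conversation (a sketch; it assumes each turn carries a role attribute and prints a readable summary):

for turn in chat.turns():
    print(f"{turn.role}: {turn}")  # assumes Turn exposes a role and a printable form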

For cost and efficiency reasons, you may want to alter the conversation history. Currently, the main way to do this is via .set_turns():

# Remove all but the last two turns
chat.set_turns(chat.turns()[-2:])

Learn more

If you're new to the world of LLMs, you might want to read the Get Started guide, which covers some basic concepts and terminology.

Once you're comfortable with the basics, you can explore more advanced topics in the other articles.

The API reference is also a useful overview of all the tooling available in chatlas, including starting examples and detailed descriptions.