The goal of chatlas is to provide a user-friendly wrapper around the APIs for large language model (LLM) providers. chatlas is intentionally minimal, making it easy to get started while also supporting table-stakes features like streaming output, structured data extraction, function (tool) calling, images, async, and more.
(Looking for something similar to chatlas, but in R? Check out elmer!)
chatlas isn't yet on PyPI, but you can install it from GitHub:

```bash
pip install git+https://github.com/posit-dev/chatlas
```
chatlas supports a variety of model providers. See the API reference for more details (like managing credentials) on each provider:
- `ChatAnthropic()`
- `ChatGithub()`
- `ChatGoogle()`
- `ChatGroq()`
- `ChatOllama()`
- `ChatOpenAI()`
- `ChatPerplexity()`

It also supports the following enterprise cloud providers:

- `ChatBedrockAnthropic()`
- `ChatAzureOpenAI()`
If you're using chatlas inside your organisation, you'll be limited to what your org allows, which is likely to be one provided by a big cloud provider (e.g. `ChatAzureOpenAI()` or `ChatBedrockAnthropic()`). If you're using chatlas for your own personal exploration, you have a lot more freedom, so we recommend starting with one of the following:
- `ChatOpenAI()` or `ChatAnthropic()` are both good places to start. `ChatOpenAI()` defaults to GPT-4o-mini, which is good and relatively cheap. You might want to try `model = "gpt-4o"` for more demanding tasks, or `model = "o1-mini"` if you want to force complex reasoning. `ChatAnthropic()` is similarly good and well priced. It defaults to Claude 3.5 Sonnet, which we have found to be the best for writing code.
- Try `ChatGoogle()` if you want to put a lot of data in the prompt. This provider defaults to the Gemini 1.5 Flash model, which supports 1 million tokens, compared to 200k for Claude 3.5 Sonnet and 128k for GPT-4o-mini.
- Use Ollama with `ChatOllama()` to run models on your own computer. The biggest models you can run locally aren't as good as the state-of-the-art hosted models, but they also don't share your data and are effectively free.
You can chat via chatlas in several different ways, depending on whether you are working interactively or programmatically. They all start with creating a new chat object:
```python
from chatlas import ChatOpenAI

chat = ChatOpenAI(
    model="gpt-4o-mini",
    system_prompt="You are a friendly but terse assistant.",
)
```
Chat objects are stateful: they retain the context of the conversation, so each new query can build on the previous ones. This is true regardless of which of the various ways of chatting you use.
From a `chat` instance, you can start an interactive, multi-turn conversation in the console (via `.console()`) or in a browser (via `.app()`).
```python
chat.console()
```

```
Entering chat console. Press Ctrl+C to quit.

?> Who created Python?
Python was created by Guido van Rossum. He began development in the late 1980s
and released the first version in 1991.

?> Where did he develop it?
Guido van Rossum developed Python while working at Centrum Wiskunde &
Informatica (CWI) in the Netherlands.
```
The chat console is useful for quickly exploring the capabilities of the model, especially when you've customized the chat object with tool integrations (covered later).
The chat app is similar to the chat console, but it runs in your browser. It's useful if you need more interactive capabilities like easy copy-paste.
```python
chat.app()
```
Again, keep in mind that the chat object retains state, so when you enter the chat console, any previous interactions with that chat object are still part of the conversation, and any interactions you have in the chat console will persist even after you exit back to the Python prompt.
### The `.chat()` method

For a more programmatic approach, you can use the `.chat()` method to ask a question and get a response. By default, the response prints to a rich console as it streams in:
chat.chat("What preceding languages most influenced Python?")
Python was primarily influenced by ABC, with additional inspiration from C,
Modula-3, and various other languages.
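Because the chat object is stateful, a follow-up can build on the previous answer without restating it. For example:

```python
# Relies on the context of the previous turn
chat.chat("Which of those languages influenced it most?")
```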
To get the full response as a string, use the built-in `str()` function. Optionally, you can also suppress the rich console output by setting `echo="none"`:
```python
response = chat.chat("Who is Posit?", echo="none")
print(str(response))
```
As we'll cover in later articles, `echo="all"` can also be useful for debugging, as it shows additional information, such as tool calls.
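For example, you might re-run a query with full output when something looks off (a sketch using the `echo` option described above):

```python
# echo="all" shows everything, including tool calls, as it streams in
chat.chat("Who is Posit?", echo="all")
```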
### The `.stream()` method

If you want to do something with the response in real time (i.e., as it arrives in chunks), use the `.stream()` method. This method returns an iterator that yields each chunk of the response as it arrives:
```python
response = chat.stream("Who is Posit?")
for chunk in response:
    print(chunk, end="")
```
The `.stream()` method can also be useful if you're building a chatbot or other interactive application that needs to display responses as they arrive.
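For example, here's a minimal sketch that collects the streamed chunks and reassembles the full response (coercing each chunk to text, in case chunks aren't plain strings):

```python
# Collect each chunk as it arrives, then join into the full response
chunks = []
for chunk in chat.stream("Who is Posit?"):
    chunks.append(str(chunk))
print("".join(chunks))
```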
Ask questions about image(s) with `content_image_file()` and/or `content_image_url()`:
```python
from chatlas import content_image_url

chat.chat(
    content_image_url("https://www.python.org/static/img/python-logo.png"),
    "Can you explain this logo?"
)
```

```
The Python logo features two intertwined snakes in yellow and blue,
representing the Python programming language. The design symbolizes...
```
The `content_image_url()` function takes a URL to an image file and sends that URL directly to the API. The `content_image_file()` function takes a path to a local image file and encodes it as a base64 string to send to the API. Note that, by default, `content_image_file()` automatically resizes the image to fit within 512x512 pixels; set the `resize` parameter to `"high"` if higher resolution is needed.
Remember that regardless of how we interact with the model, the `chat` instance retains the conversation history, which you can access at any time:
```python
chat.turns()
```
Each turn represents either a user's input or a model's response. It holds all the available information about the content and metadata of that turn, which can be useful for debugging, logging, or building more complex conversational interfaces.
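For example, a rough transcript printer might look like this (a sketch; it assumes each turn exposes `role` and `text` attributes, so check the API reference for the exact names):

```python
# Print a plain-text transcript of the conversation so far
# (assumes turns expose `role` and `text` attributes)
for turn in chat.turns():
    print(f"{turn.role}: {turn.text}")
```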
For cost and efficiency reasons, you may want to alter the conversation history. Currently, the main way to do this is with `.set_turns()`:
```python
# Remove all but the last two turns
chat.set_turns(chat.turns()[-2:])
```
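Similarly, passing an empty list should reset the conversation history entirely (a sketch based on the same API):

```python
# Start over: drop all accumulated turns
chat.set_turns([])
```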
If you're new to the world of LLMs, you might want to read the Get Started guide, which covers some basic concepts and terminology.
Once you're comfortable with the basics, you can explore more advanced topics:
The API reference is also a useful overview of all the tooling available in chatlas, including starting examples and detailed descriptions.