This repository provides a blueprint and full toolkit for a LangGraph-based agent service architecture. It includes a LangGraph agent, a FastAPI service to serve it, a client to interact with the service, and a Streamlit app that uses the client to provide a chat interface.
This project offers a template for you to easily build and run your own agents using the LangGraph framework. It demonstrates a complete setup from agent definition to user interface, making it easier to get started with LangGraph-based projects.
🎥 Watch a video walkthrough of the repo and app
Run directly in Python
```sh
# At least one LLM API key is required
echo 'OPENAI_API_KEY=your_openai_api_key' >> .env

# uv is recommended but "pip install ." also works
pip install uv
uv sync --frozen

# "uv sync" creates .venv automatically
source .venv/bin/activate
python src/run_service.py

# In another shell
source .venv/bin/activate
streamlit run src/streamlit_app.py
```
Run with Docker
```sh
echo 'OPENAI_API_KEY=your_openai_api_key' >> .env
docker compose watch
```
The repository is structured as follows:
- `src/agents/research_assistant.py`: Defines the main LangGraph agent
- `src/agents/llama_guard.py`: Defines the LlamaGuard content moderation
- `src/agents/models.py`: Configures available models based on ENV
- `src/agents/agents.py`: Mapping of all agents provided by the service
- `src/schema/schema.py`: Defines the protocol schema
- `src/service/service.py`: FastAPI service to serve the agents
- `src/client/client.py`: Client to interact with the agent service
- `src/streamlit_app.py`: Streamlit app providing a chat interface

AI agents are increasingly being built as more explicitly structured and tightly controlled Compound AI Systems, with careful attention to the cognitive architecture. At the time of this repo's creation, LangGraph seems like the most advanced open source framework for building such systems, offering a high degree of control as well as support for features like concurrent execution, cycles in the graph, streaming results, built-in observability, and the rich ecosystem around LangChain.
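To make that concrete, here is a minimal sketch of a LangGraph agent in the spirit of the agents in `src/agents`. This is illustrative only, not the repo's actual code; the model choice and node names are assumptions, and `src/agents/research_assistant.py` is the real reference:

```python
# Minimal LangGraph agent sketch (illustrative; see src/agents/ for the real ones)
from langgraph.graph import StateGraph, MessagesState, START, END
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")  # any chat model works here

def call_model(state: MessagesState) -> dict:
    # Run the LLM over the accumulated messages and append its reply
    return {"messages": [model.invoke(state["messages"])]}

graph = StateGraph(MessagesState)
graph.add_node("model", call_model)
graph.add_edge(START, "model")
graph.add_edge("model", END)
agent = graph.compile()

# agent.invoke({"messages": [("user", "hello")]}) returns the updated state
```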
I've spent a decent amount of time building with LangChain over the past year and experienced some of the commonly cited pain points. In building this out with LangGraph I found a few similar issues, but overall I like the direction and I'm happy with my choice to use it.
With that said, there are several other interesting projects in this space that are worth calling out, and I hope to spend more time building with them soon.
Clone the repository:
```sh
git clone https://github.com/JoshuaC215/agent-service-toolkit.git
cd agent-service-toolkit
```
Set up environment variables:
Create a `.env` file in the root directory. At least one LLM API key or configuration is required. See the `.env.example` file for the full list of available environment variables, including model provider API keys, header-based authentication, LangSmith tracing, testing and development modes, and an OpenWeatherMap API key.
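As a rough illustration of how key-gated configuration can work (the model names and the `GROQ_API_KEY` variable below are assumptions; only `OPENAI_API_KEY` is shown in this README, and `src/agents/models.py` holds the repo's actual logic):

```python
import os

# Hypothetical sketch: only expose models whose provider API key is set.
# See src/agents/models.py for the real behavior.
available_models: dict[str, str] = {}
if os.getenv("OPENAI_API_KEY"):
    available_models["gpt-4o-mini"] = "openai"
if os.getenv("GROQ_API_KEY"):  # assumed variable name; check .env.example
    available_models["llama-3.1-70b"] = "groq"

if not available_models:
    raise RuntimeError("At least one LLM API key is required")
```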
You can now run the agent service and the Streamlit app locally, either with Docker or just using Python. The Docker setup is recommended for simpler environment setup and immediate reloading of the services when you make changes to your code.
This project includes a Docker setup for easy development and deployment. The `compose.yaml` file defines two services: `agent_service` and `streamlit_app`. The `Dockerfile` for each is in its respective directory.
For local development, we recommend using `docker compose watch`. This feature allows for a smoother development experience by automatically updating your containers when changes are detected in your source code.
Make sure you have Docker and Docker Compose (>=2.23.0) installed on your system.
Build and launch the services in watch mode:
```sh
docker compose watch
```
The services will now automatically update when you make changes to your code. Note that if you change the `pyproject.toml` or `uv.lock` files, you will need to rebuild the services by running `docker compose up --build`.

Access the Streamlit app by navigating to `http://localhost:8501` in your web browser. The agent service API will be available at `http://localhost:80`, and you can also use the OpenAPI docs at `http://localhost:80/redoc`.

Use `docker compose down` to stop the services.
This setup allows you to develop and test your changes in real-time without manually restarting the services.
You can also run the agent service and the Streamlit app locally without Docker, just using a Python virtual environment.
Create a virtual environment and install dependencies:
```sh
pip install uv
uv sync --frozen
source .venv/bin/activate
```
Run the FastAPI server:
```sh
python src/run_service.py
```
In a separate terminal, run the Streamlit app:
```sh
streamlit run src/streamlit_app.py
```
Open your browser and navigate to the URL provided by Streamlit (usually `http://localhost:8501`).
The agent supports LangGraph Studio, a new IDE for developing agents in LangGraph.
You can simply install LangGraph Studio, add your `.env` file to the root directory as described above, and then launch LangGraph Studio pointed at the root directory. Customize `langgraph.json` as needed.
Currently, the tests need to be run using the local development setup (without Docker). To run the tests for the agent service:
Ensure you're in the project root directory and have activated your virtual environment.
Install the development dependencies and pre-commit hooks:
```sh
pip install uv
uv sync --frozen
pre-commit install
```
Run the tests using pytest:
```sh
pytest
```
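If you want to add your own tests, a sketch of what a service test might look like is below. The import path, agent name, and payload shape here are assumptions rather than the repo's actual test code; see the existing tests for the real patterns:

```python
# Hedged sketch of a service test using FastAPI's TestClient.
from fastapi.testclient import TestClient

from service.service import app  # assumed: the FastAPI app object

client = TestClient(app)

def test_invoke_returns_200():
    # "research-assistant" and the JSON body are hypothetical examples
    response = client.post(
        "/research-assistant/invoke",
        json={"message": "What is LangGraph?"},
    )
    assert response.status_code == 200
```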
To customize the agent for your own use case:

1. Add your new agent to the `src/agents` directory. You can copy `research_assistant.py` or `chatbot.py` and modify it to change the agent's behavior and tools.
2. Add your agent to the `agents` dictionary in `src/agents/agents.py` (see the sketch after this list). Your agent can be called by `/<your_agent_name>/invoke` or `/<your_agent_name>/stream`.
3. Adjust `src/streamlit_app.py` to match your agent's capabilities.
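As a rough sketch of step 2 (the exact shape of the `agents` dictionary and the import paths are assumptions; check `src/agents/agents.py` for the real structure):

```python
# src/agents/agents.py — hypothetical sketch, not the repo's actual code
from agents.research_assistant import research_assistant
from agents.my_agent import my_agent  # your new agent module

agents = {
    "research-assistant": research_assistant,
    # Served at /my-agent/invoke and /my-agent/stream once registered
    "my-agent": my_agent,
}
```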
The repo includes a generic `src/client/client.AgentClient` that can be used to interact with the agent service. This client is designed to be flexible and can be used to build other apps on top of the agent. It supports both synchronous and asynchronous invocations, and streaming and non-streaming requests.
See the `src/run_client.py` file for full examples of how to use the `AgentClient`. A quick example:
```python
from client import AgentClient

client = AgentClient()

response = client.invoke("Tell me a brief joke?")
response.pretty_print()
# ================================== Ai Message ==================================
#
# A man walked into a library and asked the librarian, "Do you have any books on Pavlov's dogs and Schrödinger's cat?"
# The librarian replied, "It rings a bell, but I'm not sure if it's here or not."
```
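Since the client also advertises async and streaming support, usage might look like the following sketch. The `astream` method name and the shape of what it yields are assumptions based on that claim; check `src/client/client.py` and `src/run_client.py` for the actual interface:

```python
import asyncio

from client import AgentClient

async def main() -> None:
    client = AgentClient()
    # Hypothetical: iterate over chunks as the agent streams them back
    async for chunk in client.astream("Tell me a brief joke?"):
        print(chunk)

asyncio.run(main())
```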
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.