OllaLab-Lean is a lean stack designed to help both novice and experienced developers rapidly set up and begin working on LLM-based projects. It achieves this through simplified environment configuration and a cohesive set of tools for Research and Development (R&D). The project includes several key components.
Monitoring and Logging tools such as Elasticsearch, Kibana, Grafana, and Prometheus.
OllaLab-Lean supports most LLM and Data Science R&D activities. A few sample use cases are:
[!IMPORTANT] Please refer to the project's Security Note for the basic threat model and recommended proper uses.
You should be familiar with the command line interface and have Docker or Podman, Git, and other supporting CLI tools installed. If you plan to use NVIDIA GPUs, you should have installed all NVIDIA supporting software. We will provide detailed pre-installation instructions focusing on the NVIDIA supporting stack at a later time.
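In the meantime, a quick sanity check (a sketch, not part of the official instructions) is to verify the driver on the host and confirm Docker can pass the GPU through to a container:

```bash
# Verify the NVIDIA driver is installed and the GPU is visible on the host
nvidia-smi

# Verify Docker can pass the GPU through to containers
# (the CUDA image tag below is only an example; any recent tag works)
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```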
For installing Docker, please check out Installing Docker Desktop on Windows or Installing Docker Desktop on Mac.
For installing Podman, please check out Podman Desktop Download and follow Podman's installation instructions to properly set up both Podman and Podman Compose.
[!IMPORTANT] You need to copy the env.sample file to .env and set the default passwords for the services listed in that file. On a Mac, you may have to open a terminal, run "cp env.sample .env", and then "nano .env" to edit the file.
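On any Unix-like shell (Linux, macOS, or Windows under WSL2), the copy-and-edit step looks like this:

```bash
# Create your .env from the sample, then set the default passwords in it
cp env.sample .env
nano .env
```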
[!NOTE] If you just want to install the Streamlit Apps without installing the whole OllaLab stack, you can go to OllaLab - Streamlit Apps for more guidance.
The installation steps below were tested on the AMD64 architecture with a 12 GB NVIDIA GPU, using Docker Compose for Windows on WSL2.
Test for Docker and Docker Compose with the following commands
docker --version
docker info
docker-compose --version
Test for Podman and Podman Compose with the following commands
podman version
podman compose version
To clean up Docker
docker system prune -f
docker rmi -f $(docker images -a -q)
To clean up Podman
podman container prune
podman image prune
podman volume prune
podman network prune
There is also podman system prune, which removes all unused data in one command.
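For example, using the standard podman system prune flags:

```bash
# Remove all unused containers, networks, images, and (with --volumes) volumes
podman system prune --all --volumes
```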
git clone https://github.com/GSA/FedRAMP-OllaLab-Lean.git
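Then change into the project directory (assuming the default directory name created by git clone):

```bash
cd FedRAMP-OllaLab-Lean
```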
If you are using Podman, you can skip this step.
If you are using Docker Compose:
docker-compose build
docker-compose build --no-cache
NOTE: If you use podman-compose in place of docker-compose, you will need to explicitly configure podman-compose to interpret the Dockerfiles as the Docker format, not the standard OCI format, so that CMD-SHELL, HEALTHCHECK, and SHELL directives are processed properly. To do so, run commands like the ones below.
podman compose --podman-build-args='--format docker' build
podman compose --podman-build-args='--format docker' build --no-cache
The commands below are for Docker Compose. If you use Podman, substitute "docker-compose" with "podman compose".
Run the stack with Default Services only (recommended for the leanest stack)
docker-compose up
Run the stack with Default Services and Monitoring Services
docker-compose --profile monitoring up
Run the stack with Default Services and Logging Services
docker-compose --profile logging up
Run the stack with Default Services, Monitoring Services, and Logging Services
docker-compose --profile monitoring --profile logging up
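To run any of the above in the background, append the -d flag; to stop the stack, use the matching down command (a sketch, assuming the same profiles you started with):

```bash
# Start detached (runs in the background)
docker-compose --profile monitoring --profile logging up -d

# Stop and remove the stack's containers (named volumes are preserved)
docker-compose --profile monitoring --profile logging down
```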
Your running stack should look similar to this
In Docker Desktop
In Podman Desktop
If you are using Docker Desktop, you can click on the Ollama instance and open the "Exec" tab to reach the instance CLI. If you are using Podman Desktop, choose the Containers tab, click the "Ollama" container, and then choose the "Terminal" tab.
In the CLI, run:
ollama pull llama3.1:8b
A successful model pull looks similar to this in Podman
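Alternatively, you can run the pull from a host terminal without opening the Desktop UI. This is a minimal sketch; the container name "ollama" is an assumption, so check docker ps (or podman ps) for the actual name:

```bash
# Pull the model by exec-ing into the Ollama container from the host
# ("ollama" here is an assumed container name; verify with `docker ps`)
docker exec -it ollama ollama pull llama3.1:8b
```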
After it is done, run the following command and verify that llama3.1:8b was successfully pulled.
ollama list
You may pull other models and interact with them via the CLI. However, llama3.1:8b must be pulled for the provided Streamlit apps to work. In the next release, the Streamlit apps will ask you which LLMs you want to work with.
Go to localhost:8501/Simple_Chat to chat with the LLM.
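You can also query the model programmatically through Ollama's REST API. The sketch below assumes the stack publishes Ollama on its default port 11434; adjust the port if your docker-compose.yml maps it differently:

```bash
# Send a single non-streaming prompt to the Ollama API from the host
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.1:8b", "prompt": "Hello!", "stream": false}'
```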
Git submodules allow you to keep a Git repository as a subdirectory of another Git repository. A submodule is simply a reference to another repository at a particular snapshot in time. Currently, OllaLab leverages submodules for the sample datasets in streamlit_app/app/datasets.
After cloning this repository, you can initialize and update the submodules with
git submodule update --init --recursive
If the submodules get updates, you can pull changes in the submodules and then commit those changes in your main repository.
cd submodules/some-repo
git pull origin main
cd ../..
git add .
git commit -m "Updated some-repo"
git push origin main
To add a submodule, use the following command:
git submodule add https://github.com/other-user/other-repo.git local_path/to/submodule
git submodule update --init --recursive
OllaLab_Lean/
├── docker-compose.yml        # Main Docker Compose file
├── env.sample                # Sample .env file; copy to .env and set proper values
├── images/                   # Relevant charts and images
├── jupyter_lab/
│   ├── Dockerfile
│   ├── notebooks/
│   │   └── *.ipynb           # Curated notebooks for LLM R&D
│   └── requirements.txt
├── prompt-templates/         # Prompt templates for LLM-driven R&D
├── streamlit_app/
│   ├── Dockerfile
│   ├── app/
│   │   ├── main.py           # Streamlit app home
│   │   ├── data_unificator/  # Data Unificator app folder
│   │   └── pages/
│   │       ├── Data_Unificator   # App to merge data source files
│   │       ├── folder_chat/      # Stores folders created by the Folder Chat app
│   │       ├── Folder_Chat.py    # App to chat with a folder's content
│   │       ├── API_Chat.py       # App to chat with requested API data (under development)
│   │       ├── Simple_Chat.py    # App to chat
│   │       └── Git_Chat.py       # App to chat with a Git repository
│   └── requirements.txt
├── ollama/                   # LLM management and inference API
├── monitoring/
│   ├── prometheus/
│   └── grafana/
├── logging/
│   ├── elasticsearch/
│   ├── logstash/
│   └── kibana/
├── tests/
├── scripts/
│   └── firewall_rules.sh     # Host-level firewall configurations
└── .gitignore
We welcome contributions to OllaLab-Lean, especially on the Planned Items!
Please see our Contributing Guide for more information on how to get started.
Please note that this project is released with a Contributor Code of Conduct. By participating in this project and/or cloning the project, you agree to abide by its terms.