atoma-network / atoma-proxy

Atoma's proxy service repository
Apache License 2.0

Atoma Proxy infrastructure



Introduction

Atoma Proxy is a core component of the Atoma Network. This repository contains the proxy infrastructure that coordinates and optimizes the network's distributed compute resources. By deploying an Atoma proxy, you can:

  1. Help manage and distribute AI workloads efficiently across the network;
  2. Contribute to the network's reliability and performance;
  3. Support the development of a more resilient and scalable AI infrastructure.

Community Links

Deploying an Atoma Proxy

Install the Sui client locally

The first step in setting up an Atoma proxy is installing the Sui client locally. Please refer to the Sui installation guide for more information.

Once the Sui client is installed locally, you need to connect to a Sui RPC node in order to interact with the Sui blockchain and, through it, the Atoma smart contract. Please refer to the Connect to a Sui Network guide for more information.

You then need to create a wallet and fund it with some testnet SUI. Please refer to the Sui wallet guide for more information. If you plan to run the Atoma proxy on Sui's testnet, you can request testnet SUI tokens by following the docs.
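The wallet steps above can be sketched with the Sui CLI; the commands below are standard `sui client` subcommands, but verify the exact flags against your installed Sui version:

```shell
# Create a new ed25519 address (prints the address and a recovery phrase)
sui client new-address ed25519

# Register the testnet RPC endpoint and switch to it
sui client new-env --alias testnet --rpc https://fullnode.testnet.sui.io:443
sui client switch --env testnet

# Request testnet SUI from the faucet for the active address
sui client faucet

# Confirm the wallet now holds gas coins
sui client gas
```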

Docker Deployment

Prerequisites

Quickstart

  1. Clone the repository
git clone https://github.com/atoma-network/atoma-proxy.git
cd atoma-proxy
  2. Configure environment variables by creating a .env file, using .env.example as a reference:
POSTGRES_DB=<YOUR_DB_NAME>
POSTGRES_USER=<YOUR_DB_USER>
POSTGRES_PASSWORD=<YOUR_DB_PASSWORD>

TRACE_LEVEL=info
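The POSTGRES_* values above feed the database URL that the `atoma_state` section of config.toml expects; a minimal sketch of how the pieces combine, with example credentials standing in for your own:

```shell
# Example values only; substitute your own credentials
POSTGRES_DB=atoma
POSTGRES_USER=atoma
POSTGRES_PASSWORD=secret

# Compose the connection URL the proxy expects; "db" is the Postgres
# service hostname on the compose network, 5432 the default Postgres port
ATOMA_STATE_DATABASE_URL="postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}"
echo "$ATOMA_STATE_DATABASE_URL"
```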
  3. Configure config.toml, using config.example.toml as a template:
[atoma_sui]
http_rpc_node_addr = "https://fullnode.testnet.sui.io:443"                              # Current RPC node address for testnet
atoma_db = "0x741693fc00dd8a46b6509c0c3dc6a095f325b8766e96f01ba73b668df218f859"         # Current ATOMA DB object ID for testnet
atoma_package_id = "0x0c4a52c2c74f9361deb1a1b8496698c7e25847f7ad9abfbd6f8c511e508c62a0" # Current ATOMA package ID for testnet
toma_package_id = "0xd992f4c5bfb563a9a1ce503edb6bf518f20c52363ca4a18715f251eb2bdae3e0"  # Current TOMA package ID for testnet
request_timeout = { secs = 300, nanos = 0 }                                             # Reference value; tune as needed
max_concurrent_requests = 10                                                            # Reference value; tune as needed
limit = 100                                                                             # Reference value; tune as needed
sui_config_path = "~/.sui/sui_config/client.yaml"                                       # Default path to the Sui client configuration file on Linux or macOS
sui_keystore_path = "~/.sui/sui_config/sui.keystore"                                    # Default path to the Sui keystore file on Linux or macOS
cursor_path = "./cursor.toml"

[atoma_state]
# URL of the PostgreSQL database; replace the POSTGRES_* placeholders with the
# values from your .env file. This MUST match the `ATOMA_STATE_DATABASE_URL`
# variable value in the .env file.
database_url = "postgresql://POSTGRES_USER:POSTGRES_PASSWORD@db:5432/POSTGRES_DB"

[atoma_service]
service_bind_address = "0.0.0.0:8080" # Address to bind the service to
password = "password" # Password for the service
models = [
  "meta-llama/Llama-3.2-3B-Instruct",
  "meta-llama/Llama-3.2-1B-Instruct",
] # Models supported by proxy
revisions = ["main", "main"] # Revisions of the models above, in the same order
hf_token = "<YOUR_HF_TOKEN>" # Hugging Face API token; required to access gated models
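Before starting the containers, it can help to confirm that the configured RPC endpoint answers; a quick sketch using the public Sui JSON-RPC method `sui_getLatestCheckpointSequenceNumber` (requires network access):

```shell
# Query the testnet fullnode configured above; a JSON response with a
# "result" field indicates the RPC endpoint is reachable
curl -s -X POST https://fullnode.testnet.sui.io:443 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"sui_getLatestCheckpointSequenceNumber","params":[]}'
```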
  4. Create required directories
mkdir -p data logs
  5. Start the containers with the desired inference services
# Build and start all services
docker compose up --build

# Or run in detached mode
docker compose up -d --build
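After `docker compose up -d --build`, the proxy may take a moment to become healthy. A small retry helper for polling the health endpoint (a sketch; port 8080 matches the `service_bind_address` configured above):

```shell
# Retry a command up to N times with a one-second delay between attempts;
# returns non-zero if every attempt fails
wait_for() {
  local attempts=$1; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# Example: poll the proxy health endpoint for up to ~30 seconds
# wait_for 30 curl -sf http://localhost:8080/health
```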

Container Architecture

The deployment consists of two main services: the atoma-proxy service itself and a db container running PostgreSQL for the proxy's state.

Service URLs

Atoma Proxy: http://localhost:8080 (the service_bind_address configured above)

Volume Mounts

The compose file mounts the data and logs directories created during setup into the proxy container.

Managing the Deployment

Check service status:

docker compose ps

View logs:

# All services
docker compose logs

# Specific service
docker compose logs atoma-proxy

# Follow logs
docker compose logs -f

Stop services:

docker compose down

Troubleshooting

  1. Check if services are running:
docker compose ps
  2. Test the Atoma Proxy service:
curl http://localhost:8080/health
  3. View container networks:
docker network ls
docker network inspect atoma-network
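When the health-check curl above fails, the HTTP status code narrows down the cause; a small sketch (000 means the connection itself failed):

```shell
# Print only the HTTP status code returned by the health endpoint
code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8080/health)
echo "health status: $code"

# 000: no connection at all (service down or port blocked);
# anything other than 200: inspect the proxy logs
if [ "$code" != "200" ]; then
  echo "proxy unhealthy; try: docker compose logs --tail 50 atoma-proxy"
fi
```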

Security Considerations

  1. Firewall Configuration
# Allow Atoma Proxy port
sudo ufw allow 8080/tcp
  2. HuggingFace Token
  3. Sui Configuration
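A slightly tighter version of the firewall rule above, assuming only a known address range should reach the proxy (the CIDR below is a documentation placeholder; substitute your own):

```shell
# Allow the proxy port only from a trusted subnet (placeholder range)
sudo ufw allow from 203.0.113.0/24 to any port 8080 proto tcp

# Deny the port for everyone else
sudo ufw deny 8080/tcp
```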