meta-llama / llama-stack

Model components of the Llama Stack APIs
MIT License

Configuring ollama build with redis results in an error #189

Closed clearstorm-tech closed 1 week ago

clearstorm-tech commented 1 week ago

Configuring ollama build with redis results in an error.

   llama stack run ollama-stack --port 5000

   Resolved 8 providers in topological order
    Api.models: routing_table
    Api.inference: router
    Api.shields: routing_table
    Api.safety: router
    Api.memory_banks: routing_table
    Api.memory: router
    Api.agents: meta-reference
    Api.telemetry: meta-reference

  Initializing Ollama, checking connectivity to server...
  [faiss] Registering memory bank routing keys: ['vector']

  ... 

  AttributeError: 'RedisKVStoreConfig' object has no attribute 'url'
  Error occurred in script at line: 42

  version: v1
  image_name: ollama-stack
  docker_image: null
  conda_env: ollama-stack
  apis_to_serve:
  - shields
  - safety
  - memory
  - memory_banks
  - inference
  - agents
  - models
  api_providers:
    inference:
      providers:
      - remote::ollama
    memory:
      providers:
      - meta-reference
    safety:
      providers:
      - meta-reference
    agents:
      provider_type: meta-reference
      config:
        persistence_store:
          namespace: llama-stack
          type: redis
          host: localhost
          port: 6379
    telemetry:
      provider_type: meta-reference
      config: {}
  routing_table:
    inference:
    - provider_type: remote::ollama
      config:
        host: localhost
        port: 11434
      routing_key: Llama3.1-8B-Instruct
    memory:
    - provider_type: meta-reference
      config: {}
      routing_key: vector
    safety:
    - provider_type: meta-reference
      config:
        llama_guard_shield: null
        enable_prompt_guard: false
      routing_key:
      - llama_guard
      - code_scanner_guard
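For context, the `AttributeError` above follows a common pattern: the Redis config class exposes `host` and `port` (as in the run config), while the KV-store implementation expects a `url` field. A minimal self-contained sketch of that mismatch, using hypothetical names rather than the actual llama-stack classes:

```python
from dataclasses import dataclass


@dataclass
class RedisKVStoreConfigSketch:
    # Mirrors the fields present in the run config above; note there is no `url`.
    host: str = "localhost"
    port: int = 6379
    namespace: str = "llama-stack"


def connect(config) -> str:
    # Code written against other KV-store configs (which do carry a `url`)
    # fails here, matching the traceback in the report.
    return config.url


try:
    connect(RedisKVStoreConfigSketch())
except AttributeError as e:
    print(e)  # 'RedisKVStoreConfigSketch' object has no attribute 'url'
```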
Minutis commented 1 week ago

Just had the same issue. It is not related to ollama; in general it occurs whenever you choose Redis as the persistence store. I just submitted a PR to fix it. It works locally, but I'm not sure whether this is the preferred approach or whether the Redis client should be built differently. Let's see what the maintainers think.

cheesecake100201 commented 1 week ago

Had the same issue a couple of days ago. You need to add `self.config.url = f"http://{host}:{port}"` in `llama_stack/providers/utils/kvstore/redis/redis.py` under `RedisKVStoreImpl`.