Open kheyer opened 3 months ago
In an ideal world the engine (or the NVIDIA driver) would manage this as a pool of resources, just as you can declare a service to bind a port from a range and let the engine select an available port within it, so that scaling is not an issue.
About #9153, I would not be comfortable relying on the container number to index replicas. While we try to keep this somewhat sequential, there are many corner cases and no guarantee you would always get a value within the [1..GPU_COUNT] interval.
Currently this is the best solution I've found using only the Compose file, without setting different environment variables in each container. But something akin to the way port ranges are assigned would be nice.
```yaml
services:
  # using the YAML fragment solution for multiple GPU selection at the Docker level:
  # https://docs.docker.com/reference/compose-file/fragments/
  thingy-1: &default-service
    image: ubuntu
    build: .
    restart: always
    command: [ "nvidia-smi" ]
    # ports:
    #   - "8000-8004:8000"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0"]
              capabilities: [gpu]
  thingy-2:
    <<: *default-service
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["1"]
              capabilities: [gpu]
  thingy-3:
    <<: *default-service
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["2"]
              capabilities: [gpu]
```
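If maintaining many near-identical service blocks by hand gets unwieldy, one way to sidestep the copy-paste is to generate the Compose definition with a small script. This is just a sketch of that idea, not part of any proposal above; the `make_service` and `make_compose` helpers are hypothetical names:

```python
# Sketch: generate one service entry per GPU instead of hand-writing them.
# make_service / make_compose are illustrative helpers, not a real API.
import json


def make_service(gpu_id: int) -> dict:
    """Build one Compose service pinned to a single GPU device id."""
    return {
        "image": "ubuntu",
        "build": ".",
        "restart": "always",
        "command": ["nvidia-smi"],
        "deploy": {
            "resources": {
                "reservations": {
                    "devices": [
                        {
                            "driver": "nvidia",
                            "device_ids": [str(gpu_id)],
                            "capabilities": ["gpu"],
                        }
                    ]
                }
            }
        },
    }


def make_compose(gpu_count: int) -> dict:
    """One service per GPU: thingy-1 -> GPU 0, thingy-2 -> GPU 1, ..."""
    return {"services": {f"thingy-{i + 1}": make_service(i) for i in range(gpu_count)}}


if __name__ == "__main__":
    # JSON is valid YAML, so the output can be fed straight to Compose.
    print(json.dumps(make_compose(8), indent=2))
```

This keeps the single source of truth in the script rather than in eight hand-edited blocks, at the cost of an extra generation step before `docker compose up`.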
Description
I have a service that runs on a single GPU. If I have multiple GPUs available, I would like to create one replica of this service for each GPU available.
Currently, I can only do this by explicitly defining the service multiple times in the docker-compose file and changing the `device_ids` section of the resources for each one. Having to create 8 near-identical copies of the same service in the config file is unwieldy. I would like to specify the service once and set the number of replicas with `replicas`.
However, this requires some way for each replica to know which GPU to use. My understanding is that there is currently no way to do this, and there isn't a good way to derive a "replica index" for each container (see https://github.com/docker/compose/issues/9153).
Some method of mapping replicas of a service to different GPUs would be helpful.
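In the meantime, one runtime workaround is to let each replica claim a GPU itself at startup instead of relying on a replica index: an entrypoint queries `nvidia-smi` for per-GPU memory usage and exports `CUDA_VISIBLE_DEVICES` for the least-loaded device. This is only a sketch; it assumes every replica can see all GPUs (no `device_ids` reservation) and that start-time races between replicas are acceptable, and `pick_gpu` is a hypothetical helper:

```python
# Sketch: each replica picks its own GPU at container start.
# Assumes all GPUs are visible to the container; races between
# simultaneously starting replicas are not handled here.
import os
import subprocess


def pick_gpu(csv_lines: list[str]) -> str:
    """Given lines from `nvidia-smi --query-gpu=index,memory.used
    --format=csv,noheader,nounits` such as "0, 1024", return the
    index of the GPU with the least memory in use."""
    usage = []
    for line in csv_lines:
        index, mem_used = (field.strip() for field in line.split(","))
        usage.append((int(mem_used), index))
    return min(usage)[1]


if __name__ == "__main__":
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    os.environ["CUDA_VISIBLE_DEVICES"] = pick_gpu(out.strip().splitlines())
    # Replace this wrapper with the real workload command.
    os.execvp("nvidia-smi", ["nvidia-smi"])
```

A fragile heuristic compared with engine-level scheduling, but it at least removes the need for per-replica configuration in the Compose file.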