open-gpu-server

This repository provides information about the OpenStack instance that Quansight and MetroStar provide to conda-forge and other communities.

Access

The main intent of this service is to provide GPU CI to those conda-forge feedstocks that require it. To do so:

Incidents

If you suspect the server is not operating as expected, please check:

If you think there should be an open incident report but there's none, please open a new issue and tag @Quansight/open-gpu-server so the team can take a look. Thanks!

Base configuration

Available runners

The server can spin up VMs with the following configurations:

GPU runners

| Name | vCPUs | RAM | Disk | GPUs |
| --- | --- | --- | --- | --- |
| gpu_tiny | 4 | 2GB | 20GB | 1x NVIDIA Tesla V100 |
| gpu_medium | 4 | 8GB | 50GB | 1x NVIDIA Tesla V100 |
| gpu_large | 4 | 12GB | 60GB | 1x NVIDIA Tesla V100 |
| gpu_xlarge | 8 | 16GB | 60GB | 1x NVIDIA Tesla V100 |
| gpu_2xlarge | 8 | 32GB | 60GB | 1x NVIDIA Tesla V100 |
| gpu_4xlarge | 8 | 64GB | 60GB | 1x NVIDIA Tesla V100 |

These runners use the ubuntu-2204-nvidia-20230914 image.
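When a job lands on one of these runners, it can be useful to confirm that the Tesla V100 is actually visible before running the real workload. The following is a minimal, illustrative Python sketch (not part of this repository) that assumes the nvidia-smi tool is available on the image, which is expected for the NVIDIA-enabled image above:

```python
import subprocess


def gpu_visible() -> bool:
    """Return True if nvidia-smi reports at least one GPU (assumes NVIDIA drivers are installed)."""
    try:
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True,
            text=True,
            check=True,
        )
    except (FileNotFoundError, subprocess.CalledProcessError):
        # nvidia-smi missing or failing means the GPU is not usable.
        return False
    names = [line.strip() for line in result.stdout.splitlines() if line.strip()]
    return len(names) >= 1


if __name__ == "__main__":
    # Exit non-zero so a CI step fails fast when no GPU is detected.
    raise SystemExit(0 if gpu_visible() else 1)
```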

CPU runners

| Name | vCPUs | RAM | Disk |
| --- | --- | --- | --- |
| ci_medium | 4 | 8GB | 60GB |
| ci_large | 4 | 12GB | 60GB |
| ci_xlarge | 4 | 32GB | 60GB |
| ci_2xlarge | 8 | 32GB | 60GB |
| ci_4xlarge | 8 | 64GB | 60GB |

These runners use the ubuntu-2204-20231018 image.
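Since the runner sizes differ only in vCPUs, RAM, and disk, a quick diagnostic at the start of a job can confirm the VM matches the size that was requested. Below is a small, illustrative Python sketch (not part of this repository); it assumes a Linux guest such as the Ubuntu 22.04 images listed above:

```python
import os


def describe_runner() -> dict:
    """Report the vCPU count and total RAM visible inside the VM (Linux only)."""
    mem_total_kib = 0
    with open("/proc/meminfo") as fh:
        for line in fh:
            if line.startswith("MemTotal:"):
                mem_total_kib = int(line.split()[1])  # value is reported in kiB
                break
    return {
        "vcpus": os.cpu_count(),
        "ram_gib": round(mem_total_kib / (1024 * 1024), 1),
    }


if __name__ == "__main__":
    info = describe_runner()
    print(f"vCPUs: {info['vcpus']}, RAM: {info['ram_gib']} GiB")
```

Printing this at the top of a CI log makes it easy to spot when a job was scheduled on a smaller runner than intended.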

Software

The runner VMs boot images derived from Ubuntu 22.04. These images are built following the instructions provided in the images folder.

Limitations

Support

This service is provided as-is, with no guarantees of uptime or support. If you have any questions, please open an issue in this repository and we'll do our best to help.