Run Kohya's GUI in a docker container locally or in the cloud.
> [!NOTE]
> These images do not bundle models or third-party configurations. You should use a provisioning script to automatically configure your container. You can find examples in `config/provisioning`.
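A minimal sketch of that workflow, assuming the base image's `PROVISIONING_SCRIPT` variable and an image path of `ghcr.io/ai-dock/kohya_ss` (both worth verifying against the base documentation and the package listing); the script URL is a placeholder for your own:

```bash
# Sketch: start the container with a remote provisioning script.
# PROVISIONING_SCRIPT is a base-image feature; the image path and script URL
# are placeholders -- substitute your own. --gpus all applies to NVIDIA hosts.
docker run -d \
  --gpus all \
  -e PROVISIONING_SCRIPT="https://example.com/my-provisioning.sh" \
  -p 7860:7860 \
  ghcr.io/ai-dock/kohya_ss:latest-cuda
```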
All AI-Dock containers share a common base which is designed to make running on cloud services such as vast.ai as straightforward and user-friendly as possible.
Common features and options are documented in the base wiki, but any additional features unique to this image will be detailed below.
The `:latest` tag points to `:latest-cuda`.

Tags follow these patterns:

- `:v2-cuda-[x.x.x]-base-[ubuntu-version]`
- `:latest-cuda` → `:v2-cuda-12.1.1-base-22.04`
- `:v2-rocm-[x.x.x]-core-[ubuntu-version]`
- `:latest-rocm` → `:v2-rocm-6.0-core-22.04`
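For example, assuming the images are published under `ghcr.io/ai-dock/kohya_ss` (check the package listing for the canonical path), a tag can be pulled directly:

```bash
# Sketch: pull by tag; the registry path is an assumption.
docker pull ghcr.io/ai-dock/kohya_ss:latest-cuda               # follows :v2-cuda-12.1.1-base-22.04
docker pull ghcr.io/ai-dock/kohya_ss:v2-rocm-6.0-core-22.04    # pin an exact ROCm build
```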
Browse here for an image suitable for your target environment.
Supported Python versions: 3.10
Supported Platforms: NVIDIA CUDA, AMD ROCm
| Variable | Description |
| --- | --- |
| `AUTO_UPDATE` | Update Kohya_ss on startup (default `false`) |
| `KOHYA_ARGS` | Startup arguments |
| `KOHYA_PORT_HOST` | Kohya's GUI port (default `7860`) |
| `KOHYA_REF` | Git reference for auto update. Accepts branch, tag or commit hash. Default: latest release |
| `KOHYA_URL` | Override `$DIRECT_ADDRESS:port` with URL for Kohya's GUI |
| `TENSORBOARD_ARGS` | Startup arguments (default `--logdir /opt/kohya_ss/logs`) |
| `TENSORBOARD_PORT_HOST` | Tensorboard port (default `6006`) |
| `TENSORBOARD_URL` | Override `$DIRECT_ADDRESS:port` with URL for Tensorboard |
See the base environment variables here for more configuration options.
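As a rough sketch of how these variables are passed at container start (the image path, GPU flag and chosen values are illustrative only):

```bash
# Sketch: override ports and update behaviour at container start.
# Values and image path are examples, not required defaults.
docker run -d \
  --gpus all \
  -e AUTO_UPDATE=true \
  -e KOHYA_PORT_HOST=7862 \
  -e TENSORBOARD_PORT_HOST=6006 \
  -p 7862:7862 \
  -p 6006:6006 \
  ghcr.io/ai-dock/kohya_ss:latest-cuda
```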
| Environment | Packages |
| --- | --- |
| `kohya` | Kohya's GUI and dependencies |
This virtualenv will be activated on shell login.
See the base environments here.
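To confirm the environment from a shell inside a running container, something like the following should work (the container name `kohya` is only an example):

```bash
# Sketch: the login shell activates the kohya virtualenv, so python and pip
# should resolve inside it. Container name is an example.
docker exec -it kohya bash
which python              # expect a path inside the kohya environment
pip list | grep -i torch  # Kohya's dependencies should be visible here
```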
The following services will be launched alongside the default services provided by the base image.
The service will launch on port `7860` unless you have specified an override with `KOHYA_PORT_HOST`.

You can set startup arguments by using the variable `KOHYA_ARGS`.

To manage this service you can use `supervisorctl [start|stop|restart] kohya_ss`.
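For instance, a typical restart-and-watch cycle from a shell inside the container might look like this sketch:

```bash
# Sketch: restart the GUI service and follow its output via supervisor.
supervisorctl restart kohya_ss
supervisorctl status kohya_ss
supervisorctl tail -f kohya_ss   # stream the service's stdout log
```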
The service will launch on port `6006` unless you have specified an override with `TENSORBOARD_PORT_HOST`.

To manage this service you can use `supervisorctl [start|stop|restart] tensorboard`.
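A quick status check from inside the container might look like the sketch below; it assumes `curl` is available in the image, and the response may be an auth challenge rather than 200 if the port sits behind the default password protection:

```bash
# Sketch: confirm Tensorboard is running and something answers on port 6006.
supervisorctl status tensorboard
curl -sI http://localhost:6006 | head -n 1   # any HTTP status line means the listener is up
```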
> [!NOTE]
> All services are password protected by default. See the security and environment variables documentation for more information.
Vast.ai
The author (@robballantyne) may be compensated if you sign up to services linked in this document. Testing multiple variants of GPU images in many different environments is both costly and time-consuming; this helps to offset costs.