-
Based on the README, it sounds like we have to build a separate Docker image for each model type.
So if we want to support SD2+, SDXL, and SD3 models, do we need to build three separate Docker images?
-
When running `cog predict` on large models (SDXL, for example), users with slow internet connections, or far away from the weight storage (Australia seems to be quite far from r8.im storage), experience time…
-
I am trying to use this worker with custom nodes. When I deploy using your Docker image, everything works smoothly on RunPod as expected. However, when I fork the repo, build a local image, then push and de…
-
I was trying to run a clean ComfyUI instance with custom models and nodes, so I tried setting up the serverless endpoint using the `timpietruskyblibla/runpod-worker-comfy:3.0.0-base` Docker image; however, i…
-
### Describe the bug
Due to network restrictions, I cannot use Xinference to pull models online. I downloaded the sdxl-turbo model weights to my local machine, and then used Xinference (docker co…
-
I run `bentoml containerize`.
### To reproduce
- I cloned the Stable Diffusion sample from the BentoML guides
- Ran `bentoml serve` successfully
- Tried to run `bentoml build` and bentoml containeriz…
-
Hi, I found that I can't load my model on SageMaker because, inside the Docker container, my model hits an OOM while loading in the SageMaker deploy container.
I checked that with my EC2 instance inf2.xlarge …
-
I ran this in the Docker environment; this is the error it feeds back:
```
Traceback (most recent call last):
  File "/home/StoryDiffusion/gradio_app_sdxl_specific_id_low_vram.py", line 2, in <module>
    import gradio a…
```
-
1. Use cog-sdxl from https://github.com/replicate/cog-sdxl
2. Run `cog build` to generate a Docker image
3. Run the Docker image locally with GPU access
4. Call the train interface using curl
```
curl -X 'POST' \
…
-
When using ComfyUI and running `run_with_gpu.bat`, importing a JSON file may result in missing nodes. This issue can be easily fixed by opening the Manager and clicking "Install Missing Nodes," allow…
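As a rough illustration of what "missing nodes" means here, the sketch below scans an exported workflow JSON for node types that are not registered locally — essentially what the Manager's "Install Missing Nodes" check does before offering installs. The `nodes`/`type` structure matches ComfyUI's exported workflow format, but the sample workflow and the `installed` set are made up for this example:

```python
import json

# Hypothetical example: a tiny exported ComfyUI workflow.
# Real workflows are much larger, but the exported JSON carries a
# "nodes" list whose entries have a "type" (the node's class name).
workflow_json = """
{
  "nodes": [
    {"id": 1, "type": "CheckpointLoaderSimple"},
    {"id": 2, "type": "KSampler"},
    {"id": 3, "type": "SomeCustomUpscaler"}
  ]
}
"""

# Node classes available in this (made-up) local install.
installed = {"CheckpointLoaderSimple", "KSampler", "CLIPTextEncode"}

def find_missing_nodes(workflow: dict, available: set) -> set:
    """Return node types used by the workflow but not installed locally."""
    used = {node["type"] for node in workflow.get("nodes", [])}
    return used - available

missing = find_missing_nodes(json.loads(workflow_json), installed)
print(sorted(missing))  # the custom node(s) the Manager would need to install
```

With the sample data above, only the custom upscaler node is reported as missing; installing the custom-node pack that provides it (as the Manager does) clears the import error.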