-
```
sparqles_backend-svc_1 | 14-10-24 21:00:34 [ INFO] sparqles.core.EndpointTask:57 - EXECUTE ATask(http://http://opendata.intellidomo.es:8080/sparql/)
sparqles_backend-svc_1 | 14-10-24 21:00:34 …
```
-
What a great job. When I use an A800 80G with default parameters to infer a 768P video, I find that GPU memory increases first and then decreases. The step inference is fine, but an OOM error is re…
-
We've dealt with OOM errors before, most prominently documented here in #157 (which was closed and re-opened and then closed again). The last time it was closed we had resolved the issue in #172 by r…
-
Getting OOM when using 2x4090.
Trying `t2v` using `save_memory=True`, `cpu_offloading=False`, `variant=diffusion_transformer_768p`, `inference_multigpu=True`, `bf16`.
Is the model expected to fi…
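A quick sanity check is whether bf16 weights alone fit in 24 GB per GPU before activations are counted. A sketch only: the 8e9 parameter count below is a placeholder, not the actual size of `diffusion_transformer_768p`, and `bf16_weight_gib` is our own helper name.

```python
# Rough check of bf16 weight footprint (weights only; activations,
# attention buffers, and the VAE all add more on top of this).
# The 8e9 parameter count is a placeholder, not the real model size.

def bf16_weight_gib(n_params: int) -> float:
    """Bytes for bf16 weights (2 bytes/param), expressed in GiB."""
    return n_params * 2 / 2**30

print(round(bf16_weight_gib(8_000_000_000), 1))  # -> 14.9
```

If the weights alone approach a card's capacity, `cpu_offloading=True` (or sharding across both GPUs) is usually needed regardless of other memory-saving flags.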
-
## Problem/Opportunity Statement
We will eventually enable memory limits for CI jobs, but there is currently no way to detect this in k8s/Prometheus in our environment.
For example, I set `KUBERNETES_…
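One generic way to detect an enforced memory limit from inside a container is to read the cgroup v2 interface file directly. A sketch under assumptions: the path and the `max` sentinel follow the cgroup v2 convention (cgroup v1 uses `memory/memory.limit_in_bytes` instead), and the helper names are ours, not part of any k8s API.

```python
# Sketch: detect the container's memory limit via cgroup v2.
# /sys/fs/cgroup/memory.max contains either a byte count or the
# literal string "max" when no limit is enforced.

def parse_memory_max(raw: str):
    """Return the limit in bytes, or None when the cgroup reports 'max'."""
    raw = raw.strip()
    return None if raw == "max" else int(raw)

def read_cgroup_memory_limit(path="/sys/fs/cgroup/memory.max"):
    try:
        with open(path) as f:
            return parse_memory_max(f.read())
    except FileNotFoundError:
        return None  # not cgroup v2, or not running in a container

if __name__ == "__main__":
    print(read_cgroup_memory_limit())
```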
-
When I run your demo (`bash scripts/attack_timestep.sh`), an OOM problem occurs:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 256.00 MiB. GPU 0 has a total capacity of 23.69 GiB of…
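Not a fix if the model is genuinely too large, but when the failed allocation is small (256 MiB here) relative to total VRAM, fragmentation in PyTorch's caching allocator is often the culprit. A hedged sketch of two standard allocator settings; `scripts/attack_timestep.sh` itself is unchanged:

```shell
# Let the caching allocator grow segments instead of reserving fixed
# blocks (reduces fragmentation; available in recent PyTorch releases):
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
bash scripts/attack_timestep.sh

# Alternative: cap the split size so large free blocks are not carved up:
# export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
```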
-
## Description
We have a bug report from AllNodes about a Lighthouse node using 77GB of RAM when switching from `--prune-blobs false` to `--prune-blobs true`.
They helpfully shared a jemalloc me…
-
Hi, I'm using a 786×1024 image as the base image with the "workflow_FUN_I2V_GGUF_Q4_0.png" workflow. Generating a 1024-resolution video runs out of VRAM; at 768 resolution there is no problem.
GPU: RTX 4060 Ti 16GB.
OS: Ubuntu 22.04, running in Docker.
VAE encode tiling is already enabled.
Is it because my base image is too large?
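Some quick arithmetic on why 1024 can fail where 768 fits: per-frame activation and latent memory scales roughly with pixel count. A sketch only; the square dimensions below are illustrative, and real usage also depends on the model, frame count, and tiling settings.

```python
# Back-of-the-envelope: activation memory scales ~linearly with pixel
# count, so 768 -> 1024 on both axes costs about (1024/768)^2 = ~1.78x.
# Illustrative only; exact usage depends on the model and tiling.

def pixel_ratio(w1, h1, w2, h2):
    """Ratio of pixel counts between two resolutions."""
    return (w2 * h2) / (w1 * h1)

print(round(pixel_ratio(768, 768, 1024, 1024), 2))  # -> 1.78
```

So a workload that just fits in 16 GB at 768 resolution can plausibly need ~28 GB-equivalent headroom at 1024, independent of the base-image size.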
-
## Description
[Description of the issue]
## Steps to Reproduce
1. [First step]
2. [Second step]
3. [Third step]
4. [More steps as needed]
## Expected Behavior
[What you expected to …
-
24GB VRAM 3090, 32GB RAM
Is this essentially expected behavior when changing a base model? It happens 99% of the time. I've tried so many different combinations of settings and every one crashes. Thi…