-
Hi, I was wondering if there is any support for CPU inference. The sample script from hubconf.py doesn't run even after all the code instructing tensors and models to move to CUDA was removed per…
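Removing `.cuda()` calls alone usually isn't enough, because `torch.load` also tries to restore CUDA tensors from the checkpoint. A minimal sketch of the usual fix (a toy module stands in for the real hub model, which is an assumption here):

```python
import io
import torch
import torch.nn as nn

# Toy stand-in for the real checkpoint; the map_location fix below
# applies to any checkpoint that was saved on a GPU.
model = nn.Linear(4, 2)

buf = io.BytesIO()
torch.save(model.state_dict(), buf)
buf.seek(0)

# On a CPU-only build, torch.load fails on CUDA tensors unless every
# tensor in the checkpoint is remapped to the CPU at load time.
state = torch.load(buf, map_location="cpu")
model.load_state_dict(state)
model.to("cpu").eval()

with torch.no_grad():
    out = model(torch.zeros(1, 4))
print(out.device)  # cpu
```

The same `map_location="cpu"` argument works for full-model checkpoints, not just state dicts.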
-
### Status Updates
2024-08-22 :: [Comment added](https://github.com/ultralytics/ultralytics/issues/15049#issuecomment-2305870565)
2024-09-01 :: [Windows PyTorch 2.4.0 install blocked](https://gi…
-
How do I modify the project to run predictions on the CPU (on a machine without a GPU or a pytorch-cuda configuration)? Please help.
-
From the [docs](https://ludwig.ai/latest/getting_started/serve/), Ludwig spawns a REST API for inference. By default, it runs on a GPU.
However, is there any option to do this using CPU only f…
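One generic workaround, not specific to Ludwig, is to hide all GPUs from CUDA before starting the server, which forces PyTorch onto the CPU. The `serve` invocation below is a sketch based on the Ludwig docs linked above, and the `--model_path` value is a placeholder:

```shell
# Hiding every GPU via CUDA_VISIBLE_DEVICES makes torch.cuda.is_available()
# return False, so the served model falls back to the CPU.
# Replace the --model_path value with your trained model directory.
CUDA_VISIBLE_DEVICES="" ludwig serve --model_path ./results/experiment_run/model
```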
-
-
There are several projects aiming to make CPU inference efficient.
The first part is research:
- Which project works best,
- Which is compatible with the Refact license,
- And which doesn't bloat the dock…
-
The Intel xpu and cpu images referenced in the documentation do not exist:
* https://huggingface.co/docs/text-generation-inference/en/installation_intel
* https://github.com/huggingface/text-generation-infe…
-
I have fine-tuned the "meta-llama-3.1-8b-bnb-4bit" model using Unsloth. I have downloaded the LoRA weights and am able to run inference with them on a Colab GPU.
But I want to use this fine-tuned model for …
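For CPU inference, one common route is to merge the LoRA deltas into an unquantized copy of the base weights first (PEFT's `merge_and_unload()` does this), since the bnb-4bit quantized base generally requires CUDA. The merge itself is just a scaled low-rank update; a sketch with toy shapes, not the real Llama dimensions:

```python
import torch

# LoRA stores low-rank deltas A (r x in) and B (out x r); merging them into
# the base weight lets the merged model run anywhere, including the CPU,
# without the adapter machinery. Shapes here are toy stand-ins.
torch.manual_seed(0)
W = torch.randn(8, 8)          # base weight
A = torch.randn(4, 8) * 0.01   # lora_A
B = torch.randn(8, 4) * 0.01   # lora_B
alpha, r = 16, 4

# Merged weight: W' = W + (alpha / r) * B @ A
W_merged = W + (alpha / r) * (B @ A)

x = torch.randn(1, 8)
# After merging, a single matmul reproduces base output + adapter output.
assert torch.allclose(
    x @ W_merged.T,
    x @ W.T + (alpha / r) * (x @ A.T @ B.T),
    atol=1e-5,
)
```

After merging, the model can be saved and loaded with plain `transformers` on the CPU in float32.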
-
I downloaded the official yolov8n-seg.pt and yolov8s-seg.pt models and used the following code to generate the yolov8n-seg.onnx and yolov8s-seg.onnx files:
```python
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8s-seg.pt")  # load an official model
…
```
-
### Question
I was trying to run LLaVA inference on the CPU, but it complains "Torch not compiled with CUDA enabled". I noticed that cuda() is called when loading the model. If I remove all the cuda() invoc…
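A device-agnostic loading pattern replaces the hard-coded `.cuda()` and `.half()` calls with a single device check. Sketched on a toy module, not the actual LLaVA loading code:

```python
import torch
import torch.nn as nn

# Pick the device once; hard-coded .cuda() calls are what trigger
# "Torch not compiled with CUDA enabled" on CPU-only builds.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(16, 16), nn.ReLU()).to(device)
# Half precision is typically unsupported or slow on the CPU,
# so only downcast when a GPU is actually present.
if device.type == "cuda":
    model = model.half()

x = torch.randn(2, 16, device=device, dtype=next(model.parameters()).dtype)
print(model(x).shape)  # torch.Size([2, 16])
```

Inputs created with `device=device` then follow the model automatically, so no other call sites need to change.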