-
### 🚀 The feature, motivation and pitch
https://blackforestlabs.ai/#get-flux
FLUX models are the new SOTA open-source text-to-image models. I am wondering if this slightly different architecture mod…
-
Can we support the NPU acceleration library, with NPU inference model save/load in low-bit formats?
Loading the 7B model directly takes about 48 s.
-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussion…
-
**Describe the bug**
Can't train the model on the NPU. Once the model parameters are updated by the optimizer, all subsequent outputs are NaN.
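One framework-agnostic way to localize this kind of failure is to scan the parameters right after each optimizer step and report the first tensor that went NaN. A minimal sketch (the parameter names and values below are made up for illustration):

```python
import math

def first_nan_param(named_params):
    """Return the name of the first parameter containing a NaN, or None."""
    for name, values in named_params:
        if any(math.isnan(v) for v in values):
            return name
    return None

# Illustrative: inf - inf produces NaN, the same way an overflowing
# low-precision update on an accelerator can poison a weight tensor.
params = [("layer0.weight", [0.1, 0.2]),
          ("layer1.weight", [math.inf - math.inf, 0.3])]
print(first_nan_param(params))  # → layer1.weight
```

Running such a check between the optimizer step and the next forward pass shows whether the NaNs originate in the weight update itself or later in the NPU compute.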
**To Reproduce**
Try running the following code.
```
…
-
**Describe the bug**
My CPU is an Ultra 7 258V, and the system is Windows 11 Home 24H2. I just tried running the qwen2.5-7b-instruct model using your example code for the first time. However, I noticed t…
-
Can the model be quantized and uploaded independently so it works in Colab on a T4 with 12 GB of RAM,
or
used with an acceleration library and `device_map=auto`?
Does it support bitsandbytes to convert it to 4-bit?
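Whether a model of this size fits a Colab T4 (16 GiB VRAM, ~12 GB system RAM) is mostly a weight-memory question. A back-of-the-envelope sketch (the 7B parameter count is an assumption, and this ignores KV cache and activation memory):

```python
def weight_memory_gib(n_params: float, bits_per_param: int) -> float:
    """Approximate memory for the model weights alone, in GiB."""
    return n_params * bits_per_param / 8 / 1024**3

N = 7e9  # assumed 7B parameters
for bits in (16, 8, 4):
    print(f"{bits}-bit: {weight_memory_gib(N, bits):.1f} GiB")
# 16-bit weights (~13 GiB) barely fit a 16 GiB T4 with no headroom,
# while 4-bit (~3.3 GiB) leaves room for the KV cache — which is why
# 4-bit loading with device_map="auto" is the usual route on Colab.
```

This is also why an independently uploaded pre-quantized checkpoint helps: it avoids materializing the full-precision weights in the 12 GB of system RAM during conversion.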
-
### Describe the feature
I tried vLLM and LMDeploy using the following command:
```
python run.py \
--datasets humaneval_gen \
--hf-type chat \
--hf-path meta-llama/Meta-Llama-3-…
-
### Proposal
Investigate the use of physx.tensor.api for body-acceleration calculations in the IMU sensor using Isaac Sim 4.2. Previous versions of the acceleration calculations had some bugs with regard to …
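For cross-checking whatever the physics API reports, the usual fallback is to finite-difference the body velocity over the simulation step. A minimal sketch (the step size and velocity samples are made up):

```python
def linear_acceleration(v_prev, v_curr, dt):
    """Backward-difference body acceleration from two velocity samples."""
    return [(c - p) / dt for p, c in zip(v_prev, v_curr)]

# Example: 1 ms physics step, body speeding up along +x.
a = linear_acceleration([1.0, 0.0, 0.0], [1.01, 0.0, 0.0], 0.001)
print(a)  # ≈ [10.0, 0.0, 0.0] m/s^2
```

Note that a real IMU reports proper acceleration, i.e. this kinematic term minus gravity expressed in the body frame, so any comparison against the tensor-API readout has to account for that offset.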
-
Hi Everyone,
After closing #2, a number of people have continued to have layer-shift issues under various circumstances. I thought that maybe this was due to acceleration or jerk settings, but usin…
-
Supermium 126.0.6478.254 R4 on Windows XP SP3 fails to render models on posemy.art
**To Reproduce**
1. Use Supermium 126.0.6478.254 R4 on Windows XP SP3
2. Go to [https://posemy.art/app/?lang=en]…