-
Can we get support for an NPU acceleration library, and for saving/loading NPU inference models in low bits?
It takes about 48s to load the 7B model directly.
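A low-bit on-disk format is what would cut that load time: instead of re-quantizing the fp16 weights on every start, the already-quantized integers are packed and written once. The sketch below shows the packing idea only, in plain Python; the function names are illustrative and are not the API of any particular NPU library.

```python
# Sketch of "save in low bits": pack 4-bit quantized weights (values 0..15)
# two per byte, so a checkpoint takes 1/4 the space of fp16 and can be
# loaded without re-quantizing. Names here are illustrative assumptions.

def pack_int4(values):
    """Pack 4-bit integers (0..15) two per byte."""
    if len(values) % 2:
        values = values + [0]              # pad to an even count
    packed = bytearray()
    for lo, hi in zip(values[::2], values[1::2]):
        packed.append((hi << 4) | lo)
    return bytes(packed)

def unpack_int4(packed, count):
    """Inverse of pack_int4; `count` trims the padding nibble."""
    values = []
    for byte in packed:
        values.append(byte & 0x0F)
        values.append(byte >> 4)
    return values[:count]

weights = [3, 15, 0, 7, 9]
blob = pack_int4(weights)                  # 3 bytes for five 4-bit weights
assert unpack_int4(blob, len(weights)) == weights
```

Loading then reduces to reading the blob and unpacking, which is why low-bit save/load is usually much faster than quantizing at startup.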
-
### 🚀 The feature, motivation and pitch
https://blackforestlabs.ai/#get-flux
FLUX models are the new SOTA open-source text-to-image models. I am wondering if this slightly different architecture mod…
-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussion…
-
- [X] I have searched the existing issues
### Current behavior
I was playing with Jan for the first time and realised that GPU acceleration wasn't enabled.
I toggled the "GPU Acceleration" s…
-
I did some research on how it can be estimated from vertical acceleration.
There are two approaches.
One is to plug the acceleration into a trochoidal wave model, which makes it possible to calculate dis…
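The core of the acceleration-based approach can be sketched with a simplifying assumption of mine: a single-frequency linear (sine) wave standing in for the full trochoidal model. For η(t) = A·sin(ωt) the measured vertical acceleration is a(t) = −Aω²·sin(ωt), so the displacement amplitude is recovered as A = a_amp/ω² and the wave height is H = 2A.

```python
import math

# Assumption: a single-frequency linear wave approximates the trochoidal
# model. Displacement amplitude = acceleration amplitude / omega**2.

def wave_height_from_acceleration(acc_samples, omega):
    """Estimate wave height [m] from vertical-acceleration samples and
    wave angular frequency omega [rad/s]."""
    a_amp = max(abs(a) for a in acc_samples)   # crude amplitude estimate
    return 2.0 * a_amp / omega ** 2

# Synthetic check: 1 m amplitude, 0.8 rad/s wave, 20 s of 20 Hz samples.
A, w = 1.0, 0.8
t = [i * 0.05 for i in range(400)]
acc = [-A * w ** 2 * math.sin(w * ti) for ti in t]
print(wave_height_from_acceleration(acc, w))   # ~2.0 m, i.e. H = 2*A
```

A real implementation would estimate the amplitude and frequency spectrally rather than from a raw maximum, but the ω² division is the same step.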
-
I'm currently inspecting the code for the deceleration behaviour of the "CC human driver" for the ALKS deceleration scenario. I took the Reg 157 and the JAMA paper (https://www.grcc.vip/article-7247.h…
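For orientation while reading that code, here is only the generic constant-deceleration kinematics that human-driver braking models are typically built on; this is my own simplification, not the actual Reg 157 / JAMA parameterisation discussed above.

```python
# Generic sketch (my assumption): reaction phase at constant speed,
# then braking at constant deceleration. Not the Reg 157 model itself.

def stopping_distance(v0, reaction_time, decel):
    """Distance [m] travelled from speed v0 [m/s]:
    v0*t_react during reaction, then v0**2 / (2*a) while braking."""
    return v0 * reaction_time + v0 ** 2 / (2.0 * decel)

# e.g. 60 km/h, 0.75 s reaction time, 6 m/s^2 braking
v0 = 60 / 3.6
print(round(stopping_distance(v0, 0.75, 6.0), 1))   # → 35.6 m
```

The regulatory models layer perception and risk terms on top of this, but the two-phase structure is the common core.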
-
Hi! Very impressive project!
My main goal is to export the model to an intermediate format and test how well it can be accelerated on many platforms. I am trying to accelerate the assembled convolution module for be…
-
## Goal
- Cortex can generate a model compatibility prediction based on the user's hardware and `model.yaml`
- This should be an API that Jan can call (potentially as part of `GET /models` and `GET /…
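The prediction described above could look something like the sketch below; every field and threshold here is my assumption for illustration, not Cortex's actual `model.yaml` schema or API.

```python
# Hypothetical compatibility check: estimate the model's memory need from
# (assumed) model.yaml fields and compare it to the user's hardware.

def predict_compatibility(model_yaml, hardware):
    """Return a coarse verdict: "gpu", "cpu", or "incompatible"."""
    # crude estimate: params (billions) * bytes per weight, plus 20% overhead
    bytes_per_weight = {"q4": 0.5, "q8": 1.0, "fp16": 2.0}[model_yaml["quant"]]
    need_gb = model_yaml["params_b"] * bytes_per_weight * 1.2
    if need_gb <= hardware["vram_gb"]:
        return "gpu"
    if need_gb <= hardware["ram_gb"]:
        return "cpu"
    return "incompatible"

model = {"params_b": 7, "quant": "q4"}   # 7B model, 4-bit quantized
print(predict_compatibility(model, {"vram_gb": 8, "ram_gb": 16}))   # → gpu
print(predict_compatibility(model, {"vram_gb": 2, "ram_gb": 16}))   # → cpu
```

Exposing this as part of `GET /models` would let Jan render a badge per model without a second round trip.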
-
### Describe the feature
Are there any plans to support accelerated training of the StableDiffusion3 algorithm model?
-
**Describe the bug**
I used SmoothQuant and GPTQ to quantize Qwen2-0.5B-Instruct, but the model size increased from 0.94G to 1.85G.
**Expected behavior**
A clear and concise descri…
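A quick arithmetic check suggests one plausible explanation (not a confirmed diagnosis): the two reported sizes line up with the same ~0.5B weights stored at fp16 and fp32 respectively, i.e. the "quantized" checkpoint may have been written back as fp32 rather than in a packed low-bit format.

```python
# Size arithmetic for ~0.494e9 parameters (approx. Qwen2-0.5B-Instruct):
# the reported 0.94G and 1.85G match fp16 and fp32 storage of the same
# weights, while a properly packed 4-bit checkpoint would be far smaller.

params = 0.494e9
GiB = 2 ** 30

fp16_gib = params * 2 / GiB    # original checkpoint: ~0.92 GiB (reported 0.94G)
fp32_gib = params * 4 / GiB    # fp32 write-back:     ~1.84 GiB (reported 1.85G)
int4_gib = params * 0.5 / GiB  # packed 4-bit:        ~0.23 GiB

print(round(fp16_gib, 2), round(fp32_gib, 2), round(int4_gib, 2))
```

If that is what happened, checking how the quantized weights are serialized (fake-quantized fp32 tensors vs. packed integer tensors) would be the first thing to verify.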