-
### OpenVINO Version
2024.4.0
### Operating System
Windows System
### Device used for inference
GPU
### Framework
None
### Model used
laion/CLIP-ViT-B-32-laion2B-s34B-b79K
…
-
cc @tpluscode @amivanoff
https://www.npmjs.com/search?q=shacl has a lot of useful tools.
Below is a rough list, but we need to find the Git repos and add all attributes needed by "awesome".
Add ca…
-
![0](https://github.com/user-attachments/assets/2b14aab2-019e-4f74-a78d-ef929471e9b5)
The inference of my model based on SyncTalk works perfectly fine, but after adapting this model to LiveTalking cod…
-
Tasks that have been identified and scheduled:
+ Fine-tuning support for Diffusers version models
+ Adaptation for CPU / NPU inference frameworks (e.g., Huawei, Intel devices)
+ ComfyUI adaptat…
-
### OpenVINO Version
2024.0
### Operating System
Ubuntu 20.04 (LTS)
### Device used for inference
CPU
### Framework
None
### Model used
_No response_
### Issue descriptio…
vient updated 1 month ago
-
### Issue description (Please describe your issue)
```
C++ Traceback (most recent call last):
--------------------------------------
0 paddle_infer::Predictor::Predictor(paddle::AnalysisConfig const&)
1 std::un…
```
-
So faster-whisper is built using CTranslate2, and checking the CTranslate2 GitHub, they say:
> "Multiple CPU architectures support
> The project supports x86-64 and AArch64/ARM64 processors and int…
-
When building the project on Ubuntu 22.04, the CMake configuration fails if TBB is not installed.
```sh
# Install gtsam (GTSAM_WITH_TBB=OFF) and iridescence
# Configure
git clone https://githu…
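A likely workaround, assuming the configure failure comes from CMake's `find_package(TBB)` not finding the library: install Ubuntu's TBB development package before configuring (the package name below is an assumption for Ubuntu 22.04, not confirmed by the issue).

```sh
# Assumed fix for the missing-TBB configure error on Ubuntu 22.04:
# install the TBB development headers and CMake config files,
# then re-run the CMake configure step.
sudo apt-get update
sudo apt-get install -y libtbb-dev
```

Alternatively, if the project consumes TBB only optionally, a configure-time switch analogous to the `GTSAM_WITH_TBB=OFF` flag mentioned above may let the build proceed without installing TBB at all.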
-
### OpenVINO Version
2024.3
### Operating System
Windows System
### Device used for inference
NPU
### Framework
ONNX
### Model used
Mobilenetv3
### Issue description
M…
-
## Description
I tried to convert the Flux DiT model on an L40S with TensorRT 10.5 and found that peak GPU memory exceeded 46068 MiB, while only 23597 MiB of GPU memory was occupied during inference. Is this n…