-
### OpenVINO Version
2022.3.2
### Operating System
Ubuntu 20.04 (LTS)
### Device used for inference
CPU
### Framework
ONNX
### Model used
distilbert / distilbert-base-uncas…
-
Thank you for sharing your work and resources.
While running the command `python src/joint_training/generate_explanation_results.py`, I noticed that it seems to require the use of ChatGPT. However, as …
-
ONNX has evolved into much more than just a specification for exchanging models. Here's a breakdown of why:
- **ONNX Runtime**: A highly optimized inference engine that executes ONNX models. This activel…
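As a minimal sketch of what "executing an ONNX model" looks like with ONNX Runtime: the helper below wraps a single forward pass. The model path and the input names (`input_ids`, `attention_mask`) are placeholders typical of a transformer export such as DistilBERT; a real model's input names may differ.

```python
import numpy as np


def build_feed(input_ids, attention_mask):
    """Assemble the feed dict a typical transformer ONNX export expects.

    The input names here are assumptions; check your model's actual
    input names with `sess.get_inputs()`.
    """
    return {
        "input_ids": np.asarray(input_ids, dtype=np.int64),
        "attention_mask": np.asarray(attention_mask, dtype=np.int64),
    }


def run_onnx(model_path, feed):
    """Run one forward pass with ONNX Runtime (requires `pip install onnxruntime`)."""
    import onnxruntime as ort

    sess = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
    # `None` asks for all model outputs; index [0] is usually the logits.
    return sess.run(None, feed)
```

Calling `run_onnx("model.onnx", build_feed([[101, 2023, 102]], [[1, 1, 1]]))` would then return the model's outputs as NumPy arrays.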
-
### Bug Description
When trying to delete my whole model and all the charms, the `kserve-operator` charm was left behind with the following status:
```
Unit Workload Agent Address…
-
Hello,
Compared to all the open-source models for video generation, Allegro is the best model and framework I have found so far. Well done!
But I have one issue: it takes one hour for proce…
-
## What happened + What you expected to happen
When using the new `enable_env_runner_and_connector_v2` feature in RLlib, the `env_runners` do not have access to the GPU for inference on the env_runne…
-
I noticed the following statement in the README:
> The SenseVoice-Small model utilizes a non-autoregressive end-to-end framework, leading to exceptionally low inference latency.
So, what is the…
-
### 🚀 The feature, motivation and pitch
Rerank models are essential to the RAG workflow. There are quite a few models available, such as jina-reranker-v2. Some inference frameworks already support rera…
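To illustrate where a reranker slots into a RAG pipeline: a first-stage retriever returns candidate documents, and the reranker re-scores each (query, document) pair to reorder them. The sketch below uses a toy lexical-overlap scorer purely so the flow is runnable; in practice `score_pair` would be a cross-encoder model such as jina-reranker-v2.

```python
def score_pair(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query tokens found in the doc.
    Stands in for a real cross-encoder reranker model."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / max(len(q), 1)


def rerank(query: str, docs: list[str], top_k: int = 3) -> list[str]:
    """Reorder retrieved candidates by pairwise relevance and keep the best."""
    scored = sorted(docs, key=lambda d: score_pair(query, d), reverse=True)
    return scored[:top_k]
```

Swapping `score_pair` for a model-backed scorer is the only change needed to turn this into a real second-stage reranking step.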
-
Hi, I found that for the same checkpoints of IML-ViT, the inference results on CASIA v1 obtained through this IMDB-IML-ViT framework are much lower (~12%) than those computed with the original code ba…
-
Traceback (most recent call last):
  File "/home/chenghaonan/lsl/hallo2/scripts/inference_long.py", line 511, in
    save_path = inference_process(command_line_args)
  File "/home/chenghaonan/lsl/…