-
I've run benchmark_genai.py for CPU, GPU, and NPU on MTL U9; here are the logs:
(env_ov_genai) c:\AIGC\openvino\openvino.genai\samples\python\benchmark_genai>python benchmark_genai.py -m c:\AIGC\openv…
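For reference, a hedged sketch of how the sample is typically driven across the three devices (the model directory is a placeholder, not the reporter's truncated path; `-m` and `-d` are the flags the openvino.genai sample exposes):

```shell
# Run the GenAI benchmark sample against each device in turn.
# <model_dir> is a placeholder for a local OpenVINO IR model directory.
python benchmark_genai.py -m <model_dir> -d CPU
python benchmark_genai.py -m <model_dir> -d GPU
python benchmark_genai.py -m <model_dir> -d NPU
```

This is a CLI fragment for shape only; it is not runnable without a converted model on disk.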
-
### OpenVINO Version
2024.3
### Operating System
Ubuntu 20.04 (LTS)
### Device used for inference
GPU
### Framework
None
### Model used
TinyLlama/TinyLlama-1.1B-Chat-v1.0
### Issue descripti…
-
**Describe the bug**
I built the Java API and used the generated artifacts in another application; however, I got the error below while using the sample `SimpleGenAI` class.
```
Exception in thread "m…
-
### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Gemini now allows a developer to create a co…
-
Excerpts track metadata for `genai`, at levels "none", "some", "most", and "all". Several examples created in 2024 used "some" GenAI,
https://github.com/awsdocs/aws-doc-sdk-examples-tools/blob/mai…
-
### Description
When attempting to run a Whisper model on NPU, an error occurs indicating that the shape is dynamic. This prevents the model from being executed on the NPU. Is there any example to en…
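One common workaround, shown here as an untested sketch (the file name and input shape are assumptions; Whisper's encoder conventionally takes a `[batch, 80, 3000]` mel-spectrogram), is to reshape the dynamic dimensions to static values before compiling for NPU:

```python
import openvino as ov

core = ov.Core()
# Hypothetical path to the exported Whisper encoder IR.
model = core.read_model("whisper_encoder.xml")
# NPU plugins generally require static shapes, so pin the dynamic dims.
model.reshape([1, 80, 3000])  # assumed static mel-feature input shape
compiled = core.compile_model(model, "NPU")
```

The decoder side is harder to make static because the token sequence grows during generation, which is likely why an end-to-end NPU example is being asked for here.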
-
Hello, we're looking at the license and terms of use for WhatsApp Business when using Llama 3.1 to generate images - could you please point us to the right license file / connect us with the Meta Lega…
-
As a user, I would like to be able to interact with a GenAI powered chat like interface, in any topology view (i.e., general topology, incident topology or specific alert topology).
It can use gene…
-
### 🐛 Describe the bug
We are running exercise on the [newly launched benchmarking infra](https://github.com/pytorch/executorch/tree/main/extension/benchmark) with the in-tree enabled models under `e…
-
Dear all,
I failed to run Llama-2-7b-chat-hf on NPU; please give me a hand.
1. I converted the model with the command below and got two models:
a) optimum-cli export openvino --task text-generation -m Meta-…
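Since the reporter's exact command is cut off above, here is the general shape of such an export as a hedged sketch (the model id, weight format, and output directory are illustrative assumptions, not the reporter's values):

```shell
# Export a Hugging Face checkpoint to OpenVINO IR for text generation.
# Model id, weight format, and output directory are placeholders.
optimum-cli export openvino --task text-generation \
    -m meta-llama/Llama-2-7b-chat-hf \
    --weight-format int4 llama2-7b-chat-ov
```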