-
- https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference
- https://pub.dev/packages/mediapipe_genai
- https://github.com/google/flutter-mediapipe/discussions/62
-
Does Energon-AI support this project for inference optimization?
-
After super-resolution processing, the input video's resolution became 810*720, but running the code raises `ZeroDivisionError: division by zero`. The full traceback is shown below; I'd appreciate any help, thanks!!
Traceback (most recent call last):
File "D:\BaiduNetdiskDownload\MuseTalk\DUDU-Lab_MuseTalk_Win…
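A common source of a `ZeroDivisionError` in talking-head pipelines is a divisor that silently becomes zero, e.g. dividing by the number of detected face boxes when detection returned nothing for the re-encoded video. A minimal guard sketch (the function and bbox format are hypothetical illustrations, not MuseTalk's actual API):

```python
def mean_bbox_width(bboxes):
    """Average face-box width; fails loudly instead of ZeroDivisionError.

    `bboxes` is assumed to be a list of (x1, y1, x2, y2) tuples, one per
    frame where a face was found. An empty list is a sign the detector
    failed on the (super-resolved) input, not a math problem.
    """
    if not bboxes:
        raise ValueError(
            "no face detected in any frame -- check the super-resolved "
            "video's resolution/codec before running inference"
        )
    return sum(x2 - x1 for (x1, _y1, x2, _y2) in bboxes) / len(bboxes)
```

A check like this turns a cryptic division-by-zero deep in the pipeline into an actionable message about the input video.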
-
### 🥰 Feature Description
Please consider adding the ability to display the inference speed for each interaction with the AI model.
### 🧐 Proposed Solution
This could be presented in a f…
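One way to implement the requested metric is to wrap whatever streaming generation call the app already makes and report tokens per second. A minimal sketch, assuming a generator-style model interface (`generate_fn` is a placeholder, not the project's real API):

```python
import time


def timed_generate(generate_fn, prompt):
    """Run a token-streaming call and return (text, tokens_per_second).

    `generate_fn(prompt)` is assumed to yield tokens one at a time;
    substitute the application's actual inference call.
    """
    start = time.perf_counter()
    tokens = []
    for tok in generate_fn(prompt):
        tokens.append(tok)
    elapsed = time.perf_counter() - start
    tps = len(tokens) / elapsed if elapsed > 0 else float("inf")
    return "".join(tokens), tps
```

The resulting tokens/sec figure could then be rendered next to each response in the UI.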
-
### Describe the bug
```
(ai) (base) yuki@yuki pho % python tts.py
OMP: Info #276: omp_set_nested routine deprecated, please use omp_set_max_active_levels instead.
> Downloading model to /Users…
-
# Title
Integration of open-source AI models in Galaxy (web-based scientific data analysis platform)
# Description
[Galaxy (Europe)](https://usegalaxy.eu/) is a web-based, open-source data anal…
-
Getting this error when running inference:
(video_retalking) C:\Users\f\ai\video-retalking>python3 inference.py --face examples/face/3.mp4 --audio examples/audio/2.wav --outfile results/3_2.mp4
T…
-
Win10
GPU: RTX 3080
CUDA 12.5
I downloaded the third-party Windows all-in-one package linked in the README. About 500 seconds after clicking Generate, the page reported an error; the console log is below.
```
Starting up, please wait patiently bilibili@十字鱼 https://space.bilibili.com/893892
Already download the model.
Loads checkpoint b…
-
I've noticed that occasionally the agent will generate two distinct responses (LLM inference and TTS audio) for the same user input.
Interestingly, the second LLM inference isn't generated until af…
-
**Describe the bug**
Error during importing the detector model:
```
ERROR burn_import::logger: PANIC => panicked at /home/simon/Data1/GIT/Rust/burn/crates/burn-import/src/onnx/dim_inference.rs…