-
### The bug
Immich is running in Docker, using a Proxmox LXC as the host. Hardware acceleration is turned on and set up correctly. The CPU is an AMD GX-415GA, with a Radeon HD 8330E as the GPU.
Running "t…
mshpp updated
5 months ago
-
Is it due to mel.n_len = 3000 being the maximum for a single inference? If you feed it some of the longer samples that whisper.cpp uses, I presume it's the mel.n_len = 3000 limit, as I know they are much longer.
``…
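For what it's worth, the 3000-frame figure lines up with Whisper's published preprocessing constants (these values come from Whisper's audio pipeline, not from this thread): audio is resampled to 16 kHz and mel frames are taken every 160 samples, so a 30-second window is exactly 3000 frames.

```python
# Sketch of where mel.n_len = 3000 comes from, assuming Whisper's
# standard preprocessing constants (not taken from this discussion).
SAMPLE_RATE = 16000    # Hz; Whisper resamples all input audio to 16 kHz
HOP_LENGTH = 160       # samples between consecutive mel frames
CHUNK_SECONDS = 30     # Whisper processes audio in 30-second windows

frames_per_second = SAMPLE_RATE // HOP_LENGTH        # 160-sample hop -> 100 frames/s
max_frames = frames_per_second * CHUNK_SECONDS       # 100 * 30 = 3000

print(frames_per_second, max_frames)
```

Anything longer than 30 seconds would therefore have to be split into multiple windows before inference.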
-
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
2.16.1
### Custom code
No
### OS platform and dist…
-
I am currently carrying out performance tests with TF Lite and pytorch_lite in Flutter (I should be able to give more info in the future, if anyone is interested).
My question is: does pytorch_lite…
-
Enhance OAuth 2 recommendations based on evaluating the following RFC: https://tools.ietf.org/html/rfc8252
-
## Description
Trying to perform predictions using `intfloat/multilingual-e5-small` fails on a machine with a GPU. This used to work in DJL 0.26.0 using PY_TORCH 2.0.1 but now fails on 0.28.0 (and p…
-
### The bug
I just updated to 1.120.0 and now the immich-server container restarts every 60 seconds. How can I check what's going wrong?
### The OS that Immich Server is running on
Debian 12
### …
-
**Describe the issue**
RTMdet-ins inference with an fp16 model is slower than the documentation says. In my case I only get `Throughput 104.89 qps`, which does not match the documented `1.93ms` latency.
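To quantify the gap (my own arithmetic, not from the MMDeploy docs): on a single stream, a per-image latency of 1.93 ms would imply roughly 518 qps, while the observed 104.89 qps corresponds to about 9.53 ms per image, so the two numbers are off by roughly 5x.

```python
# Converting between single-stream throughput (qps) and per-image latency (ms).
observed_qps = 104.89
implied_latency_ms = 1000.0 / observed_qps        # ~9.53 ms per image

documented_latency_ms = 1.93
implied_qps = 1000.0 / documented_latency_ms      # ~518 qps if the docs held

print(round(implied_latency_ms, 2), round(implied_qps, 1))
```

Note this equivalence only holds for batch size 1 with no pipelining; with batching or concurrent streams, throughput can legitimately exceed 1/latency.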
**Reproduction**
1. What co…
-
Hi,
I would like to replace the pose estimator with a lightweight implementation of OpenPose, using CPU-only inference, since my computer does not support CUDA.
However, when I try to compile t…
-
### 🐛 Describe the bug
Currently I'm trying to test the LLaMA 3.2 3B Instruct Model as you guided,
but I faced some issues during .pte generation for the LLaMA 3.2 3B Instruct Model with QNN on the device sid…