Jeffser / Alpaca

An Ollama client made with GTK4 and Adwaita
https://jeffser.com/alpaca
GNU General Public License v3.0

YouTube transcription doesn't work #217

Open · loulou64490 opened this issue 3 months ago

loulou64490 commented 3 months ago

Describe the bug: Every time I paste a video, Alpaca tells me the video doesn't have any transcription, even though the video does have one.

Expected behavior: Alpaca retrieves the transcription.

Screenshots: [Screenshot from 2024-08-10 18-33-28] [Screenshot from 2024-08-10 18-35-59]

Additional context: ligma

Jeffser commented 3 months ago

YouTube can be weird from time to time and not report transcriptions through the REST API; I will see what I can do.
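
(For context on that flakiness: YouTube's semi-documented timedtext endpoint sometimes returns an empty body for a video that clearly has captions. A minimal probe, assuming the requests library; this is an illustration of the failure mode, not necessarily what Alpaca does:)

import requests

def has_captions(video_id: str, lang: str = "en") -> bool:
    # The timedtext endpoint returns caption XML when captions are
    # reported, or an empty body when YouTube declines to report them.
    response = requests.get(
        "https://www.youtube.com/api/timedtext",
        params={"v": video_id, "lang": lang},
        timeout=10,
    )
    return response.ok and bool(response.text.strip())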

Jeffser commented 1 month ago

Hi, I've changed how YouTube attachments work a couple of times. Could you test whether it works with the videos you want to use? I was going to try the video you mentioned in the issue, but it seems it got deleted.

loulou64490 commented 1 month ago

I was using a random video; none of the videos I've tried work.

Jeffser commented 1 month ago

Hi, I rewrote the transcript system with a different library:

https://github.com/Jeffser/Alpaca/commit/218c10f4ad25136bc8718afef1a892533f38a05c

This will hopefully work!
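
(For reference, a minimal sketch of what fetching a transcript with the youtube-transcript-api library looks like, assuming that is the library the commit switched to; the helper name is illustrative, and Alpaca's actual code may differ:)

from youtube_transcript_api import (
    NoTranscriptFound,
    TranscriptsDisabled,
    YouTubeTranscriptApi,
)

def get_transcript_text(video_id, languages=("en",)):
    # get_transcript returns a list of {'text', 'start', 'duration'} entries.
    try:
        entries = YouTubeTranscriptApi.get_transcript(
            video_id, languages=list(languages)
        )
    except (NoTranscriptFound, TranscriptsDisabled):
        return None
    return " ".join(entry["text"] for entry in entries)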

loulou64490 commented 1 month ago

bro

[Screenshot from 2024-10-16 23-58-47]

INFO    [main.py | main] Alpaca version: 2.7.0
INFO    [connection_handler.py | start] Starting Alpaca's Ollama instance...
INFO    [connection_handler.py | start] Started Alpaca's Ollama instance
2024/10/16 23:58:35 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11435 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/loulou/.var/app/com.jeffser.Alpaca/data/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-10-16T23:58:35.987+02:00 level=INFO source=images.go:753 msg="total blobs: 5"
time=2024-10-16T23:58:35.988+02:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-10-16T23:58:35.988+02:00 level=INFO source=routes.go:1200 msg="Listening on 127.0.0.1:11435 (version 0.3.12)"
time=2024-10-16T23:58:35.988+02:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/home/loulou/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama3128064882/runners
time=2024-10-16T23:58:35.989+02:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu payload=linux/amd64/cpu/libggml.so.gz
time=2024-10-16T23:58:35.989+02:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu payload=linux/amd64/cpu/libllama.so.gz
time=2024-10-16T23:58:35.989+02:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu payload=linux/amd64/cpu/ollama_llama_server.gz
time=2024-10-16T23:58:35.989+02:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx payload=linux/amd64/cpu_avx/libggml.so.gz
time=2024-10-16T23:58:35.989+02:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx payload=linux/amd64/cpu_avx/libllama.so.gz
time=2024-10-16T23:58:35.989+02:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx payload=linux/amd64/cpu_avx/ollama_llama_server.gz
time=2024-10-16T23:58:35.989+02:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx2 payload=linux/amd64/cpu_avx2/libggml.so.gz
time=2024-10-16T23:58:35.989+02:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx2 payload=linux/amd64/cpu_avx2/libllama.so.gz
time=2024-10-16T23:58:35.989+02:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx2 payload=linux/amd64/cpu_avx2/ollama_llama_server.gz
time=2024-10-16T23:58:35.989+02:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v11 payload=linux/amd64/cuda_v11/libggml.so.gz
time=2024-10-16T23:58:35.989+02:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v11 payload=linux/amd64/cuda_v11/libllama.so.gz
time=2024-10-16T23:58:35.989+02:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v11 payload=linux/amd64/cuda_v11/ollama_llama_server.gz
time=2024-10-16T23:58:35.989+02:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v12 payload=linux/amd64/cuda_v12/libggml.so.gz
time=2024-10-16T23:58:35.989+02:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v12 payload=linux/amd64/cuda_v12/libllama.so.gz
time=2024-10-16T23:58:35.989+02:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v12 payload=linux/amd64/cuda_v12/ollama_llama_server.gz
time=2024-10-16T23:58:35.989+02:00 level=DEBUG source=common.go:168 msg=extracting runner=rocm_v60102 payload=linux/amd64/rocm_v60102/libggml.so.gz
time=2024-10-16T23:58:35.989+02:00 level=DEBUG source=common.go:168 msg=extracting runner=rocm_v60102 payload=linux/amd64/rocm_v60102/libllama.so.gz
time=2024-10-16T23:58:35.989+02:00 level=DEBUG source=common.go:168 msg=extracting runner=rocm_v60102 payload=linux/amd64/rocm_v60102/ollama_llama_server.gz
INFO    [connection_handler.py | start] 
INFO    [connection_handler.py | request] GET : http://127.0.0.1:11435/api/tags
ERROR   [window.py | cb_text_received] HTTP Error 400: Bad Request
INFO    [window.py | show_toast] Error attaching video, please try again
time=2024-10-16T23:58:46.880+02:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/home/loulou/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama3128064882/runners/cpu/ollama_llama_server
time=2024-10-16T23:58:46.880+02:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/home/loulou/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama3128064882/runners/cpu_avx/ollama_llama_server
time=2024-10-16T23:58:46.880+02:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/home/loulou/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama3128064882/runners/cpu_avx2/ollama_llama_server
time=2024-10-16T23:58:46.880+02:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/home/loulou/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama3128064882/runners/cuda_v11/ollama_llama_server
time=2024-10-16T23:58:46.880+02:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/home/loulou/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama3128064882/runners/cuda_v12/ollama_llama_server
time=2024-10-16T23:58:46.880+02:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/home/loulou/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama3128064882/runners/rocm_v60102/ollama_llama_server
time=2024-10-16T23:58:46.880+02:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cuda_v11 cuda_v12 rocm_v60102 cpu]"
time=2024-10-16T23:58:46.880+02:00 level=DEBUG source=common.go:50 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-10-16T23:58:46.880+02:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-10-16T23:58:46.880+02:00 level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
time=2024-10-16T23:58:46.880+02:00 level=DEBUG source=gpu.go:86 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-10-16T23:58:46.880+02:00 level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libcuda.so*
time=2024-10-16T23:58:46.880+02:00 level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/app/lib/ollama/libcuda.so* /app/lib/libcuda.so* /usr/lib/x86_64-linux-gnu/GL/default/lib/libcuda.so* /usr/lib/x86_64-linux-gnu/openh264/extra/libcuda.so* /usr/lib/x86_64-linux-gnu/openh264/extra/libcuda.so* /usr/lib/sdk/llvm15/lib/libcuda.so* /usr/lib/x86_64-linux-gnu/GL/default/lib/libcuda.so* /usr/lib/ollama/libcuda.so* /app/plugins/AMD/lib/ollama/libcuda.so* /usr/lib/x86_64-linux-gnu/GL/default/lib/libcuda.so* /usr/lib/x86_64-linux-gnu/openh264/extra/libcuda.so* /usr/lib/x86_64-linux-gnu/GL/default/lib/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2024-10-16T23:58:46.885+02:00 level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths=[]
time=2024-10-16T23:58:46.885+02:00 level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libcudart.so*
time=2024-10-16T23:58:46.885+02:00 level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/app/lib/ollama/libcudart.so* /app/lib/libcudart.so* /usr/lib/x86_64-linux-gnu/GL/default/lib/libcudart.so* /usr/lib/x86_64-linux-gnu/openh264/extra/libcudart.so* /usr/lib/x86_64-linux-gnu/openh264/extra/libcudart.so* /usr/lib/sdk/llvm15/lib/libcudart.so* /usr/lib/x86_64-linux-gnu/GL/default/lib/libcudart.so* /usr/lib/ollama/libcudart.so* /app/plugins/AMD/lib/ollama/libcudart.so* /usr/lib/x86_64-linux-gnu/GL/default/lib/libcudart.so* /usr/lib/x86_64-linux-gnu/openh264/extra/libcudart.so* /usr/lib/x86_64-linux-gnu/GL/default/lib/libcudart.so* /app/lib/ollama/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
time=2024-10-16T23:58:46.888+02:00 level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths="[/app/lib/ollama/libcudart.so.12.4.99 /app/lib/ollama/libcudart.so.11.3.109]"
cudaSetDevice err: 35
time=2024-10-16T23:58:46.888+02:00 level=DEBUG source=gpu.go:537 msg="Unable to load cudart" library=/app/lib/ollama/libcudart.so.12.4.99 error="your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
cudaSetDevice err: 35
time=2024-10-16T23:58:46.888+02:00 level=DEBUG source=gpu.go:537 msg="Unable to load cudart" library=/app/lib/ollama/libcudart.so.11.3.109 error="your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
time=2024-10-16T23:58:46.888+02:00 level=DEBUG source=amd_linux.go:376 msg="amdgpu driver not detected /sys/module/amdgpu"
time=2024-10-16T23:58:46.888+02:00 level=INFO source=gpu.go:347 msg="no compatible GPUs were discovered"
time=2024-10-16T23:58:46.888+02:00 level=INFO source=types.go:107 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="31.2 GiB" available="23.9 GiB"
[GIN] 2024/10/16 - 23:58:46 | 200 |     588.887µs |       127.0.0.1 | GET      "/api/tags"
INFO    [connection_handler.py | request] POST : http://127.0.0.1:11435/api/show
[GIN] 2024/10/16 - 23:58:46 | 200 |   77.689539ms |       127.0.0.1 | POST     "/api/show"
Jeffser commented 1 month ago

Yeah, that's just YouTube blocking the request; try again (yes, I hate it too, and I can't automate the function).
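
(Since the blocking is intermittent, retrying with a short backoff often gets around it. A hedged sketch; fetch_transcript_with_retries is a hypothetical helper, not Alpaca code:)

import time
from youtube_transcript_api import YouTubeTranscriptApi

def fetch_transcript_with_retries(video_id, attempts=3, delay=2.0):
    # Intermittent blocks (like the HTTP 400 in the log above) often
    # clear on retry, so wait a little longer after each failed attempt.
    last_error = None
    for attempt in range(attempts):
        try:
            return YouTubeTranscriptApi.get_transcript(video_id)
        except Exception as error:
            last_error = error
            time.sleep(delay * (attempt + 1))
    raise last_error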

loulou64490 commented 1 month ago

Maybe it's my school network...