mudler / LocalAI

:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed, P2P inference
https://localai.io
MIT License

Strange partial GPU support with localai/localai:latest-gpu-hipblas #3709

Closed freddybc closed 1 month ago

freddybc commented 1 month ago

LocalAI version: v2.21.1 (33b2d38dd0198d78dbc26aa020acfb6ff4c4048c) localai/localai:latest-gpu-hipblas

Environment, CPU architecture, OS, and Version: Docker version 27.3.1, build ce12230, running localai:latest-gpu-hipblas on Ubuntu 22.04.5 LTS (kernel 6.11.0-x64v4-xanmod), AMD EPYC 9000 series CPU, AMD Radeon RX 7800 XT

Describe the bug: Strange behavior with the release image (localai/localai:latest-gpu-hipblas = 2.21.1): there is only partial GPU functionality when using the AIO-defined models and specifying the llama-cpp-grpc backend.

BUT nothing based on llama-cpp runs on the GPU. It just hangs at 100% CPU utilization while loading the model.

gpt-4 (text-to-text.yaml) with backend=llama-cpp-grpc

10:07PM INF Loading model 'Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf' with backend llama-cpp-fallback
10:07PM DBG Loading model in memory from file: /build/models/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf
10:07PM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:43109): stderr llm_load_print_meta: max token length = 256
.... CPU 100%
The next step would be loading the model, which it never reaches. It just stays at 100% CPU forever.
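What triggers the hang is simply an ordinary chat-completion request against LocalAI's OpenAI-compatible API; for completeness, something along these lines (the prompt is arbitrary):

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}'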

The same behavior is observed for all of
 - llama-cpp-fallback
 - llama-cpp-grpc
 - llama-cpp-hipblas

which is to be expected since it is the same file. However, it is odd that llama-cpp-fallback locks up in exactly the same way: it should run on the CPU, yet it appears to be trying to use the GPU.
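Switching between these variants amounts to changing the backend field in the model's YAML config in the mounted models directory; roughly like this (a sketch using LocalAI's model config fields; the actual AIO-provided text-to-text.yaml may contain more options):

cat > /mnt/raid6/local-ai/models/gpt-4.yaml <<'EOF'
name: gpt-4
backend: llama-cpp-hipblas   # also tried: llama-cpp-grpc, llama-cpp-fallback
parameters:
  model: Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf
EOF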

Rebuilding local-ai does not solve it either. However, building the llama-cpp-fallback backend without BUILD_TYPE=hipblas, i.e., with make backend-assets/grpc/llama-cpp-fallback, does at least enable the CPU version of llama-cpp to run.
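Roughly, the CPU-only rebuild that does work looks like this (source checkout assumed; exact targets and flags may vary between versions):

git clone https://github.com/mudler/LocalAI && cd LocalAI
# build only the CPU llama.cpp backend, deliberately without BUILD_TYPE=hipblas
make backend-assets/grpc/llama-cpp-fallback
# the hipblas variant that hangs would instead be built with
# BUILD_TYPE=hipblas make backend-assets/grpc/llama-cpp-hipblas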

To Reproduce

There also seems to be a bug in the Docker image: it presents the user with an error at startup unless the runtime ROCm LLVM OpenMP libraries are added to the library path (LD_LIBRARY_PATH=/opt/rocm/lib/llvm/lib), since the openmp-extras-runtime package doesn't set up the ld paths:
--> ERROR libomp.so not found.

Fixed start command:
docker pull localai/localai:latest-gpu-hipblas
docker run -ti --rm \
  --privileged \
  -p 8080:8080 \
  -e DEBUG=true \
  -e LD_LIBRARY_PATH=/opt/rocm/lib/llvm/lib \
  --security-opt seccomp=unconfined \
  --device /dev/dri \
  --device /dev/kfd \
  --group-add video \
  -v /mnt/raid6/local-ai/models:/build/models \
  localai/localai:latest-gpu-hipblas
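An alternative to exporting LD_LIBRARY_PATH would be registering the path with the dynamic linker inside the image; standard ldconfig mechanics, nothing LocalAI-specific, e.g. in a derived image or an entrypoint wrapper:

echo /opt/rocm/lib/llvm/lib > /etc/ld.so.conf.d/rocm-llvm.conf
ldconfig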

Expected behavior: Since gfx1100/gfx1101 is supported, llama-cpp should run on the GPU. The GPU is definitely working: the entire HIP conformance test can be run manually from inside the image, and whisper, embeddings, and stablediffusion all operate successfully on it.
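For what it's worth, GPU visibility inside the container can also be double-checked with the ROCm tools (assuming they are present in the hipblas image; the container name is a placeholder):

docker exec -it <container> rocminfo | grep -i gfx   # should list gfx1101 for the RX 7800 XT
docker exec -it <container> rocm-smi                 # utilization / VRAM overview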

Logs

12:34AM INF env file found, loading environment variables from file envFile=.env
12:34AM DBG Setting logging to debug
12:34AM INF Starting LocalAI using 64 threads, with models path: /build/models
12:34AM INF LocalAI version: v2.21.1 (33b2d38dd0198d78dbc26aa020acfb6ff4c4048c)
12:34AM DBG CPU capabilities: [3dnowprefetch abm adx aes amd_lbr_v2 amd_ppin aperfmperf apic arat avic avx avx2 avx512_bf16 avx512_bitalg avx512_vbmi2 avx512_vnni avx512_vpopcntdq avx512bw avx512cd avx512dq avx512f avx512ifma avx512vbmi avx512vl bmi1 bmi2 bpext cat_l3 cdp_l3 clflush clflushopt clwb clzero cmov cmp_legacy constant_tsc cpb cppc cpuid cqm cqm_llc cqm_mbm_local cqm_mbm_total cqm_occup_llc cr8_legacy cx16 cx8 de debug_swap decodeassists erms extapic extd_apicid f16c flush_l1d flushbyasid fma fpu fsgsbase fsrm fxsr fxsr_opt gfni ht hw_pstate ibpb ibrs ibrs_enhanced ibs invpcid irperf la57 lahf_lm lbrv lm mba mca mce misalignsse mmx mmxext monitor movbe msr mtrr mwaitx nonstop_tsc nopl npt nrip_save nx ospke osvw overflow_recov pae pat pausefilter pcid pclmulqdq pdpe1gb perfctr_core perfctr_llc perfctr_nb perfmon_v2 pfthreshold pge pku pni popcnt pse pse36 rapl rdpid rdpru rdrand rdseed rdt_a rdtscp rep_good sep sha_ni skinit smap smca smep ssbd sse sse2 sse4_1 sse4_2 sse4a ssse3 stibp succor svm svm_lock syscall tce topoext tsc tsc_scale umip user_shstk v_spec_ctrl v_vmsave_vmload vaes vgif vmcb_clean vme vmmcall vnmi vpclmulqdq wbnoinvd wdt x2apic x2avic xgetbv1 xsave xsavec xsaveerptr xsaveopt xsaves xtopology]
12:34AM DBG GPU count: 2
12:34AM DBG GPU: card #0  [affined to NUMA node 0]@0000:c6:00.0 -> driver: 'ast' class: 'Display controller' vendor: 'ASPEED Technology, Inc.' product: 'ASPEED Graphics Family'
12:34AM DBG GPU: card #1  [affined to NUMA node 0]@0000:03:00.0 -> driver: 'amdgpu' class: 'Display controller' vendor: 'Advanced Micro Devices, Inc. [AMD/ATI]' product: 'unknown'

Chat with gpt-4

12:37AM INF Loading model 'Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf' with backend llama-cpp-hipblas
12:37AM DBG Loading model in memory from file: /build/models/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf
12:37AM DBG Loading Model Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf with gRPC (file: /build/models/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf) (backend: llama-cpp-hipblas): {backendString:llama-cpp-hipblas model:Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf threads:64 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc0002d0248 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
12:37AM DBG Sending chunk: {"created":1727743028,"object":"chat.completion.chunk","id":"d9f054b3-a7da-4182-a029-50adc06cfb91","model":"gpt-4","choices":[{"index":0,"finish_reason":"","delta":{"role":"assistant","content":""}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

12:37AM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/llama-cpp-hipblas
12:37AM DBG GRPC Service for Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf will be running at: '127.0.0.1:45961'
12:37AM DBG GRPC Service state dir: /tmp/go-processmanager4242018130
12:37AM DBG GRPC Service Started
12:37AM DBG Wait for the service to start up
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr I0000 00:00:1727743028.737585   46198 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache, work_serializer_dispatch
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr I0000 00:00:1727743028.737930   46198 ev_epoll1_linux.cc:125] grpc epoll fd: 3
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr I0000 00:00:1727743028.738072   46198 server_builder.cc:392] Synchronous server. Num CQs: 1, Min pollers: 1, Max Pollers: 2, CQ timeout (msec): 10000
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr I0000 00:00:1727743028.739495   46198 ev_epoll1_linux.cc:359] grpc epoll fd: 5
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr I0000 00:00:1727743028.739835   46198 tcp_socket_utils.cc:634] TCP_USER_TIMEOUT is available. TCP_USER_TIMEOUT will be used thereafter
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stdout Server listening on 127.0.0.1:45961
12:37AM DBG GRPC Service Ready
12:37AM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf ContextSize:8192 Seed:936280953 NBatch:512 F16Memory:false MLock:false MMap:true VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:99999999 MainGPU: TensorSplit: Threads:64 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/build/models/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 ControlNet: Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 Type: FlashAttention:false NoKVOffload:false}
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /build/models/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf (version GGUF V3 (latest))
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv   0:                       general.architecture str              = llama
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv   1:                               general.name str              = Hermes-2-Pro-Llama-3-8B
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv   2:                          llama.block_count u32              = 32
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv  10:                          general.file_type u32              = 15
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128288
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv  14:                         tokenizer.ggml.pre str              = llama-bpe
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,128288]  = ["!", "\"", "#", "$", "%", "&", "'", ...
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,128288]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv  17:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 128000
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 128003
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 128001
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv  21:                    tokenizer.chat_template str              = {{bos_token}}{% for message in messag...
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - kv  22:               general.quantization_version u32              = 2
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - type  f32:   65 tensors
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - type q4_K:  193 tensors
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llama_model_loader: - type q6_K:   33 tensors
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_vocab: special tokens cache size = 288
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_vocab: token to piece cache size = 0.8007 MB
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: format           = GGUF V3 (latest)
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: arch             = llama
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: vocab type       = BPE
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: n_vocab          = 128288
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: n_merges         = 280147
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: vocab_only       = 0
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: n_ctx_train      = 8192
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: n_embd           = 4096
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: n_layer          = 32
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: n_head           = 32
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: n_head_kv        = 8
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: n_rot            = 128
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: n_swa            = 0
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: n_embd_head_k    = 128
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: n_embd_head_v    = 128
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: n_gqa            = 4
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: n_embd_k_gqa     = 1024
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: n_embd_v_gqa     = 1024
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: f_norm_eps       = 0.0e+00
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: f_clamp_kqv      = 0.0e+00
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: f_max_alibi_bias = 0.0e+00
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: f_logit_scale    = 0.0e+00
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: n_ff             = 14336
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: n_expert         = 0
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: n_expert_used    = 0
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: causal attn      = 1
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: pooling type     = 0
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: rope type        = 0
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: rope scaling     = linear
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: freq_base_train  = 500000.0
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: freq_scale_train = 1
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: n_ctx_orig_yarn  = 8192
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: rope_finetuned   = unknown
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: ssm_d_conv       = 0
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: ssm_d_inner      = 0
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: ssm_d_state      = 0
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: ssm_dt_rank      = 0
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: ssm_dt_b_c_rms   = 0
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: model type       = 8B
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: model ftype      = Q4_K - Medium
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: model params     = 8.03 B
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: model size       = 4.58 GiB (4.89 BPW) 
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: general.name     = Hermes-2-Pro-Llama-3-8B
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: EOS token        = 128003 '<|im_end|>'
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: PAD token        = 128001 '<|end_of_text|>'
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: LF token         = 128 'Ä'
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: EOT token        = 128003 '<|im_end|>'
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: EOG token        = 128003 '<|im_end|>'
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: EOG token        = 128256 '<|eot_id|>'
12:37AM DBG GRPC(Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf-127.0.0.1:45961): stderr llm_load_print_meta: max token length = 256

CPU 100%..... forever.

Additional context

Has anyone else seen something similar?

freddybc commented 1 month ago

The mystery is solved!

Docker does not seem to play nice with kernel 6.11 when it comes to dynamic power management.

Booting with the kernel flag amdgpu.dpm=0 seems to solve the issue for llama-cpp.
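For anyone hitting the same thing: on Ubuntu the flag can be made persistent via GRUB (standard procedure, not LocalAI-specific):

# /etc/default/grub (keep your existing defaults, just append amdgpu.dpm=0)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amdgpu.dpm=0"

# then apply and reboot
sudo update-grub && sudo reboot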