Open mokeyish opened 10 hours ago
This is likely due to an overflow in the positions embeddings when exceeding the maximum sequence length supported by the model. Limiting the context size to the max sequence length supported by the model (which in this case seems to be 1024) should avoid the crash.
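Following that suggestion, a restart with the context size (`-c`) and micro-batch size (`-ub`) capped at the model's maximum sequence length should avoid the overflow. This is a sketch of the adjusted command from the report below; the 1024 value is an assumption taken from the comment above, not verified against the model's metadata, and the `--samplers` list is quoted so the `;` is not interpreted by the shell:

```shell
# Assumes n_ctx_train = 1024 for this model; check the value that
# llama-server prints at startup before relying on it.
llama-server -m ./bge-large-zh-v1.5 --port 3358 -a emb@bge-large-zh-v1.5 \
    -ngl 100 -c 1024 -ub 1024 --samplers 'temperature;top_p' \
    --embeddings --pooling cls
```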
What happened?
After starting the server with the following command, it occasionally crashes while running.
llama-server -m ./bge-large-zh-v1.5 --port 3358 -a emb@bge-large-zh-v1.5 -ngl 100 -c 8192 --samplers temperature;top_p --embeddings -ub 8192 --pooling cls
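If the server must keep a large `-c`, the crash can also be sidestepped on the client side by truncating inputs before they reach the embedding endpoint, so position indices never exceed what the model supports. A minimal sketch of such a guard, assuming a maximum sequence length of 1024 as suggested in the comment above (`truncate_tokens` is a hypothetical helper, not part of llama.cpp):

```python
# Hypothetical client-side guard: clip token sequences to the model's
# training context before sending them to the embedding endpoint.

MAX_SEQ_LEN = 1024  # assumed maximum sequence length for this model


def truncate_tokens(tokens, max_len=MAX_SEQ_LEN):
    """Return at most max_len tokens, dropping anything beyond the limit."""
    if len(tokens) <= max_len:
        return tokens
    return tokens[:max_len]
```

With this in place, an over-long document is silently clipped to the supported length instead of driving the position embeddings out of range on the server.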
Name and Version
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 4 CUDA devices:
  Device 0: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: yes
  Device 1: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: yes
  Device 2: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: yes
  Device 3: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: yes
version: 3945 (45f09764)
built with cc (Debian 10.2.1-6) 10.2.1 20210110 for x86_64-linux-gnu
What operating system are you seeing the problem on?
Linux
Relevant log output