oobabooga / text-generation-webui

A Gradio web UI for Large Language Models.
GNU Affero General Public License v3.0

Illegal instruction (core dumped) after update #6210

Open NXTler opened 3 weeks ago

NXTler commented 3 weeks ago

Describe the bug

Greetings. After I updated my system today, I couldn't start text-generation-webui anymore. The only message I get is Illegal instruction (core dumped) right after startup. I'm running on CPU only, two E5-2760 v2 to be precise.

Is there an existing issue for this?

Reproduction

I'm not sure if it's easily reproducible. This might be a problem with my old CPUs not supporting AVX2, similar to #4053.
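A quick way to check this suspicion (my own suggestion, not from the thread) is to look for the `avx2` flag in `/proc/cpuinfo`, since prebuilt llama.cpp wheels are typically compiled with AVX2 enabled:

```shell
# Ivy Bridge Xeons (E5 v2) support AVX but not AVX2, so a llama.cpp wheel
# compiled with AVX2 instructions aborts with SIGILL on these CPUs.
if grep -qw avx2 /proc/cpuinfo; then
    echo "AVX2 supported"
else
    echo "AVX2 missing - an AVX2-built llama.cpp wheel will crash here"
fi
```

If the flag is absent, the `Illegal instruction` happens the moment the wheel's native library executes its first AVX2 instruction, which matches the log below: the crash comes right after the llama.cpp weights are detected.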

Screenshot

No response

Logs

22:28:48-451673 INFO     Starting Text generation web UI                                                                                                      
22:28:48-456501 WARNING                                                                                                                                       
                         You are potentially exposing the web UI to the entire internet without any access password.                                          
                         You can create one with the "--gradio-auth" flag like this:                                                                          

                         --gradio-auth username:password                                                                                                      

                         Make sure to replace username:password with your own.                                                                                
22:28:48-458632 INFO     Loading settings from "Litrionite.yaml"                                                                                              
22:28:48-526748 INFO     Loading "dolphin-2.7-Q5.gguf"                                                                                                        
22:28:48-581421 INFO     llama.cpp weights detected: "models/dolphin-2.7-Q5.gguf"                                                                             
Illegal instruction (core dumped)

System Info

I'm running Linux Mint with the latest updates and Python 3.10.12; all requirements are reported as satisfied.

9600- commented 3 weeks ago

Adding my experience here: I'm having the same issue after updating to 1.9.

System is a dual-socket E5-2697 v2, 256GB DDR3, 4x P40, 1x 3090, running driver 550.90.07 on Ubuntu 22.04.4.

NVIDIA-SMI version  : 550.90.07
NVML version        : 550.90
DRIVER version      : 550.90.07
CUDA Version        : 12.4
processor   : 47
vendor_id   : GenuineIntel
cpu family  : 6
model       : 62
model name  : Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz
stepping    : 4
microcode   : 0x42e
cpu MHz     : 1200.000
cache size  : 30720 KB
physical id : 1
siblings    : 24
core id     : 13
cpu cores   : 12
apicid      : 59
initial apicid  : 59
fpu     : yes
fpu_exception   : yes
cpuid level : 13
wp      : yes
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm cpuid_fault epb pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts md_clear flush_l1d
vmx flags   : vnmi preemption_timer posted_intr invvpid ept_x_only ept_1gb flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid ple
bugs        : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown
bogomips    : 5404.81
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:
9600- commented 3 weeks ago

Confirmed the rollback to llama_cpp_python_cuda-0.2.79 worked.
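For anyone wanting to try the same rollback, a sketch of the commands, assuming you enter the webui's bundled environment first; note that the `llama_cpp_python_cuda` wheels come from the project's own wheel index rather than PyPI, so pip may need the index URL used in the webui's requirements file:

```shell
# Enter the webui's bundled environment first (./cmd_linux.sh), then
# replace the current wheel with the previous known-good version.
pip uninstall -y llama_cpp_python_cuda
pip install llama_cpp_python_cuda==0.2.79
```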

9600- commented 3 weeks ago

It’s possible for me to build llama_cpp_python_cuda-0.2.81 using CMAKE_ARGS="-DLLAVA_BUILD=OFF" and my backend of choice. However, I still have issues loading models.
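For reference, a sketch of that build incantation; the `-DLLAVA_BUILD=OFF` part is from this thread, while the backend flag and pip options below are only an example of how a source build is normally triggered, not the exact command used:

```shell
# Build llama-cpp-python from source instead of using the prebuilt wheel.
# -DLLAVA_BUILD=OFF is the workaround from this thread; the CUDA backend
# flag is an example only - substitute your backend of choice.
CMAKE_ARGS="-DLLAVA_BUILD=OFF -DGGML_CUDA=on" \
pip install llama-cpp-python==0.2.81 --force-reinstall --no-cache-dir
```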

This seems to be related to two issues in llama_cpp_python.

9600- commented 2 weeks ago

Issue appears to have been resolved with the latest commits.

Sobhanysiamak commented 2 weeks ago

The issue appears to have been resolved with the latest commits.