PygmalionAI / aphrodite-engine

PygmalionAI's large-scale inference engine
https://pygmalion.chat
GNU Affero General Public License v3.0

[Bug]: Cannot load Mixtral GGUF model? #482

Open · Nero10578 opened 3 months ago

Nero10578 commented 3 months ago

Your current environment

Collecting environment information...
PyTorch version: 2.3.0
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (conda-forge gcc 11.3.0-19) 11.3.0
Clang version: Could not collect 
CMake version: version 3.29.3
Libc version: glibc-2.35
Python version: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:36:13) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.5.0-35-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: Tesla P40
GPU 1: Tesla P40

Nvidia driver version: 545.29.06
cuDNN version: Could not collect 
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      46 bits physical, 48 bits virtual
Byte Order:                         Little Endian
CPU(s):                             12
On-line CPU(s) list:                0-11
Vendor ID:                          GenuineIntel
Model name:                         Intel(R) Xeon(R) W-2135 CPU @ 3.70GHz
CPU family:                         6
Model:                              85
Thread(s) per core:                 2
Core(s) per socket:                 6
Socket(s):                          1
Stepping:                           4
CPU max MHz:                        4500.0000
CPU min MHz:                        1200.0000
BogoMIPS:                           7399.70
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi md_clear flush_l1d arch_capabilities
Virtualization:                     VT-x
L1d cache:                          192 KiB (6 instances)
L1i cache:                          192 KiB (6 instances)
L2 cache:                           6 MiB (6 instances)
L3 cache:                           8.3 MiB (1 instance)
NUMA node(s):                       1
NUMA node0 CPU(s):                  0-11
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit:        KVM: Mitigation: VMX disabled
Vulnerability L1tf:                 Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds:                  Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown:             Mitigation; PTI
Vulnerability Mmio stale data:      Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed:             Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.3.0
[pip3] triton==2.3.0
[conda] blas                      2.16                        mkl    conda-forge
[conda] libblas                   3.8.0                    16_mkl    conda-forge
[conda] libcblas                  3.8.0                    16_mkl    conda-forge
[conda] liblapack                 3.8.0                    16_mkl    conda-forge
[conda] liblapacke                3.8.0                    16_mkl    conda-forge
[conda] mkl                       2020.2                      256  
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] pytorch                   2.3.0           py3.11_cuda12.1_cudnn8.9.2_0    pytorch
[conda] pytorch-cuda              12.1                 ha16c6d3_5    pytorch
[conda] pytorch-mutex             1.0                        cuda    pytorch
[conda] torchtriton               2.3.0                     py311    pytorch
ROCM Version: Could not collect 
Aphrodite Version: 0.5.3
Aphrodite Build Flags:
CUDA Archs: Not Set; ROCm: Disabled

🐛 Describe the bug

It seems like it's saying Mixtral isn't supported? Or is it just not supported for GGUF?

INFO:     Extracting config from GGUF...
WARNING:  gguf quantization is not fully optimized yet. The speed can be slower than non-quantized models.
INFO:     Using fp8 data type to store kv cache. It reduces the GPU memory footprint and boosts the performance. But it may 
cause slight accuracy drop without scaling factors. FP8_E5M2 (without scaling) is only supported on cuda version greater than 
11.8. On ROCm (AMD GPU), FP8_E4M3 is instead supported for common inference criteria.
2024-05-25 20:40:25,511 INFO worker.py:1749 -- Started a local Ray instance.
INFO:     Initializing the Aphrodite Engine (v0.5.3) with the following config:
INFO:     Model = '/home/owen/models/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf'
INFO:     Speculative Config = None
INFO:     DataType = torch.float16
INFO:     Model Load Format = auto
INFO:     Number of GPUs = 2
INFO:     Disable Custom All-Reduce = False
INFO:     Quantization Format = gguf
INFO:     Context Length = 8192
INFO:     Enforce Eager Mode = True
INFO:     KV Cache Data Type = fp8
INFO:     KV Cache Params Path = None
INFO:     Device = cuda
INFO:     Guided Decoding Backend = DecodingConfig(guided_decoding_backend='outlines')
INFO:     Converting tokenizer from GGUF...
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
INFO:     Cannot use FlashAttention backend for Volta and Turing GPUs.
INFO:     Using XFormers backend.
(RayWorkerAphrodite pid=28850) INFO:     Cannot use FlashAttention backend for Volta and Turing GPUs.
(RayWorkerAphrodite pid=28850) INFO:     Using XFormers backend.
INFO:     Aphrodite is using nccl==2.21.5
(RayWorkerAphrodite pid=28850) INFO:     Aphrodite is using nccl==2.21.5
INFO:     NVLink detection failed with message "Not Supported". This is normal if your machine has no NVLink equipped
INFO:     reading GPU P2P access cache from /home/owen/.config/aphrodite/gpu_p2p_access_cache_for_0,1.json
(RayWorkerAphrodite pid=28850) INFO:     NVLink detection failed with message "Not Supported". This is normal if your machine has no NVLink equipped
(RayWorkerAphrodite pid=28850) INFO:     reading GPU P2P access cache from /home/owen/.config/aphrodite/gpu_p2p_access_cache_for_0,1.json
[rank0]: Traceback (most recent call last):
[rank0]:   File "<frozen runpy>", line 198, in _run_module_as_main
[rank0]:   File "<frozen runpy>", line 88, in _run_code
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/endpoints/openai/api_server.py", line 562, in <module>
[rank0]:     run_server(args)
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/endpoints/openai/api_server.py", line 519, in run_server
[rank0]:     engine = AsyncAphrodite.from_engine_args(engine_args)
[rank0]:              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/engine/async_aphrodite.py", line 358, in from_engine_args
[rank0]:     engine = cls(engine_config.parallel_config.worker_use_ray,
[rank0]:              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/engine/async_aphrodite.py", line 323, in __init__
[rank0]:     self.engine = self._init_engine(*args, **kwargs)
[rank0]:                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/engine/async_aphrodite.py", line 429, in _init_engine
[rank0]:     return engine_class(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/engine/aphrodite_engine.py", line 131, in __init__
[rank0]:     self.model_executor = executor_class(
[rank0]:                           ^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/executor/executor_base.py", line 39, in __init__
[rank0]:     self._init_executor()
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/executor/ray_gpu_executor.py", line 45, in _init_executor
[rank0]:     self._init_workers_ray(placement_group)
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/executor/ray_gpu_executor.py", line 193, in _init_workers_ray
[rank0]:     self._run_workers(
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/executor/ray_gpu_executor.py", line 309, in _run_workers
[rank0]:     driver_worker_output = getattr(self.driver_worker,
[rank0]:                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/task_handler/worker.py", line 125, in load_model
[rank0]:     self.model_runner.load_model()
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/task_handler/model_runner.py", line 179, in load_model
[rank0]:     self.model = get_model(
[rank0]:                  ^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/loader.py", line 103, in get_model
[rank0]:     model.load_weights(model_config.model, model_config.download_dir,
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/models/mixtral.py", line 515, in load_weights
[rank0]:     for name, loaded_weight in hf_model_weights_iterator(
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/hf_downloader.py", line 318, in hf_model_weights_iterator
[rank0]:     for name, param in convert_gguf_to_state_dict(model_name_or_path,
[rank0]:                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/hf_downloader.py", line 246, in convert_gguf_to_state_dict
[rank0]:     raise RuntimeError(f"Unknown model_type: {model_type}")
[rank0]: RuntimeError: Unknown model_type: mixtral
(RayWorkerAphrodite pid=28850) ERROR:    Error executing method load_model. This might cause deadlock in distributed execution.
[W CudaIPCTypes.cpp:16] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
sgsdxzy commented 3 months ago

Anything other than llama needs to be converted first: https://github.com/PygmalionAI/aphrodite-engine/wiki/8.-Quantization#pre-convert-to-pytorch-state_dict-recommanded
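
For reference, the pre-conversion step uses the examples/gguf_to_torch.py script from the repo; a rough invocation (with placeholder paths) looks like this:

python examples/gguf_to_torch.py --input /path/to/model.gguf --unquantized-path /path/to/original-hf-model --output /path/to/converted-output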

Nero10578 commented 3 months ago

Anything other than llama needs to be converted first: https://github.com/PygmalionAI/aphrodite-engine/wiki/8.-Quantization#pre-convert-to-pytorch-state_dict-recommanded

Oh, I read the quantization section in the wiki and thought it said all GGUF models were supported by default now, so I didn't think it needed to be pre-converted.

EDIT: Tried it after converting and got the same error. Is Mixtral just not supported for GGUF?

Traceback (most recent call last):
  File "/home/owen/aphrodite-engine/examples/gguf_to_torch.py", line 56, in <module>
    convert_save_model(args.input, args.unquantized_path, args.output,
  File "/home/owen/aphrodite-engine/examples/gguf_to_torch.py", line 18, in convert_save_model
    state_dict = convert_gguf_to_state_dict(checkpoint, config)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/owen/miniconda3/envs/aphro/lib/python3.11/site-packages/aphrodite/modeling/hf_downloader.py", line 246, in convert_gguf_to_state_dict
    raise RuntimeError(f"Unknown model_type: {model_type}")
RuntimeError: Unknown model_type: mixtral
sgsdxzy commented 3 months ago

We use the official mapping from llama.cpp to handle model arch: https://github.com/ggerganov/llama.cpp/blob/2b737caae100cf0ac963206984332e422058f2b9/gguf-py/gguf/constants.py#L204

Do you know what model arch is used in llama.cpp for mixtral?
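
One way to check is to read the general.architecture key from the GGUF metadata directly; a minimal sketch, assuming the gguf-py package (pip install gguf) and its GGUFReader API:

from gguf import GGUFReader

# Read the architecture string the file declares (the key llama.cpp uses to pick the model arch).
reader = GGUFReader("/home/owen/models/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf")
arch_field = reader.fields["general.architecture"]
# For string-typed fields, the last part holds the raw UTF-8 bytes.
print(bytes(arch_field.parts[-1]).decode("utf-8"))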

sgsdxzy commented 3 months ago

BTW, you can always use exl2/gptq/awq; they are better supported and have better performance than gguf.
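
For example, a GPTQ quant of Mixtral could be served with the same kind of invocation used elsewhere in this thread (the model path is a placeholder; the quantization format is normally picked up from the checkpoint's config):

python -m aphrodite.endpoints.openai.api_server --model /path/to/Mixtral-8x7B-Instruct-v0.1-GPTQ --dtype=float16 --max-model-len 4096 --tensor-parallel 2 --enforce-eager true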

Nero10578 commented 3 months ago

We use the official mapping from llama.cpp to handle model arch: https://github.com/ggerganov/llama.cpp/blob/2b737caae100cf0ac963206984332e422058f2b9/gguf-py/gguf/constants.py#L204

Do you know what model arch is used in llama.cpp for mixtral?

I'm not sure, will have to look into that.

BTW, you can always use exl2/gptq/awq; they are better supported and have better performance than gguf.

GGUF is really the only option on Pascal cards, because the other kernels use FP16 compute, and Pascal cards have very low FP16 performance while their FP32 performance is decent.

Nero10578 commented 3 months ago

We use the official mapping from llama.cpp to handle model arch: https://github.com/ggerganov/llama.cpp/blob/2b737caae100cf0ac963206984332e422058f2b9/gguf-py/gguf/constants.py#L204

Do you know what model arch is used in llama.cpp for mixtral?

It seems to be MODEL_ARCH.LLAMA, according to the changes in this PR: https://github.com/ggerganov/llama.cpp/pull/4406/files
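
If so, the fix on the aphrodite side presumably just needs to map the "mixtral" model_type onto the same llama tensor layout. A purely illustrative sketch (not the actual convert_gguf_to_state_dict code) of that idea, using llama.cpp's gguf-py constants:

from gguf.constants import MODEL_ARCH

# Hypothetical model_type -> GGUF arch lookup; "mixtral" reuses the llama arch
# per the llama.cpp PR linked above.
MODEL_TYPE_TO_GGUF_ARCH = {
    "llama": MODEL_ARCH.LLAMA,
    "mixtral": MODEL_ARCH.LLAMA,
}

def gguf_arch_for(model_type: str) -> MODEL_ARCH:
    try:
        return MODEL_TYPE_TO_GGUF_ARCH[model_type]
    except KeyError:
        # Unmapped types would keep raising the same error seen above.
        raise RuntimeError(f"Unknown model_type: {model_type}")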

sgsdxzy commented 3 months ago

https://github.com/PygmalionAI/aphrodite-engine/tree/fix/mixtral-gguf should have fixed GGUF for Mixtral, and you can now load the models directly without pre-conversion. Please test whether the branch works for you.

Nero10578 commented 2 months ago

https://github.com/PygmalionAI/aphrodite-engine/tree/fix/mixtral-gguf should have fixed GGUF for Mixtral, and you can now load the models directly without pre-conversion. Please test whether the branch works for you.

One minor change in hardware first, though it still works fine for everything else: I am now running on 4x GTX Titan X Pascal 12GB cards, which should give the same 48GB total as before.

With this branch, I still seem to get the same error?

python -m aphrodite.endpoints.openai.api_server --model /home/owen/models/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf --dtype=float16 --gpu-memory-utilization 0.9 --max-model-len 4096 --port 8000 --kv-cache-dtype fp8 --served-model-name Mixtral-8x7B-Instruct --max-num-seqs 4 --disable-log-requests --max-log-len 0 --enforce-eager true --tensor-parallel 4 --tokenizer /home/owen/models/Mixtral-GGUF
INFO:     Extracting config from GGUF...
WARNING:  gguf quantization is not fully optimized yet. The speed can be slower than non-quantized models.
WARNING:  Not found nvcc in /usr/local/cuda. Skip cuda version check!
INFO:     Using fp8 data type to store kv cache. It reduces the GPU memory footprint and boosts the performance. But it may 
cause slight accuracy drop without scaling factors. FP8_E5M2 (without scaling) is only supported on cuda version greater than 
11.8. On ROCm (AMD GPU), FP8_E4M3 is instead supported for common inference criteria.
2024-05-29 08:54:40,349 INFO worker.py:1749 -- Started a local Ray instance.
INFO:     Initializing the Aphrodite Engine (v0.5.3) with the following config:
INFO:     Model = '/home/owen/models/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf'
INFO:     Speculative Config = None
INFO:     DataType = torch.float16
INFO:     Model Load Format = auto
INFO:     Number of GPUs = 4
INFO:     Disable Custom All-Reduce = False
INFO:     Quantization Format = gguf
INFO:     Context Length = 4096
INFO:     Enforce Eager Mode = True
INFO:     KV Cache Data Type = fp8
INFO:     KV Cache Params Path = None
INFO:     Device = cuda
INFO:     Guided Decoding Backend = DecodingConfig(guided_decoding_backend='outlines')
INFO:     Cannot use FlashAttention backend for Volta and Turing GPUs.
INFO:     Using XFormers backend.
(RayWorkerAphrodite pid=267204) INFO:     Cannot use FlashAttention backend for Volta and Turing GPUs.
(RayWorkerAphrodite pid=267204) INFO:     Using XFormers backend.
INFO:     Aphrodite is using nccl==2.20.5
(RayWorkerAphrodite pid=267153) INFO:     Aphrodite is using nccl==2.20.5
INFO:     NVLink detection failed with message "Not Supported". This is normal if your machine has no NVLink equipped
WARNING:  Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, 
specify disable_custom_all_reduce=True explicitly.
(RayWorkerAphrodite pid=267153) INFO:     NVLink detection failed with message "Not Supported". This is normal if your machine has no NVLink equipped
(RayWorkerAphrodite pid=267153) WARNING:  Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, 
(RayWorkerAphrodite pid=267153) specify disable_custom_all_reduce=True explicitly.
(RayWorkerAphrodite pid=267153) ERROR:    Error executing method load_model. This might cause deadlock in distributed execution.
[rank0]: Traceback (most recent call last):
[rank0]:   File "<frozen runpy>", line 198, in _run_module_as_main
[rank0]:   File "<frozen runpy>", line 88, in _run_code
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/endpoints/openai/api_server.py", line 562, in <module>
[rank0]:     run_server(args)
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/endpoints/openai/api_server.py", line 519, in run_server
[rank0]:     engine = AsyncAphrodite.from_engine_args(engine_args)
[rank0]:              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/engine/async_aphrodite.py", line 358, in from_engine_args
[rank0]:     engine = cls(engine_config.parallel_config.worker_use_ray,
[rank0]:              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/engine/async_aphrodite.py", line 323, in __init__
[rank0]:     self.engine = self._init_engine(*args, **kwargs)
[rank0]:                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/engine/async_aphrodite.py", line 429, in _init_engine
[rank0]:     return engine_class(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/engine/aphrodite_engine.py", line 131, in __init__
[rank0]:     self.model_executor = executor_class(
[rank0]:                           ^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/executor/executor_base.py", line 39, in __init__
[rank0]:     self._init_executor()
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/executor/ray_gpu_executor.py", line 45, in _init_executor
[rank0]:     self._init_workers_ray(placement_group)
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/executor/ray_gpu_executor.py", line 193, in _init_workers_ray
[rank0]:     self._run_workers(
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/executor/ray_gpu_executor.py", line 309, in _run_workers
[rank0]:     driver_worker_output = getattr(self.driver_worker,
[rank0]:                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/task_handler/worker.py", line 125, in load_model
[rank0]:     self.model_runner.load_model()
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/task_handler/model_runner.py", line 179, in load_model
[rank0]:     self.model = get_model(
[rank0]:                  ^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/loader.py", line 103, in get_model
[rank0]:     model.load_weights(model_config.model, model_config.download_dir,
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/models/mixtral.py", line 515, in load_weights
[rank0]:     for name, loaded_weight in hf_model_weights_iterator(
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/hf_downloader.py", line 318, in hf_model_weights_iterator
[rank0]:     for name, param in convert_gguf_to_state_dict(model_name_or_path,
[rank0]:                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/hf_downloader.py", line 246, in convert_gguf_to_state_dict
[rank0]:     raise RuntimeError(f"Unknown model_type: {model_type}")
[rank0]: RuntimeError: Unknown model_type: mixtral
(RayWorkerAphrodite pid=267275) INFO:     Cannot use FlashAttention backend for Volta and Turing GPUs. [repeated 2x across cluster] (Ray deduplicates logs by default. Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/user-guides/configure-logging.html#log-deduplication for more options.)
(RayWorkerAphrodite pid=267275) INFO:     Using XFormers backend. [repeated 2x across cluster]
(RayWorkerAphrodite pid=267275) INFO:     Aphrodite is using nccl==2.20.5 [repeated 2x across cluster]
(RayWorkerAphrodite pid=267275) INFO:     NVLink detection failed with message "Not Supported". This is normal if your machine has no NVLink equipped [repeated 2x across cluster]
(RayWorkerAphrodite pid=267275) WARNING:  Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning,  [repeated 2x across cluster]
(RayWorkerAphrodite pid=267275) specify disable_custom_all_reduce=True explicitly. [repeated 2x across cluster]
(RayWorkerAphrodite pid=267204) ERROR:    Error executing method load_model. This might cause deadlock in distributed execution. [repeated 2x across cluster]

Then trying to convert to pytorch using gguf_to_torch.py gives me this:

python gguf_to_torch.py --input /home/owen/models/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf --output /home/owen/models/Mixtral-8x7b_Q4KM --unquantized-path /home/owen/models/Mixtral-GGUF
Traceback (most recent call last):
  File "/home/owen/aphrodite-engine/examples/gguf_to_torch.py", line 56, in <module>
    convert_save_model(args.input, args.unquantized_path, args.output,
  File "/home/owen/aphrodite-engine/examples/gguf_to_torch.py", line 18, in convert_save_model
    state_dict = convert_gguf_to_state_dict(checkpoint, config)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/owen/aphrodite-engine/aphrodite/modeling/hf_downloader.py", line 246, in convert_gguf_to_state_dict
    raise RuntimeError(f"Unknown model_type: {model_type}")
RuntimeError: Unknown model_type: mixtral
sgsdxzy commented 2 months ago

Have you checked out the branch? The line numbers indicate you are still using the old version. You need to do git checkout fix/mixtral-gguf in the repo first. And if you did not install aphrodite with update-runtime.sh or pip install -e, you need to reinstall it.
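
In other words, something like this, inside the aphrodite-engine checkout:

cd ~/aphrodite-engine
git fetch origin
git checkout fix/mixtral-gguf
pip install -e .    # or re-run ./update-runtime.sh, depending on how aphrodite was installed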

Nero10578 commented 2 months ago

Have you checked out the branch? The line numbers indicate you are still using the old version.

Whoops, sorry, I forgot to switch to the new test conda env that I installed the branch in. Give me a sec.

Nero10578 commented 2 months ago

Have you checked out the branch? The line numbers indicate you are still using the old version. You need to do git checkout fix/mixtral-gguf in the repo first. And if you did not install aphrodite with update-runtime.sh or pip install -e, you need to reinstall it.

OK, so now converting works just fine, but running the GGUF directly still produces this error:

python -m aphrodite.endpoints.openai.api_server --model /home/owen/models/mixtral-8x7b-instruct-v0.1.Q3_K_M.gguf --dtype=float16 --gpu-memory-utilization 0.8 --max-model-len 1024 --port 8000 --kv-cache-dtype fp8 --served-model-name Mixtral-8x7B-Instruct --max-num-seqs 4 --disable-log-requests --max-log-len 0 --enforce-eager true --tensor-parallel 4
INFO:     Extracting config from GGUF...
WARNING:  gguf quantization is not fully optimized yet. The speed can be slower than non-quantized models.
WARNING:  Not found nvcc in /usr/local/cuda. Skip cuda version check!
INFO:     Using fp8 data type to store kv cache. It reduces the GPU memory footprint and boosts the performance. But it may 
cause slight accuracy drop without scaling factors. FP8_E5M2 (without scaling) is only supported on cuda version greater than 
11.8. On ROCm (AMD GPU), FP8_E4M3 is instead supported for common inference criteria.
2024-05-29 09:25:23,935 INFO worker.py:1749 -- Started a local Ray instance.
INFO:     Initializing the Aphrodite Engine (v0.5.3) with the following config:
INFO:     Model = '/home/owen/models/mixtral-8x7b-instruct-v0.1.Q3_K_M.gguf'
INFO:     Speculative Config = None
INFO:     DataType = torch.float16
INFO:     Model Load Format = auto
INFO:     Number of GPUs = 4
INFO:     Disable Custom All-Reduce = False
INFO:     Quantization Format = gguf
INFO:     Context Length = 1024
INFO:     Enforce Eager Mode = True
INFO:     KV Cache Data Type = fp8
INFO:     KV Cache Params Path = None
INFO:     Device = cuda
INFO:     Guided Decoding Backend = DecodingConfig(guided_decoding_backend='outlines')
INFO:     Converting tokenizer from GGUF...
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
INFO:     Cannot use FlashAttention backend for Volta and Turing GPUs.
INFO:     Using XFormers backend.
(RayWorkerAphrodite pid=283491) INFO:     Cannot use FlashAttention backend for Volta and Turing GPUs.
(RayWorkerAphrodite pid=283491) INFO:     Using XFormers backend.
INFO:     Aphrodite is using nccl==2.20.5
(RayWorkerAphrodite pid=283371) INFO:     Aphrodite is using nccl==2.20.5
INFO:     NVLink detection failed with message "Not Supported". This is normal if your machine has no NVLink equipped
WARNING:  Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, 
specify disable_custom_all_reduce=True explicitly.
(RayWorkerAphrodite pid=283371) INFO:     NVLink detection failed with message "Not Supported". This is normal if your machine has no NVLink equipped
(RayWorkerAphrodite pid=283371) WARNING:  Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, 
(RayWorkerAphrodite pid=283371) specify disable_custom_all_reduce=True explicitly.
Converting GGUF tensors to PyTorch... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 995/995 0:00:00
INFO:     Model weights loaded. Memory usage: 4.81 GiB x 4 = 19.25 GiB
(RayWorkerAphrodite pid=283371) Converting GGUF tensors to PyTorch... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 995/995 0:00:00
(RayWorkerAphrodite pid=283442) INFO:     Cannot use FlashAttention backend for Volta and Turing GPUs. [repeated 2x across cluster] (Ray deduplicates logs by default. Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/user-guides/configure-logging.html#log-deduplication for more options.)
(RayWorkerAphrodite pid=283442) INFO:     Using XFormers backend. [repeated 2x across cluster]
(RayWorkerAphrodite pid=283491) INFO:     Aphrodite is using nccl==2.20.5 [repeated 2x across cluster]
(RayWorkerAphrodite pid=283491) INFO:     NVLink detection failed with message "Not Supported". This is normal if your machine has no NVLink equipped [repeated 2x across cluster]
(RayWorkerAphrodite pid=283491) WARNING:  Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning,  [repeated 2x across cluster]
(RayWorkerAphrodite pid=283491) specify disable_custom_all_reduce=True explicitly. [repeated 2x across cluster]
(RayWorkerAphrodite pid=283371) INFO:     Model weights loaded. Memory usage: 4.81 GiB x 4 = 19.25 GiB
*** SIGSEGV received at time=1716999949 on cpu 12 ***
    @     0x750b10a42520  556627264  (unknown)
    @ ... and at least 4 more frames
[2024-05-29 09:25:49,702 E 282275 282275] logging.cc:365: *** SIGSEGV received at time=1716999949 on cpu 12 ***
[2024-05-29 09:25:49,702 E 282275 282275] logging.cc:365:     @     0x750b10a42520  556627264  (unknown)
[2024-05-29 09:25:49,702 E 282275 282275] logging.cc:365:     @ ... and at least 4 more frames
Fatal Python error: Segmentation fault

Stack (most recent call first):
  File "/home/owen/aphrodite-engine/aphrodite/quantization/gguf.py", line 141 in apply_weights
  File "/home/owen/aphrodite-engine/aphrodite/modeling/layers/linear.py", line 239 in forward
  File "/home/owen/miniconda3/envs/aphro-test/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541 in _call_impl  File "/home/owen/miniconda3/envs/aphro-test/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532 in _wrapped_call_impl
  File "/home/owen/aphrodite-engine/aphrodite/modeling/models/mixtral.py", line 169 in forward
  File "/home/owen/miniconda3/envs/aphro-test/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541 in _call_impl  File "/home/owen/miniconda3/envs/aphro-test/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532 in _wrapped_call_impl
  File "/home/owen/aphrodite-engine/aphrodite/modeling/models/mixtral.py", line 368 in forward
  File "/home/owen/miniconda3/envs/aphro-test/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541 in _call_impl  File "/home/owen/miniconda3/envs/aphro-test/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532 in _wrapped_call_impl
  File "/home/owen/aphrodite-engine/aphrodite/modeling/models/mixtral.py", line 410 in forward
  File "/home/owen/miniconda3/envs/aphro-test/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541 in _call_impl  File "/home/owen/miniconda3/envs/aphro-test/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532 in _wrapped_call_impl
  File "/home/owen/aphrodite-engine/aphrodite/modeling/models/mixtral.py", line 475 in forward
  File "/home/owen/miniconda3/envs/aphro-test/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541 in _call_impl  File "/home/owen/miniconda3/envs/aphro-test/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532 in _wrapped_call_impl
  File "/home/owen/aphrodite-engine/aphrodite/task_handler/model_runner.py", line 868 in execute_model
  File "/home/owen/miniconda3/envs/aphro-test/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115 in decorate_context
  File "/home/owen/aphrodite-engine/aphrodite/task_handler/model_runner.py", line 948 in profile_run
  File "/home/owen/miniconda3/envs/aphro-test/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115 in decorate_context
  File "/home/owen/aphrodite-engine/aphrodite/task_handler/worker.py", line 144 in determine_num_available_blocks
  File "/home/owen/miniconda3/envs/aphro-test/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115 in decorate_context
  File "/home/owen/aphrodite-engine/aphrodite/executor/ray_gpu_executor.py", line 309 in _run_workers
  File "/home/owen/aphrodite-engine/aphrodite/executor/ray_gpu_executor.py", line 208 in determine_num_available_blocks
  File "/home/owen/aphrodite-engine/aphrodite/engine/aphrodite_engine.py", line 182 in _initialize_kv_caches
  File "/home/owen/aphrodite-engine/aphrodite/engine/aphrodite_engine.py", line 142 in __init__
  File "/home/owen/aphrodite-engine/aphrodite/engine/async_aphrodite.py", line 429 in _init_engine
  File "/home/owen/aphrodite-engine/aphrodite/engine/async_aphrodite.py", line 323 in __init__
  File "/home/owen/aphrodite-engine/aphrodite/engine/async_aphrodite.py", line 358 in from_engine_args
  File "/home/owen/aphrodite-engine/aphrodite/endpoints/openai/api_server.py", line 519 in run_server
  File "/home/owen/aphrodite-engine/aphrodite/endpoints/openai/api_server.py", line 562 in <module>
  File "<frozen runpy>", line 88 in _run_code
  File "<frozen runpy>", line 198 in _run_module_as_main

Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, torch._C, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, charset_normalizer.md, requests.packages.charset_normalizer.md, requests.packages.chardet.md, yaml._yaml, sentencepiece._sentencepiece, psutil._psutil_linux, psutil._psutil_posix, scipy._lib._ccallback_c, msgpack._cmsgpack, google._upb._message, setproctitle, uvloop.loop, ray._raylet, ujson, regex._regex, numba.core.typeconv._typeconv, numba._helperlib, numba._dynfunc, numba._dispatcher, numba.core.runtime._nrt_python, numba.np.ufunc._internal, numba.experimental.jitclass._box, markupsafe._speedups, PIL._imaging, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.linalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg.cython_blas, scipy.linalg._matfuncs_expm, scipy.linalg._decomp_update, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.linalg._propack._spropack, scipy.sparse.linalg._propack._dpropack, scipy.sparse.linalg._propack._cpropack, scipy.sparse.linalg._propack._zpropack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flow, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, scipy.optimize._minpack2, scipy.optimize._group_columns, scipy._lib.messagestream, scipy.optimize._trlib._trlib, scipy.optimize._lbfgsb, _moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._cobyla, scipy.optimize._slsqp, scipy.optimize._minpack, scipy.optimize._lsq.givens_elimination, scipy.optimize._zeros, scipy.optimize._highs.cython.src._highs_wrapper, scipy.optimize._highs._highs_wrapper, scipy.optimize._highs.cython.src._highs_constants, scipy.optimize._highs._highs_constants, scipy.linalg._interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.spatial._ckdtree, scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._distance_wrap, scipy.spatial._hausdorff, scipy.special._ufuncs_cxx, scipy.special._cdflib, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.special._ellip_harm_2, scipy.spatial.transform._rotation, scipy.optimize._direct (total: 102)
Segmentation fault (core dumped)

It also seems to keep OOM-ing even though I should have more than enough VRAM. It works fine for other models of similar size, so I don't understand why this one is OOM-ing. I can run Yi-1.5-34B-Chat-16K GGUF Q6KM just fine, for example.

python -m aphrodite.endpoints.openai.api_server --model /home/owen/models/Mixtral-8x7b_Q3KM --dtype=float16 --gpu-memory-utilization 0.8 --max-model-len 1024 --port 8000 --kv-cache-dtype fp8 --served-model-name Mixtral-8x7B-Instruct --max-num-seqs 4 --disable-log-requests --max-log-len 0 --enforce-eager true --tensor-parallel 4
WARNING:  Not found nvcc in /usr/local/cuda. Skip cuda version check!
INFO:     Using fp8 data type to store kv cache. It reduces the GPU memory footprint and boosts the performance. But it may 
cause slight accuracy drop without scaling factors. FP8_E5M2 (without scaling) is only supported on cuda version greater than 
11.8. On ROCm (AMD GPU), FP8_E4M3 is instead supported for common inference criteria.
2024-05-29 09:27:02,860 INFO worker.py:1749 -- Started a local Ray instance.
INFO:     Initializing the Aphrodite Engine (v0.5.3) with the following config:
INFO:     Model = '/home/owen/models/Mixtral-8x7b_Q3KM'
INFO:     Speculative Config = None
INFO:     DataType = torch.float16
INFO:     Model Load Format = auto
INFO:     Number of GPUs = 4
INFO:     Disable Custom All-Reduce = False
INFO:     Quantization Format = None
INFO:     Context Length = 1024
INFO:     Enforce Eager Mode = True
INFO:     KV Cache Data Type = fp8
INFO:     KV Cache Params Path = None
INFO:     Device = cuda
INFO:     Guided Decoding Backend = DecodingConfig(guided_decoding_backend='outlines')
INFO:     Cannot use FlashAttention backend for Volta and Turing GPUs.
INFO:     Using XFormers backend.
(RayWorkerAphrodite pid=284887) INFO:     Cannot use FlashAttention backend for Volta and Turing GPUs.
(RayWorkerAphrodite pid=284887) INFO:     Using XFormers backend.
INFO:     Aphrodite is using nccl==2.20.5
(RayWorkerAphrodite pid=284887) INFO:     Aphrodite is using nccl==2.20.5
INFO:     NVLink detection failed with message "Not Supported". This is normal if your machine has no NVLink equipped
WARNING:  Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, 
specify disable_custom_all_reduce=True explicitly.
(RayWorkerAphrodite pid=284887) INFO:     NVLink detection failed with message "Not Supported". This is normal if your machine has no NVLink equipped
(RayWorkerAphrodite pid=284887) WARNING:  Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, 
(RayWorkerAphrodite pid=284887) specify disable_custom_all_reduce=True explicitly.
[rank0]: Traceback (most recent call last):
[rank0]:   File "<frozen runpy>", line 198, in _run_module_as_main
[rank0]:   File "<frozen runpy>", line 88, in _run_code
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/endpoints/openai/api_server.py", line 562, in <module>
[rank0]:     run_server(args)
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/endpoints/openai/api_server.py", line 519, in run_server
[rank0]:     engine = AsyncAphrodite.from_engine_args(engine_args)
[rank0]:              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/engine/async_aphrodite.py", line 358, in from_engine_args
[rank0]:     engine = cls(engine_config.parallel_config.worker_use_ray,
[rank0]:              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/engine/async_aphrodite.py", line 323, in __init__
[rank0]:     self.engine = self._init_engine(*args, **kwargs)
[rank0]:                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/engine/async_aphrodite.py", line 429, in _init_engine
[rank0]:     return engine_class(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/engine/aphrodite_engine.py", line 131, in __init__
[rank0]:     self.model_executor = executor_class(
[rank0]:                           ^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/executor/executor_base.py", line 39, in __init__
[rank0]:     self._init_executor()
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/executor/ray_gpu_executor.py", line 45, in _init_executor
[rank0]:     self._init_workers_ray(placement_group)
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/executor/ray_gpu_executor.py", line 193, in _init_workers_ray
[rank0]:     self._run_workers(
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/executor/ray_gpu_executor.py", line 309, in _run_workers
[rank0]:     driver_worker_output = getattr(self.driver_worker,
[rank0]:                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/task_handler/worker.py", line 125, in load_model
[rank0]:     self.model_runner.load_model()
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/task_handler/model_runner.py", line 179, in load_model
[rank0]:     self.model = get_model(
[rank0]:                  ^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/loader.py", line 83, in get_model
[rank0]:     model = model_class(model_config.hf_config, linear_method,
[rank0]:             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/models/mixtral.py", line 448, in __init__
[rank0]:     self.model = MixtralModel(config,
[rank0]:                  ^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/models/mixtral.py", line 393, in __init__
[rank0]:     self.layers = nn.ModuleList([
[rank0]:                                 ^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/models/mixtral.py", line 394, in <listcomp>
[rank0]:     MixtralDecoderLayer(config, linear_method=linear_method)
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/models/mixtral.py", line 332, in __init__
[rank0]:     self.block_sparse_moe = MixtralMoE(
[rank0]:                             ^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/models/mixtral.py", line 154, in __init__
[rank0]:     self.ws = MergedColumnParallelLinear(hidden_size,
[rank0]:               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/layers/linear.py", line 395, in __init__
[rank0]:     super().__init__(input_size, sum(output_sizes), bias, gather_output,
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/layers/linear.py", line 298, in __init__
[rank0]:     self.linear_weights = self.linear_method.create_moe_weights(
[rank0]:                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/layers/linear.py", line 127, in create_moe_weights
[rank0]:     new_param = Parameter(param.unsqueeze(0).repeat(*repeat_size),
[rank0]:                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/miniconda3/envs/aphro-test/lib/python3.11/site-packages/torch/utils/_device.py", line 78, in __torch_function__
[rank0]:     return func(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^
[rank0]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 448.00 MiB. GPU 
(RayWorkerAphrodite pid=284887) ERROR:    Error executing method load_model. This might cause deadlock in distributed execution.
(RayWorkerAphrodite pid=285009) INFO:     Cannot use FlashAttention backend for Volta and Turing GPUs. [repeated 2x across cluster] (Ray deduplicates logs by default. Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/user-guides/configure-logging.html#log-deduplication for more options.)
(RayWorkerAphrodite pid=285009) INFO:     Using XFormers backend. [repeated 2x across cluster]
(RayWorkerAphrodite pid=285009) INFO:     Aphrodite is using nccl==2.20.5 [repeated 2x across cluster]
(RayWorkerAphrodite pid=285009) INFO:     NVLink detection failed with message "Not Supported". This is normal if your machine has no NVLink equipped [repeated 2x across cluster]
(RayWorkerAphrodite pid=285009) WARNING:  Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning,  [repeated 2x across cluster]
(RayWorkerAphrodite pid=285009) specify disable_custom_all_reduce=True explicitly. [repeated 2x across cluster]
(RayWorkerAphrodite pid=285009) ERROR:    Error executing method load_model. This might cause deadlock in distributed execution. [repeated 2x across cluster]
sgsdxzy commented 2 months ago

I didn't notice abnormal vram usage when running https://huggingface.co/venketh/Mixtral-8x7B-v0.1-GGUF-imatrix/blob/main/mixtral-8x7b-v0.1.IQ2_XXS.gguf @AlpinDale any idea?

Nero10578 commented 2 months ago

I didn't notice abnormal vram usage when running https://huggingface.co/venketh/Mixtral-8x7B-v0.1-GGUF-imatrix/blob/main/mixtral-8x7b-v0.1.IQ2_XXS.gguf @AlpinDale any idea?

Does this have something to do with running GGUF with only Xformers due to my use of Pascal GPUs?

But then again I can run dense models with similar total VRAM use just fine.

So is this because aphrodite loads MoE models differently to dense models?