ggerganov / llama.cpp

LLM inference in C/C++
MIT License

Feature Request: multiple queues or multiple threads to load model files. #8796

Open rankaiyx opened 1 month ago

rankaiyx commented 1 month ago

Feature Description

In the current version of the program, the model file is loaded through a single queue by a single thread. This approach may not realize the full performance potential of some disk drives, such as certain NVMe SSDs.

Can we implement multiple queues or multiple threads to load model files?

Motivation

As model files grow larger and larger, making full use of the hardware's read bandwidth would save a lot of model loading time.

The following are test results on an NVMe SSD drive.

$ sudo sh -c "/usr/bin/echo 3 > /proc/sys/vm/drop_caches"
$ dd if=1.8T/gguf/Big-Tiger-Gemma-27B-v1-IQ4_XS.gguf of=/dev/null bs=1M count=5000
5242880000 bytes (5.2 GB, 4.9 GiB) copied, 3.39833 s, 1.5 GB/s

$ sudo sh -c "/usr/bin/echo 3 > /proc/sys/vm/drop_caches"
$ dd if=1.8T/gguf/Big-Tiger-Gemma-27B-v1-IQ4_XS.gguf of=/dev/null bs=1M count=5000 & dd if=1.8T/gguf/Big-Tiger-Gemma-27B-v1-IQ4_XS.gguf of=/dev/null bs=1M count=5000 skip=5000
5242880000 bytes (5.2 GB, 4.9 GiB) copied, 4.19029 s, 1.3 GB/s
5242880000 bytes (5.2 GB, 4.9 GiB) copied, 4.19301 s, 1.3 GB/s

total: 2.6 GB/s

$ sudo sh -c "/usr/bin/echo 3 > /proc/sys/vm/drop_caches"
$ dd if=1.8T/gguf/Big-Tiger-Gemma-27B-v1-IQ4_XS.gguf of=/dev/null bs=1M count=5000 & dd if=1.8T/gguf/Big-Tiger-Gemma-27B-v1-IQ4_XS.gguf of=/dev/null bs=1M count=5000 skip=5000 & dd if=1.8T/gguf/Big-Tiger-Gemma-27B-v1-IQ4_XS.gguf of=/dev/null bs=1M skip=10000
4328660512 bytes (4.3 GB, 4.0 GiB) copied, 4.30938 s, 1.0 GB/s
5242880000 bytes (5.2 GB, 4.9 GiB) copied, 5.04395 s, 1.0 GB/s
5242880000 bytes (5.2 GB, 4.9 GiB) copied, 5.04875 s, 1.0 GB/s

total: 3.0 GB/s

Possible Implementation

Use multiple queues or multiple threads to read the model file in parallel, for example by splitting the file into chunks and reading each chunk concurrently.

rankaiyx commented 1 month ago

I found a temporary solution.

On the NVMe device, create four equal partitions, then use mdadm to combine them into a single RAID 0 device.

With this method, I achieved the expected model loading speed (2.9 GB/s).
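For reference, the workaround looks roughly like this (device names, mount point, and filesystem are placeholders; adapt them to your system, and note that these commands destroy existing data on the partitions):

```shell
# Assumes four equal partitions already exist on the NVMe drive
# (created with fdisk/parted). Assemble them into one RAID 0 array:
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/nvme0n1p1 /dev/nvme0n1p2 /dev/nvme0n1p3 /dev/nvme0n1p4

# Put a filesystem on the array and mount it for the model files:
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/models
```

RAID 0 striping forces the single sequential read into interleaved requests across the four partitions, which is why it approximates the multi-stream throughput seen in the `dd` tests above.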