SJTU-IPADS / PowerInfer

High-speed Large Language Model Serving on PCs with Consumer-grade GPUs

possible to do one that can fit into 7GB vram? #141

Open sprappcom opened 9 months ago

sprappcom commented 9 months ago

The 7B model uses more than 12 GB of RAM. Could you provide one that is around 3B parameters, or a 7B with Q4_0 GGUF quantization, or something similar?

hodlen commented 9 months ago

To better assist you, could you please clarify the context? For example, what are your hardware specs, and which model do you want to use?

In general, PowerInfer is designed to automatically offload model weights to VRAM so that the GPU is utilized as much as possible. If you're looking to further restrict VRAM usage, you can use the --vram-budget parameter to specify your VRAM limit. You can refer to our inference README for some examples.
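For illustration, a minimal sketch of such an invocation, following the llama.cpp-style CLI that PowerInfer exposes; the model path, prompt, token counts, and the 7 GiB budget below are placeholders for this issue's 8 GB card, not recommended settings (please check the inference README for the exact options):

```bash
# Hypothetical example: cap PowerInfer's VRAM usage at roughly 7 GiB,
# leaving headroom for the display on an 8 GB laptop GPU.
# /PATH/TO/MODEL is a placeholder for a PowerInfer GGUF model file.
./build/bin/main \
  -m /PATH/TO/MODEL \
  -n 128 \
  -t 8 \
  -p "Once upon a time" \
  --vram-budget 7
```

The idea is that weights are offloaded to the GPU up to the given budget, and the remainder stays in system RAM.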

sprappcom commented 9 months ago

The "speed up" isn't obvious to me, and the generated quality is not ideal at this stage.

Maybe I'll wait for Mistral 7B. I hope to see this go mainstream.

P.S.: I'm testing on a laptop RTX 4060 with 7 GB of usable VRAM. It has 8 GB, but 1 GB seems to be reserved for the display.