sprappcom opened 9 months ago
To better assist you, could you please clarify the context? For example, what are your hardware specs, and which model do you want to use?
In general, PowerInfer is designed to automatically offload model weights to VRAM so as to utilize the GPU as much as possible. If you're looking to further restrict VRAM usage, you might consider using the --vram-budget parameter to specify your VRAM limit. You can refer to our inference README for some examples.
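For instance, a minimal sketch of an invocation (assuming --vram-budget takes the limit in GB, as in the README examples; the model path, prompt, and token/thread counts below are placeholders):

```bash
# Cap PowerInfer's GPU offloading at roughly 4 GB of VRAM.
# Model path and prompt are illustrative placeholders.
./build/bin/main \
  -m ./models/llama-7b-relu.powerinfer.gguf \
  -n 128 \
  -t 8 \
  -p "Once upon a time" \
  --vram-budget 4
```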
The "speedup" isn't obvious to me, and the generation quality is not ideal at this stage.
Maybe I'll wait for Mistral 7B support. I hope to see this go mainstream.
P.S.: I'm testing on a laptop RTX 4060 with 7 GB of usable VRAM. It has 8 GB, but about 1 GB seems reserved for the display.
The 7B model uses more than 12 GB of RAM. Could you provide one at around 3B parameters, or a 7B quantized to Q4_0 GGUF or something similar?
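For reference, my rough math on why a Q4_0 build should fit (my own back-of-envelope estimate, not official numbers), plus the conversion I'd expect if this fork keeps an upstream-style llama.cpp quantize tool (an assumption on my part):

```bash
# Back-of-envelope weight memory (ignores KV cache, activations, and
# any extra predictor weights PowerInfer models carry):
#   7B params * 16 bits (FP16)                       ~= 14 GB
#   7B params * ~4.5 bits (Q4_0: 4-bit blocks + scales) ~= 4 GB
# Hypothetical conversion, assuming an upstream-style quantize binary;
# file names are placeholders:
./build/bin/quantize ./models/llama-7b.powerinfer.gguf \
                     ./models/llama-7b.q4_0.gguf Q4_0
```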