lmstudio-ai / lmstudio.js

LM Studio TypeScript SDK (pre-release public alpha)
https://lmstudio.ai/docs/lmstudio-sdk/quick-start
Apache License 2.0

How to switch/choose different quantization version from a model path #23

Closed emiyalee1005 closed 1 month ago

emiyalee1005 commented 1 month ago

From the documentation I see the following:

// Matches any quantization
const llama3 = await client.llm.get({ path: "lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF" });

Does this mean that all quantization versions get loaded into memory? Or is there a method to specify which version to actually load?

Trippnology commented 1 month ago

You can load a specific file by adding the full path:

const llama3 = await client.llm.load('lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF/Meta-Llama-3-8B-Instruct-Q8_0.gguf');
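For reference, here is a minimal end-to-end sketch of that approach. It assumes the pre-release `@lmstudio/sdk` package and a running LM Studio local server; the prompt and streaming loop just follow the quick-start pattern and are illustrative only:

```typescript
import { LMStudioClient } from "@lmstudio/sdk";

const client = new LMStudioClient();

// Passing the full path (model folder + .gguf filename) pins the exact
// quantization, so only that one file is loaded into memory.
const llama3 = await client.llm.load(
  "lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF/Meta-Llama-3-8B-Instruct-Q8_0.gguf",
);

// The returned handle can then be used for predictions as usual
// (illustrative prompt, streamed to stdout).
const prediction = llama3.complete("The meaning of life is");
for await (const fragment of prediction) {
  process.stdout.write(fragment);
}
```

As I understand it, the `client.llm.get({ path })` form quoted from the docs looks up a model that is already loaded and matches the folder path (hence "matches any quantization"), whereas `load` with the full file path chooses exactly which .gguf to load.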
emiyalee1005 commented 1 month ago

> You can load a specific file by adding the full path:
>
> const llama3 = await client.llm.load('lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF/Meta-Llama-3-8B-Instruct-Q8_0.gguf');

Thanks, that helps.