emiyalee1005 closed this issue 1 month ago
You can load a specific file by adding the full path:
const llama3 = await client.llm.load('lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF/Meta-Llama-3-8B-Instruct-Q8_0.gguf');
Thanks, that helps.
In the documentation I see the following:
// Matches any quantization
const llama3 = await client.llm.get({ path: "lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF" });
Does this mean all quantization versions are loaded into memory? Or is there a method to specify which version to actually load?
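Putting the two snippets from this thread side by side: the repository-only path is the form the docs describe as matching any quantization, while the earlier answer shows that including the full path down to the `.gguf` file selects one specific quantization. A minimal sketch, assuming the `@lmstudio/sdk` package and a running LM Studio server (not verified here):

```typescript
import { LMStudioClient } from "@lmstudio/sdk";

const client = new LMStudioClient();

// Repository path only — per the docs quoted above, this matches any quantization.
const anyQuant = await client.llm.get({
  path: "lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF",
});

// Full path down to the .gguf file — per the earlier answer in this thread,
// this loads exactly the Q8_0 file rather than some other quantization.
const q8 = await client.llm.load(
  "lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF/Meta-Llama-3-8B-Instruct-Q8_0.gguf"
);
```

Note this only contrasts the two call styles already shown in the thread; whether `get` with a repository-only path loads anything new into memory is exactly the open question above.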