I've added some new options to llama.cpp, but I can't pass them through to the provider. In fact, none of the providers seem to accept custom parameters.
```ts
import { llamacpp, generateText } from "modelfusion";

const text = await generateText({
  model: llamacpp
    .CompletionTextGenerator({
      maxGenerationTokens: 2048,
      my_custom_param: "can not pass", // silently dropped, never reaches llama.cpp
      model: "mistral-7b-openorca.Q8_0.gguf",
    })
    .withTextPrompt(), // use simple text prompt style
  prompt: "Write a short story about a robot learning to love.",
});
```
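As a workaround sketch (not a ModelFusion API), one could call the llama.cpp server's `/completion` endpoint directly and merge custom fields into the JSON body before sending. This assumes a llama.cpp server running on `http://127.0.0.1:8080`; `my_custom_param` is a hypothetical server-side option used only for illustration:

```typescript
// Possible workaround: bypass the provider abstraction and talk to the
// llama.cpp server's /completion endpoint directly, merging any custom
// fields into the request body yourself.
// Assumptions: llama.cpp server at http://127.0.0.1:8080;
// `my_custom_param` is a hypothetical option, not a real llama.cpp flag.

type CompletionRequest = {
  prompt: string;
  n_predict?: number;
  [custom: string]: unknown; // room for extra, non-standard fields
};

// Pure helper: merge standard options with arbitrary custom parameters.
function buildCompletionRequest(
  prompt: string,
  options: { n_predict?: number },
  custom: Record<string, unknown>
): CompletionRequest {
  return { prompt, ...options, ...custom };
}

// Send the merged body to the llama.cpp server (Node 18+ global fetch).
async function complete(body: CompletionRequest): Promise<string> {
  const res = await fetch("http://127.0.0.1:8080/completion", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  const json = (await res.json()) as { content: string };
  return json.content;
}

const body = buildCompletionRequest(
  "Write a short story about a robot learning to love.",
  { n_predict: 2048 },
  { my_custom_param: "now it reaches the server" } // hypothetical option
);
// complete(body).then(console.log); // requires a running llama.cpp server
```

The merge step is the interesting part: because the request body is plain JSON, anything spread into it is forwarded verbatim, which is exactly what the provider abstraction currently prevents.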