Implementing Send for *mut c_void
Implementing Send for *mut c_void is a footgun, AFAICT. Well, you can see this commit. The main problem is that the value gets moved from one position to another while you are prompting llama. For me, this caused several redis errors (I'm using LLama in several fullstack apps). That's why, in my opinion, you should not request this feature.
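For context, the referenced commit amounts to an unsafe Send impl roughly like the sketch below. The field name and layout here are illustrative, not the crate's actual definition:

use std::ffi::c_void;

pub struct LLama {
    // Illustrative: a raw pointer into C-owned llama.cpp state.
    state: *mut c_void,
}

// This compiles, but it is a promise the code cannot keep: nothing
// actually makes the underlying C state safe to move across threads.
unsafe impl Send for LLama {}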
But you can solve your problem with this implementation:
use std::{sync::mpsc, thread};

use llama_cpp_rs::{
    options::{ModelOptions, PredictOptions},
    LLama,
};

pub fn start_llama_thread(
    llama_channel_rx: mpsc::Receiver<(String, mpsc::Sender<String>)>,
) {
    thread::spawn(move || {
        // Load the model once; the instance never leaves this thread,
        // so LLama does not need to be Send.
        let llama = LLama::new(
            "../models/zephyr-7b-beta.gguf".into(),
            &ModelOptions {
                context_size: 2048,
                ..Default::default()
            },
        )
        .unwrap();
        // Process prompts serially as they arrive on the channel.
        while let Ok((user_msg, response_tx)) = llama_channel_rx.recv() {
            let predict_options = PredictOptions {
                threads: 14,
                temperature: 0.7,
                penalty: 1.1,
                ..Default::default()
            };
            match llama.predict(
                format!("<|system|>\n</s>\n<|user|>\n{}</s>\n<|assistant|>", user_msg),
                predict_options,
            ) {
                Ok(result) => {
                    if let Err(e) = response_tx.send(result) {
                        log::warn!("{}", e)
                    }
                }
                Err(e) => log::warn!("{}", e),
            }
        }
    });
}
and, on the caller side:
// One response channel per request.
let (response_tx, response_rx) = mpsc::channel::<String>();
if llama_channel_tx
    .send(("This is my own request to Zephyr! Hello! How are you?".into(), response_tx))
    .is_err()
{
    return Err("Cannot ask the model.".into());
}
log::info!("Sent a request to Zephyr-7b-β model.");
// Block until the llama thread sends the completion back.
match response_rx.recv() {
    Err(_) => Err("Cannot receive answer from model.".into()),
    Ok(result) => Ok(result),
}
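For completeness, a sketch of how the two snippets could be wired together at startup; the channel names follow the snippets above:

use std::sync::mpsc;

// Create the request channel once at boot; the receiver moves into the
// dedicated llama thread, the sender stays with the rest of the program.
let (llama_channel_tx, llama_channel_rx) =
    mpsc::channel::<(String, mpsc::Sender<String>)>();
start_llama_thread(llama_channel_rx);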
I think with the code you provided, llama_channel_rx.recv() should be llama_channel_rx.recv().await, which is what causes the issues with Send. The other thing I was looking into (haven't gotten it to work yet) is tokio::task::LocalSet, which lets you run !Send things. I'm still unsure if that's the right move, because the main thing I'm trying to avoid is the startup cost of LLama::new() - I just want to instantiate it once on boot and then call it throughout the run of my program with an mpsc channel.
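For reference, a minimal sketch of the LocalSet approach on a current-thread runtime, using Rc as a stand-in for a !Send handle; this illustrates the mechanism and is not tested with LLama:

use std::rc::Rc;
use tokio::task::LocalSet;

#[tokio::main(flavor = "current_thread")]
async fn main() {
    let local = LocalSet::new();
    local
        .run_until(async {
            // Rc is !Send, so it cannot cross tokio::spawn,
            // but spawn_local keeps it on this thread.
            let not_send = Rc::new(String::from("stand-in for a !Send handle"));
            tokio::task::spawn_local(async move {
                println!("{}", not_send);
            })
            .await
            .unwrap();
        })
        .await;
}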
Edit: I tried the commit with the unsafe impl of Send and it works! I'm starting to think that in my specific context I won't run into any issues with the unsafeness. Since every prompt to the llama is passed in via an mpsc channel, the single consumer guarantees that all prompts are processed serially, which is what we want. Am I right about that?
Edit 2: every couple of queries I get segfaults and other memory errors like this: Incorrect checksum for freed object 0x7ff00f505bf0: probably modified after being freed. Corrupt value: 0x6e61206e616d2061
Regarding "I think with the code you provided llama_channel_rx.recv() should be llama_channel_rx.recv().await": it should not, because you have to use std::sync::mpsc, not tokio::sync::mpsc. That's why the code uses std::thread::spawn, not tokio::task::spawn.
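If the calling side does live in async code, one way to bridge to the std channel without blocking the executor is tokio's spawn_blocking. A minimal sketch, assuming the channel types from the snippets above; ask_model is a hypothetical helper, not part of llama_cpp_rs:

use std::sync::mpsc;

// Hypothetical async wrapper around the worker-thread channel.
async fn ask_model(
    llama_channel_tx: mpsc::Sender<(String, mpsc::Sender<String>)>,
    prompt: String,
) -> Result<String, String> {
    let (response_tx, response_rx) = mpsc::channel();
    llama_channel_tx
        .send((prompt, response_tx))
        .map_err(|_| "Cannot ask the model.".to_string())?;
    // std recv() blocks, so move the wait off the async executor.
    tokio::task::spawn_blocking(move || response_rx.recv())
        .await
        .map_err(|_| "Blocking task panicked.".to_string())?
        .map_err(|_| "Cannot receive answer from model.".to_string())
}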
UPD: One more time: "LLama is not Send" means that LLama must run on a single thread only, with no async context switches - that is, with no async code at all. On a plain std thread, that is guaranteed to be safe by Rust's own guards.
Ahh, my bad. This is working for me now, thanks!
I'm in a context where I have to instantiate a LLama instance once and then call it across threads. Within the compiler errors I see this, which is probably of use:

Is there any way to make LLama thread-safe? Or maybe some way to accomplish more or less the same thing, where one model is called to generate text from multiple threads?
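One pattern that matches the worker-thread answer above: clone the request sender into each thread, so many producers feed the single model thread. A sketch, reusing start_llama_thread from earlier in this thread:

use std::{sync::mpsc, thread};

let (llama_channel_tx, llama_channel_rx) =
    mpsc::channel::<(String, mpsc::Sender<String>)>();
start_llama_thread(llama_channel_rx);

// Any number of threads can hold a clone of the sender; the single
// receiver inside the llama thread serializes all prompts.
for i in 0..4 {
    let tx = llama_channel_tx.clone();
    thread::spawn(move || {
        let (response_tx, response_rx) = mpsc::channel();
        tx.send((format!("prompt from thread {}", i), response_tx)).unwrap();
        let answer = response_rx.recv().unwrap();
        println!("thread {} got: {}", i, answer);
    });
}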