CloudLLM-ai / cloudllm

CloudLLM is a Rust library designed to seamlessly bridge applications with remote Language Learning Models (LLMs) across various platforms.
MIT License

Create Abstractions for LLM Clients to talk to one another #4

Open gubatron opened 3 months ago

gubatron commented 3 months ago

I believe it'd be very useful to create an abstraction for communication between two different CloudLLM clients.

Currently we have an LLMSession which is powered by a single client:

    // Instantiate the OpenAI client
    let client = OpenAIClient::new(&secret_key, "gpt-4o");

    // Set up the LLMSession
    let system_prompt = "You are an award-winning bitcoin/blockchain/crypto/tech/software journalist for DiarioBitcoin. You are Spanish/English bilingual and can write in Spanish at a professional journalist's level, and you are also a software engineer. You hold a doctorate in economics and cryptography. When you answer, you don't make any mention of your credentials unless specifically asked about them.".to_string();
    let mut session = LLMSession::new(client, system_prompt);
    ...
    let response = session.send_message(Role::User, user_input.to_string()).await.unwrap();
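    // tx is a channel sender created elsewhere in this example (not shown here)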
    tx.send(false).unwrap();

    // Print the assistant's response
    println!("\nAssistant:\n{}\n", response.content);
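For context, the only way to get two clients talking today is to run two independent sessions and relay messages between them by hand, roughly like the sketch below. The ClaudeClient constructor, the model names, and the prompts are assumptions for illustration; the OpenAIClient/LLMSession/send_message calls mirror the snippet above:

    // Two independent sessions, one per provider. ClaudeClient is a hypothetical
    // client type used here only for illustration.
    let mut journalist = LLMSession::new(
        OpenAIClient::new(&openai_key, "gpt-4o"),
        "You are a bitcoin/blockchain journalist.".to_string(),
    );
    let mut reviewer = LLMSession::new(
        ClaudeClient::new(&anthropic_key, "claude-3-opus"), // hypothetical client
        "You are a skeptical technical reviewer.".to_string(),
    );

    // Manual ping-pong: each assistant reply is fed to the other session
    // as a user message for a few rounds.
    let mut message = user_input.to_string();
    for _ in 0..3 {
        let draft = journalist.send_message(Role::User, message).await.unwrap();
        let critique = reviewer.send_message(Role::User, draft.content).await.unwrap();
        message = critique.content;
    }
    println!("\nFinal:\n{}\n", message);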

Perhaps an LLMSession should support multiple participants, each under a different role or rank.

You could have a three-way conversation between yourself and two different clients, or set them up to talk to one another to solve a task or discuss it amongst themselves. You could also host a panel where you, or one of the clients, acts as moderator and the participants speak in round-robin fashion, each hearing the previous panelists' opinions on the subject.

The goal would be to use different LLM models to put the equivalent of an expert panel or advisory board at your disposal.

Something along the lines of:

    session.add_participant(groq_llama_client, SessionParticipant::Moderator);
    session.add_participant(claude_client, SessionParticipant::Panelist);
    ...
    // send_message would then be responsible for relaying the message to each LLM and
    // collecting all of their responses before returning them to us. Alternatively, it
    // could surface responses as soon as they are available, and the response object
    // could carry an indicator telling us whether it is still waiting on participants,
    // even though some responses may already be down the pipe.

    let response = session.send_message(Role::User, user_input.to_string()).await.unwrap();
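If it helps the discussion, here is a rough, self-contained sketch of how a round-robin panel could be wired up. All of the names (PanelClient, PanelSession, SessionParticipant, reply) are hypothetical and don't exist in cloudllm today; the sketch leans on the async_trait crate so participants can be stored as trait objects:

    use async_trait::async_trait;

    // Hypothetical stand-in for whatever client abstraction cloudllm ends up exposing.
    #[async_trait]
    pub trait PanelClient: Send {
        async fn reply(&mut self, context: &str) -> String;
    }

    pub enum SessionParticipant {
        Moderator,
        Panelist,
    }

    pub struct PanelSession {
        system_prompt: String,
        participants: Vec<(Box<dyn PanelClient>, SessionParticipant)>,
    }

    impl PanelSession {
        pub fn new(system_prompt: String) -> Self {
            Self { system_prompt, participants: Vec::new() }
        }

        pub fn add_participant(&mut self, client: Box<dyn PanelClient>, role: SessionParticipant) {
            self.participants.push((client, role));
        }

        // One round-robin pass: every participant sees the system prompt, the user's
        // message, and everything earlier participants said during this round.
        pub async fn send_message(&mut self, user_input: &str) -> Vec<String> {
            let mut transcript = vec![format!("User: {}", user_input)];
            for (client, _role) in self.participants.iter_mut() {
                let context = format!("{}\n\n{}", self.system_prompt, transcript.join("\n\n"));
                transcript.push(client.reply(&context).await);
            }
            transcript
        }
    }

The streaming behavior described in the comment above could be layered on later, for example by returning a channel of responses instead of the full transcript, so callers can read replies as each participant finishes.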