qarol opened 1 month ago
@qarol Thank you for your PR. You'll need to upmerge it with `main`.
Use the LLM adapter in the Assistant instance whenever possible, instead of calling LLM directly
Would you please explain what the benefits of this are?
Hi @andreibondarev, I think the Assistant needs a wrapped LLM object with a consistent interface that can handle all necessary actions. I'm not sure if the Assistant should know that some actions can be performed directly on the LLM object, while others require its adapter. I think we can rely solely on the adapter. IMO these changes could bring us closer to a setup where the Assistant depends on only one object, not two - potentially simplifying custom client injection.
In the next step, I was thinking of removing all the `if`s from the Assistant that determine the type of the LLM object. Instead, we could move that logic into dedicated adapters, with each one knowing how to handle things (e.g., what to do when the instructions change).
I think it would be great to make the Assistant compatible with any LLM, not just the ones available in that gem.
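To make that concrete, here's a minimal sketch of the uniform surface an adapter could expose (the method names here are illustrative, not the gem's actual interface):

```ruby
module Langchain
  class Assistant
    module LLM
      module Adapters
        # Every adapter exposes the same surface, so the Assistant never
        # needs to type-check the underlying LLM object.
        class Base
          # Translate the Assistant's generic options into provider-specific
          # chat parameters.
          def build_chat_params(messages:, instructions:, tools:)
            raise NotImplementedError
          end

          # Each adapter decides how an instruction change maps onto its
          # provider (system message, dedicated field, etc.).
          def update_instructions(instructions)
            raise NotImplementedError
          end
        end
      end
    end
  end
end
```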
Hey @andreibondarev, I've changed the approach a little by allowing a custom LLM adapter to be injected when the Assistant is initialized. That brings more flexibility to use it with a totally custom LLM that isn't yet covered by this gem. I think it can bring some value and also doesn't introduce any breaking changes. Let me know what you think :)
@qarol Do you have a use-case where you're trying to hook an LLM that we don't yet support into the Assistant?
Hey @andreibondarev, I’m required to use a closed wrapper built on top of multiple LLMs. It includes some policies to prevent data leaks, etc. The abstraction that the Assistant provides is something I’d like to use without adding any monkey patches to the code. I hope this clarifies the goal a bit.
@andreibondarev I've rebased with the `main` branch, which simplified my PR. WDYT about adding a custom LLM adapter through the constructor?
Sorry for the delay! I'm going to take a careful look at this tomorrow!
So in order to make it work you'd need to pass both a custom `llm:` and an `llm_adapter:`, right? Anyways -- I'm all for it, I'd just like to figure out an elegant DSL for it.
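For illustration, the shape being discussed might look like this (`llm_adapter:` is the keyword proposed in the PR; `CustomLLM` and `CustomAdapter` are stand-ins):

```ruby
llm = CustomLLM.new(api_key: ENV["CUSTOM_LLM_KEY"])
adapter = CustomAdapter.new(llm: llm)

# Both objects are passed explicitly, which is the wrinkle discussed above.
assistant = Langchain::Assistant.new(llm: llm, llm_adapter: adapter)
```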
Yes, and I agree that's not fully elegant yet.

Ideally, I would like to pass just one object that exposes an interface with all the methods the Assistant requires to run. That's why I tried to wrap the `llm` with an adapter and use only that object within the Assistant, with no calls to the `llm` directly. But this approach would bring at least one unpleasant breaking change:
```ruby
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
adapter = Langchain::Assistant::LLM::Adapters::OpenAI.new(llm: llm)
assistant = Langchain::Assistant.new(llm: adapter)
```
You would have to provide a wrapped `llm` to the Assistant instead of the pure `llm`.
A different approach might be to introduce a dedicated factory for registered LLMs:
```ruby
module Langchain
  class Assistant
    def self.for(llm:)
      new(llm: LLM::Adapter.build(llm: llm))
    end

    def initialize(llm:)
      raise ArgumentError unless llm.is_a?(LLM::Adapters::Base)
      # do something
    end
  end
end
```
```ruby
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
assistant = Langchain::Assistant.for(llm: llm)

# or, for a custom LLM (wrapped in its adapter, since .new accepts only adapters):
llm = CustomLLM.new(api_key: ENV["CUSTOM_LLM_KEY"])
assistant = Langchain::Assistant.new(llm: CustomAdapter.new(llm: llm))
```
Or it can be simplified to something like this (forgive the pseudocode):
```ruby
module Langchain
  class Assistant
    def initialize(llm:)
      case llm
      when Langchain::LLM::Base
        # A known LLM: build the matching adapter automatically.
        @llm = Langchain::Assistant::LLM::Adapter.build(llm)
      when Langchain::Assistant::LLM::Adapters::Base
        # Already an adapter (possibly a custom one): use it as-is.
        @llm = llm
      else
        raise ArgumentError, "Invalid LLM"
      end
    end

    # Use only @llm everywhere
  end
end
```
```ruby
# Works when CustomLLM subclasses Langchain::LLM::Base (resolved via
# Adapter.build) or is itself an Adapters::Base subclass.
llm = CustomLLM.new(api_key: ENV["CUSTOM_LLM_KEY"])
assistant = Langchain::Assistant.new(llm: llm)
```
I think the last option would be the simplest 🤔
@andreibondarev I've implemented the latest concept. WDYT?
@qarol Hmmm... these are pretty big changes. How much flexibility do you have to override and monkey-patch classes in your own application?
Could you override this class and its `build` method to do whatever you want? https://github.com/patterns-ai-core/langchainrb/blob/main/lib/langchain/assistant/llm/adapter.rb
Also -- I think what's going to happen at some point is that these types of classes (e.g. `Langchain::LLM::OpenAI` and `Langchain::Assistant::LLM::Adapters::OpenAI`) are going to be combined, because there's overlap between them.
@andreibondarev thanks for your feedback. Opening up the adapter's factory is an option too. I have something like this:
```ruby
module Langchain
  class Assistant
    module LLM
      class Adapter
        class << self
          # Prepend a module so the CustomLLM check runs before the gem's
          # own lookup, falling back to it via super.
          prepend(Module.new {
            def build(llm)
              if llm.is_a?(CustomLLM)
                CustomAdapter.new
              else
                super
              end
            end
          })
        end
      end
    end
  end
end
```
That builder could also become an AdapterRegistry or something. If that's too big a change, let's simplify it a little ;)
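For what it's worth, such a registry might look something like this (just a sketch; `AdapterRegistry` and its `register` API are hypothetical, not part of the gem):

```ruby
module Langchain
  class Assistant
    module LLM
      class AdapterRegistry
        @adapters = {}

        class << self
          # Map an LLM class to the adapter class that knows how to drive it.
          def register(llm_class, adapter_class)
            @adapters[llm_class] = adapter_class
          end

          # Resolve the first registered adapter matching the given LLM.
          def build(llm)
            _, adapter_class = @adapters.find { |klass, _| llm.is_a?(klass) }
            raise ArgumentError, "No adapter registered for #{llm.class}" unless adapter_class

            adapter_class.new(llm: llm)
          end
        end
      end
    end
  end
end

# Applications register their own pairs at boot time:
Langchain::Assistant::LLM::AdapterRegistry.register(CustomLLM, CustomAdapter)
```

That would keep the lookup open for extension without any prepending or monkey-patching.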