zenoverflow / omnichain

Efficient visual programming for AI language models
https://omnichain.zenoverflow.com
MIT License
297 stars 24 forks

Add Support for LM Studio Server Or Add Native Support for llama.cpp #7

Closed Iory1998 closed 4 months ago

Iory1998 commented 4 months ago

Hi,

Thank you very much for this promising project.

I would like to learn and use Omnichain because it's something I've been advocating for quite some time now (you can refer to this post: https://www.reddit.com/r/LocalLLaMA/comments/1d266pa/comfyui_for_llms_making_the_case_for_a_universal/)

However, I don't like using Ollama. Ideally, Omnichain should ship with at least one native LLM backend like llama.cpp so it doesn't become dependent on other backends. I would really like a one-stop app that can do everything.

But I understand that you are working on integrating backends natively, so in the meantime, could you please add support for LM Studio?

Thank you very much!

Iory

zenoverflow commented 4 months ago

You can use LMStudio right now via the OpenAI nodes if you change the URL field (near the bottom). But afaik LMStudio has more settings than vanilla OpenAI, so LMStudio nodes (and oobabooga nodes) are coming either way, probably at the beginning of next month.
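For reference, the same trick works from any OpenAI-compatible client, not just the nodes. A minimal sketch, assuming LMStudio's default local server address (http://localhost:1234/v1) and a placeholder model name - both are illustrative, not taken from Omnichain itself:

```python
import json

def build_chat_request(base_url: str, model: str, messages: list) -> tuple:
    """Build the endpoint URL and JSON body for an OpenAI-style chat completion.

    Any backend that speaks the OpenAI protocol (LMStudio, ooba, etc.)
    accepts this same shape once base_url points at its local server.
    """
    url = base_url.rstrip("/") + "/chat/completions"
    body = json.dumps({"model": model, "messages": messages})
    return url, body

# Example: point the request at LMStudio's default local endpoint.
url, body = build_chat_request(
    "http://localhost:1234/v1",   # default LMStudio server address
    "local-model",                # placeholder; LMStudio serves whatever is loaded
    [{"role": "user", "content": "Hello!"}],
)
print(url)  # http://localhost:1234/v1/chat/completions
```

Sending `body` with any HTTP client (POST, `Content-Type: application/json`) is all the OpenAI nodes do under the hood once the URL field is changed.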

Native loaders are on the checklist for the next major update, but I need to figure out a clean way to integrate them while keeping the setup simple. I'm looking for either a new job or an investment right now, so I haven't had time to deal with that, but it's definitely in the works.

Iory1998 commented 4 months ago

Well, in that case, it's not obvious to me how. Could you please create a chain preset that we can directly use or build upon?

I wish you good luck in your job search. I hope this platform you are working on will be a testament to your talent. I think this project has a lot of value for companies. For instance, some companies get tens or even hundreds of emails every day, and some of those can be leads. Going through each email, classifying it, and then referring it to the right salesperson is a tedious task that takes time and energy. However, such a task can be fully delegated to an LLM, which can read the emails, summarize them, even rank their importance, and forward them to the right salesperson. It could even respond to the client in the meantime.

I think Omnichain can achieve this, and I hope you can create a preset for it. This is where I believe Omnichain should go: solving actual problems that improve the workplace for many businesses. Imagine Omnichain connected to a database; using the power of multi-agents, the database can be analyzed and useful data easily extracted and written into a summary or report. That is added value for many businesses.
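The triage flow described above maps naturally onto a chain of prompt nodes. As a rough sketch in plain code (the `ask_llm` helper is hypothetical, standing in for whatever chat-completion node or backend is wired up):

```python
# Hypothetical sketch of the email-triage idea: summarize, rank, route.
# ask_llm() is a stand-in for a real chat-completion call; here it just
# echoes its inputs so the flow can be followed without a backend.

def ask_llm(system: str, user: str) -> str:
    # Placeholder: in a real chain this would hit an LLM backend.
    return f"[{system[:20]}...] {user[:30]}..."

def triage_email(email_body: str) -> dict:
    """Run the three stages described above as separate LLM calls."""
    summary = ask_llm("Summarize this email in one sentence.", email_body)
    priority = ask_llm("Rate the importance from 1 to 5.", email_body)
    route = ask_llm("Route to: sales, support, or spam?", email_body)
    return {"summary": summary, "priority": priority, "route": route}

report = triage_email("Hi, can you quote 100 units of part X?")
```

Each `ask_llm` call corresponds to one model node with its own system prompt, which is exactly the multi-agent pattern Omnichain's graph editor is built around.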

I like LM Studio because it has a clean UI design. I use it daily to write stories and reports. But it doesn't have multi-agent capabilities. "Multi-Agent Workflow: Picture a small model like Phi-3-Small scouring the internet for information, another model summarizing it, and a third organizing it into a polished format. Each model is connected to a unique system prompt node, creating a seamless, multi-agent workflow."

zenoverflow commented 4 months ago

> Well, in that case, it's not obvious to me how.

It's really simple: you literally just take either of the two OpenAI completion nodes, like this one:

[screenshot: custom_baseurl_1]

And make sure your base_url is set to whatever URL LMStudio gives you. In this case I've set it to ooba's API:

[screenshot: custom_baseurl_2]

You can do it like this, or you can use the more complex "call anything" example (import examples/chains/call_json_api.json) if you want those custom sampler parameters before I'm done with the official ooba and LMStudio nodes.
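For anyone following along, the "custom sampler parameters" part boils down to adding extra fields to the JSON request body beyond what vanilla OpenAI defines. A hedged sketch; the field names below (`top_k`, `min_p`) are illustrative assumptions, since which samplers are honored depends entirely on the backend:

```python
# Sketch: merging backend-specific sampler fields into an OpenAI-style
# request body. Backends like ooba accept fields vanilla OpenAI doesn't;
# check your backend's API docs for the exact names it supports.

def with_samplers(payload: dict, samplers: dict) -> dict:
    """Return a copy of the request payload with extra sampler fields merged in."""
    merged = dict(payload)
    merged.update(samplers)
    return merged

request = with_samplers(
    {"model": "local-model", "messages": [{"role": "user", "content": "Hi"}]},
    {"temperature": 0.7, "top_k": 40, "min_p": 0.05},  # illustrative names
)
```

This is conceptually what the "call anything" chain does: it lets you build the raw JSON body yourself instead of being limited to the fields the OpenAI nodes expose.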

Iory1998 commented 4 months ago

Thank you for your reply. Well, I tried to follow your first video tutorial, but when I try to hook the OpenAITextCompletion node to the BuildMessage node, it doesn't connect.

zenoverflow commented 4 months ago

You probably didn't realize that node outputs an array of strings, because OpenAI supports multiple generations. You can get the first item like this:

[screenshot: Screenshot_20240624_204102]

zenoverflow commented 4 months ago

When in doubt, always look at the colors of the wires, and check the full docs for both nodes in the node index (book icon in the top bar). In this case, blue is an array of strings and green is a single string - you need a single string for the content input on the BuildMessage node.
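The array output mirrors the underlying API: an OpenAI-style completion response carries a list of choices, one per requested generation. A minimal illustration with a hand-written sample response (the structure follows the OpenAI chat-completion format; the content is made up):

```python
# Why the completion node outputs an array: an OpenAI-style response
# holds one "choices" entry per requested generation (the "n" parameter),
# even when n defaults to 1.
sample_response = {
    "choices": [
        {"index": 0, "message": {"role": "assistant", "content": "Hi there!"}}
    ]
}

# Taking the first item yields the single string BuildMessage expects -
# the graph equivalent of the index-zero node in the screenshot.
first_text = sample_response["choices"][0]["message"]["content"]
```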

Iory1998 commented 4 months ago

I'll try that and keep learning the platform. Kindly incorporate more examples into the platform, like in your videos, so people can get inspired.

zenoverflow commented 4 months ago

Added nodes for LMStudio. Native loaders like llama.cpp are still postponed until a major update, since they're going to require a ton of work to keep the project cross-platform and easy to install - not an easy task when you're not a systems-level dev. Nothing else to do for now, so I'm closing this issue.