eidolon-ai / eidolon

The first AI Agent Server. Eidolon is a pluggable Agent SDK and enterprise-ready deployment server for agentic applications.
https://www.eidolonai.com/
Apache License 2.0

It’s me again. Your docs and errors are still meh. QuickStart still doesn’t work. #847

Closed. atljoseph closed this issue 1 month ago.

atljoseph commented 1 month ago

Hello world in the QuickStart. OllamaLLMUnit implementation.

  1. Ok, where does my Ollama server URL go? The docs page about the Ollama implementation does not mention a URL whatsoever. client_options is listed as an object, with no further documentation of the expected fields. Why should someone have to go to the code to figure out that configuration? There is a whole page for it in the documentation, and a whole spot in which the way to define the URL should live…

  2. “Ok, maybe it defaults to localhost:11434. I’ll give them another chance. Ollama is running there.” I put a random model “Nemo:4b” in as the model.name, and “forget this overcomplicated thing” as model.human_name. After restarting, there is a big ugly fatal error that says ‘llama’ is not a fully qualified class name. That’s confusing, because the word ‘llama’ doesn’t appear in the config. It’s a configuration issue, so why give this qualified-class-name junk as an error? Why not suggest the correct config? I’m a prospective casual non-Python user. There are docs. Why don’t the docs cover this in a digestible way?

  3. Still didn’t work with OpenAI. Env var is set. Says 404, can’t find the process. Well, the frontend is literally sending undefined for the process ID. Stock, off-the-shelf main branch. SMH.

This has been the 3rd or 4th attempt in the past 6 months. I know y’all spent a lot of time and energy making this. It looks promising… I just can’t get it to work. I asked questions on your Discord, and eventually left the server because y’all never responded. Is this feedback appreciated? Are you gonna overhaul the docs? Am I wasting my time? Perhaps. I’m a bit disappointed and hoping that someone will clear up the misunderstanding I have about this product. It’d help you get another follower, and it’d help me to quit counting the hours I’ve spent on this as wasted time.

flynntsang commented 1 month ago

I'm so sorry. I've been brought on to help with this specifically, but two significant family concerns have me traveling and attending to matters.

I would very much like to help you. I will be back on Tuesday. Please hang tight.

Was the Quickstart successful for any other LLM provider? I understand that Ollama is self-hosted. It should have worked with OpenAI. We tested that a whole bunch of times, but it's possible we changed something.

@abhi-vachani or @LukeLalor any ideas on why process is undefined (# 3)?

LukeLalor commented 1 month ago

@atljoseph thanks for helping us fight through the pain so we can make this a better experience. We do appreciate the feedback.

1: How are you running the server?

Are you using make docker-serve? If not, I would recommend it. Running via a Python process is helpful for building custom tools or agent templates, but isn't helpful when getting started or when defining agents via YAML files.
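For what it's worth, a quick sketch of the docker path, run from the root of your quickstart checkout:

make docker-serve        # starts the agent server and webui via docker
# if it fails, sanity-check the toolchain first:
docker --version
docker compose version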

2: Did you clone the new repo, or are you still working from the old one?

We swapped out the quickstart repo with a new one to prevent fork-of-fork weirdness in GitHub. This might cause weird behavior if you are working off a local clone or fork and just pulled rather than re-cloning the quickstart.
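If in doubt, the safest route is a fresh clone rather than pulling into the old checkout, along these lines (directory names and URL are placeholders; use the link from the Getting Started docs):

mv <old-quickstart-dir> <old-quickstart-dir>.bak   # set the old checkout aside
git clone <url-of-the-current-quickstart-repo>
cd <new-quickstart-dir> && make docker-serve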

3: What version of the SDK are you using?

We are in the midst of spending another cycle on our errors, and we cleaned up some of the LLM errors a few versions ago. I do not think Ollama was part of that pass yet, unfortunately.

cat pyproject.toml | grep eidolon-ai-sdk
eidolon-ai-sdk = "^0.1.142"

If this is particularly outdated, #2 is likely the issue.
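If it is old, bumping it should be straightforward (assuming the quickstart manages dependencies with Poetry, since the constraint above is a pyproject caret range):

poetry add eidolon-ai-sdk@latest   # pull the newest published SDK
poetry show eidolon-ai-sdk         # confirm the resolved version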

4: Could you check what version of the webui you are using and update it?

docker image ls | grep eidolonai/webui
eidolonai/webui                 latest    2b9a2489c5cf   3 days ago       350MB

I think this is likely out of date since you were working on the project a while back. I just checked our makefile target, and we don't grab the latest version of the webui when running via docker-serve. We aren't pinning this to a version in the example repos, but we should. I'll file a ticket.

docker pull eidolonai/webui:latest
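Note that pulling the image alone won't refresh an already-created container, so recreate the stack afterwards:

docker compose pull
docker compose up -d --force-recreate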

TL;DR

I think the repo, sdk, or webui image is out of date (likely the last).

I spent some time trying to reproduce this on latest main, but was unable to. I know that before we revamped the dev tool, we had that issue when creating a conversation if the server was not running (or the agent did not exist). If this is the case, pulling a new webui or starting a new conversation should fix the problem.

If the above does not help, could you upload your server / ui logs?

docker compose logs > combined_logs.txt

I have a standing spot blocked out on Tuesday @ 10am PST for office hours if you want to debug in person. We can of course also schedule something on the fly. Messaging us on Discord in the dev channel is the best way to set up an ad-hoc call.

atljoseph commented 1 month ago

I got it working. I had already pulled the latest from the git branch, but yes, there were outdated Docker artifacts.

It works reasonably well now, although the docs could have more digestible info. Diving into Python code to find out what needs to be passed as kwargs is just chasing your tail for someone who is not a Python whiz. Proper docs ensure adoption.

Initially, it would connect to ChatGPT despite the config I gave it.

This worked (through much trial and error):
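Roughly speaking, and treating this as illustrative rather than the exact file (the agent name here is made up; the field names follow the OllamaLLMUnit docs page referenced earlier):

apiVersion: server.eidolonai.com/v1alpha1
kind: Agent
metadata:
  name: hello_world
spec:
  description: "Quickstart agent pointed at a local Ollama server"
  apu:
    llm_unit:
      implementation: OllamaLLMUnit
      model:
        name: gemma2 # a model already pulled on the Ollama server
        human_name: "Gemma 2 9B"

plus OLLAMA_HOST exported in the server's environment (more on that below).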

atljoseph commented 1 month ago

If I needed multiple Ollama host environment variables, how would I declare those? How would I make an agent system where one agent is on one Ollama server and another is on a different Ollama server?

flynntsang commented 1 month ago

Hi @atljoseph, I'll try my best and rely on the more technical folks to fill in what I miss. I don't have the software running right now, and I confess I haven't tested this, so forgive me if there's a typo or error. But I'm pretty sure I can point you in the right direction.

One of the problems you're running into is the same that I had when I started: it wasn't clear how to edit the YAML files correctly. This is being addressed through issue #740 .

I would simplify things by creating one agent and making sure you're getting gemma2. Then you can create new APU references and introduce them into the agent as in this multi-chatbot example.

Save all files to your /resources directory!

apiVersion: server.eidolonai.com/v1alpha1
kind: Agent
metadata:
  name: jeff_bridges
spec:
  description: "A conversational agent for dudes."
  system_prompt: |
    You can answer any question, but speak as if you are Jeff Bridges in The Dude. 
    Use catch phrases whenever you can, such as: 
    "Like, hey Man, that rug really tied the room together!" and
    "The Dude abides."
  title_generation_mode: auto
  apu: # from https://www.eidolonai.com/docs/components/apu/overview
    implementation: llamma3-8b
    llm_unit: # from https://www.eidolonai.com/docs/components/apu/llamma3-8b#6-property-llm_unit
      implementation: OllamaLLMUnit
      model:
        name: gemma2 # from https://ollama.com/library?sort=featured
        human_name: "Gemma 2 9B"

Once you have that going, you can create reusable APU references. Personally, I would create them as separate YAML files, but you can combine them in one if you like. Just separate them with ---.

apiVersion: server.eidolonai.com/v1alpha1
kind: Reference
metadata:
  name: Gemma2
  annotations:
    - title: "Google Gemma2 9B"
spec:
  implementation: ConversationalAPU
  llm_unit:
    implementation: ToolCallLLMWrapper
    llm_unit:
      implementation: OllamaLLMUnit
      model: "gemma2"

and another...

apiVersion: server.eidolonai.com/v1alpha1
kind: Reference
metadata:
  name: "Llama3.2"
  annotations:
    - title: "Llama 3.2 1B"
spec:
  implementation: ConversationalAPU
  llm_unit:
    implementation: ToolCallLLMWrapper
    llm_unit:
      implementation: OllamaLLMUnit
      model: "llama3.2:1b"

Then you can go back to your agent and replace the apu: section with apus:


apiVersion: server.eidolonai.com/v1alpha1
kind: Agent
metadata:
  name: jeff_bridges
spec:
  description: "A conversational agent for dudes."
  system_prompt: |
    You can answer any question, but speak as if you are Jeff Bridges in The Dude. 
    Use catch phrases whenever you can, such as: 
    "Like, hey Man, that rug really tied the room together!" and
    "The Dude abides."
  title_generation_mode: auto
  apus:
    - apu: Llamma3-8b # built-in
    - apu: Gemma2 # your custom reference
    - apu: Llama3.2 # your custom reference

atljoseph commented 1 month ago

Yeah got all that thanks

atljoseph commented 1 month ago

How to point one model at one Ollama host and another at another host?

flynntsang commented 1 month ago

Re:

Initially, it would connect to ChatGPT despite the config I gave it.

OLLAMA_HOST. Why is that not plastered everywhere on the OllamaLLMUnit page????? OLLAMA_HOST=http://<machine-ip>:11434/ (localhost did not work, but the machine's IP did). I did not include /v1, but should I? Is that a thing?
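For anyone else landing here, a rough sketch of that pattern, assuming the compose setup passes the variable through to the server container (the IP is a placeholder for the machine running Ollama; localhost from inside a container usually resolves to the container itself, which is likely why the IP was needed):

export OLLAMA_HOST=http://192.168.1.50:11434
make docker-serve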

atljoseph commented 1 month ago

Docker all the way.

The first config I tried today was wrong, but it was kinda difficult to tell. It’d be helpful to put examples in the docs in a more prominent way. Feature, example, APU, example, Provider, example. Everything with an example, as if everyone reading it is a 5th grader… sometimes that’s about all the attention they can give.

atljoseph commented 1 month ago

Do y’all have a collection of what all people have made with eidolon?

flynntsang commented 1 month ago

How to point one model at one Ollama host and another at another host?

That's a darn good question. The best I can think of (I do not know if this will work) is to use https://www.eidolonai.com/docs/components/llmunit/ollamallmunit#client_options but I would think there would be something more like "server options" or "host options".

apiVersion: server.eidolonai.com/v1alpha1
kind: Reference
metadata:
  name: Gemma2
  annotations:
    - title: "Google Gemma2 9B"
spec:
  implementation: ConversationalAPU
  llm_unit:
    implementation: ToolCallLLMWrapper
    llm_unit:
      implementation: OllamaLLMUnit
      model: "gemma2"
      client_options:
        OLLAMA_HOST = "http://localhost:11434"

@dbrewster @LukeLalor please advise

flynntsang commented 1 month ago

Do y’all have a collection of what all people have made with eidolon?

We're working on it. I'm sorry I have to sign off now for a couple of days due to some family matters. I hope my colleagues can get you on the right track.

atljoseph commented 1 month ago

Thanks. It’s ok! Have a good time with the fam.

Exactly. I'm not sure what the possible client options actually are. You can put OLLAMA_HOST in there? I would have gone my entire lifetime not knowing that if you hadn't said it here. Although the equals sign looks out of place for YAML.
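If client_options is just forwarded as keyword arguments to the underlying Ollama Python client (an unverified guess), the YAML-correct version would presumably use a colon and the client's host kwarg, something like:

llm_unit:
  implementation: ToolCallLLMWrapper
  llm_unit:
    implementation: OllamaLLMUnit
    model: "gemma2"
    client_options:
      host: "http://localhost:11434" # assumes client_options maps to the ollama client's Client(host=...) kwarg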

atljoseph commented 1 month ago

Gonna count this as done, even if there is some follow up. Thank you

LukeLalor commented 1 month ago

filed #849 and #850