microsoft / TaskWeaver

A code-first agent framework for seamlessly planning and executing data analytics tasks.
https://microsoft.github.io/TaskWeaver/
MIT License
5.28k stars 665 forks

Is it possible to use ollama embedding model while using OpenAI model for agents? #351

Closed SingTeng closed 5 months ago

SingTeng commented 5 months ago

Is your feature request related to a problem? Please describe.
Is it possible to use an Ollama embedding model for plugin selection while using an OpenAI model for the agents? See my config file below:

```json
{
  "llm.api_base": "https://xxx.openai.azure.com/",
  "llm.api_key": "xxx",
  "llm.api_type": "azure",
  "llm.api_version": "2023-07-01-preview",
  "llm.model": "gpt-4",
  "llm.response_format": null,
  "llm.embedding_api_type": "ollama",
  "llm.embedding_model": "nomic-embed-text:latest",
  "code_generator.enable_auto_plugin_selection": true,
  "code_generator.auto_plugin_selection_topk": 2,
  "execution_service.kernel_mode": "local",
  "planner.prompt_compression": true,
  "code_generator.prompt_compression": true
}
```

I thought I would need to add the Ollama API endpoint to the config as well, but I can't seem to find that information.

Describe the solution you'd like
Be able to use an Ollama model for automatic plugin selection while using OpenAI models for the agents.

liqul commented 5 months ago

I didn't try it myself, but I believe the embedding model is configured separately from the model used for the agent roles, so you should be able to do what you want.

You can take a look at the file `taskweaver/llm/ollama.py` and its config class `OllamaServiceConfig`. I think you need to set at least the following key, given that you have already configured the others in your example above:

`llm.ollama.api_base = xxxx`
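Putting it together, a config that uses Azure OpenAI for the agent roles and Ollama for embeddings might look like the sketch below. Note that `http://localhost:11434` is just the default Ollama endpoint and is an assumption here; substitute the address of your own Ollama server:

```json
{
  "llm.api_base": "https://xxx.openai.azure.com/",
  "llm.api_key": "xxx",
  "llm.api_type": "azure",
  "llm.api_version": "2023-07-01-preview",
  "llm.model": "gpt-4",
  "llm.embedding_api_type": "ollama",
  "llm.embedding_model": "nomic-embed-text:latest",
  "llm.ollama.api_base": "http://localhost:11434",
  "code_generator.enable_auto_plugin_selection": true,
  "code_generator.auto_plugin_selection_topk": 2
}
```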

Please let me know if this can work.

SingTeng commented 5 months ago

Yes, it works after adding `llm.ollama.api_base = xxxx`.