Try changing the name to a new one (such as ollama2) and manually adding models.
- type: openai-compatible
  name: ollama2
  api_base: http://192.168.1.202:11434/v1
  models:
    - name: qwen2.5-coder
      max_input_tokens: 32768
      supports_function_calling: true
Each client configuration has a models field. For convenience, if you omit the models field, AIChat will use the default models defined in models.yaml.
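For example, a minimal entry like the following sketch (the local address is an assumption, taken from later in this thread) lists no models and therefore relies entirely on the ollama defaults from models.yaml:

# No models field: AIChat falls back to the default ollama models from models.yaml.
- type: openai-compatible
  name: ollama
  api_base: http://localhost:11434/v1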
platform Field

Thank you for considering my previous suggestion. To further improve flexibility while maintaining backward compatibility, I propose the following enhancement:
- Introduce an optional platform field in the configuration.
- If the platform field is specified, it explicitly references the platform type.
- If the platform field is omitted, the name field is used as the default reference for the platform.

Here's how the configuration could look under the proposed system:
- type: openai-compatible
  name: ollama
  api_base: http://localhost:11434/v1
- type: openai-compatible
  name: ollama-remote
  platform: ollama
  api_base: http://192.168.1.202:11434/v1
In this example:
- The first entry (ollama) keeps using the name field as the platform reference, maintaining backward compatibility.
- The second entry (ollama-remote) specifies the platform field, making it clear that this configuration uses the ollama platform.
- The name field can still serve as the default reference.
- The platform field allows users to make the platform reference explicit when necessary, avoiding ambiguity.
- This also matches models.yaml, where platforms are categorized explicitly.
- When platform is defined, it takes precedence and is used to identify the platform settings.
- When platform is omitted, the name field is used as a fallback.

This enhancement provides more flexibility and aligns with the intuitive understanding that the name is simply an identifier, while platform determines the underlying platform functionality.
Thank you for considering this suggestion! Let me know if you’d like further details or examples.
Introducing a new field complicates things; we will not accept the proposal.
To populate client models, AIChat first checks the models field of the configuration.
- type: openai-compatible
  name: ...
  models:
    - name: qwen2.5-coder
      max_input_tokens: 32768
      supports_function_calling: true
If nothing is found, AIChat will check the client type, so the following configurations will work.
- type: claude
  name: claude1
  api_key: ...
- type: claude
  name: claude2
  api_key: ...
The openai-compatible client is special, so we have to check the name.
- type: openai-compatible
  name: ollama
This project has been running for a long time, and only you have requested this feature. Clearly, this is an edge case. We don't think it's worth introducing a new configuration item.
To resolve the issue, we will match the name using startsWith instead of equals.
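For instance, with prefix matching, an entry whose name merely starts with ollama should pick up the built-in ollama defaults even without a models list; the name and address below are only an illustration:

# Sketch: the "ollama" prefix is enough to resolve the ollama defaults after this change.
- type: openai-compatible
  name: ollama-local
  api_base: http://localhost:11434/v1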
After testing the configuration:
- type: openai-compatible
  name: ollama2
  api_base: http://localhost:11434/v1
  models:
    - name: nomic-embed-text
      type: embedding
      max_tokens_per_chunk: 8192
      default_chunk_size: 1000
      max_batch_size: 50
The model defined in the models field is not appearing in the .model <tab> list.
Because .model only lists chat models, and nomic-embed-text (type: embedding) is an embedding model.
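So for this client to show up in .model, a chat model has to be listed alongside the embedding model, for example (the qwen2.5-coder entry is reused from earlier in this thread):

- type: openai-compatible
  name: ollama2
  api_base: http://localhost:11434/v1
  models:
    # Chat model: listed by .model
    - name: qwen2.5-coder
      max_input_tokens: 32768
      supports_function_calling: true
    # Embedding model: usable for embedding, but not shown by .model
    - name: nomic-embed-text
      type: embedding
      max_tokens_per_chunk: 8192
      default_chunk_size: 1000
      max_batch_size: 50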
Is your feature request related to a problem? Please describe.
I have configured two ollama setups in the config.yaml file with different api_base values like so:
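Two openai-compatible entries sharing the name ollama and pointing at different hosts, roughly as follows (the addresses shown here are illustrative):

# Both entries share the name "ollama"; only api_base differs.
- type: openai-compatible
  name: ollama
  api_base: http://localhost:11434/v1
- type: openai-compatible
  name: ollama
  api_base: http://192.168.1.202:11434/v1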
When using the .model <tab> command, I can see both ollama setups listed. However, when I try to use a model from the second ollama (e.g., qwen2.5-coder:14b), it always attempts to use the first ollama setup, even though I select the second ollama setup. The error I receive is:

This suggests that the system is always using the first ollama setup despite selecting the second one, likely because both setups share the same name ("ollama").

Describe the solution you'd like
I would like the ability to differentiate between multiple ollama setups by either allowing aliases or providing a way to refactor the name field. This would allow each ollama setup to be used independently without the system defaulting to the first one.

Describe alternatives you've considered
Currently, I have tried changing the name field to something unique, but this breaks the functionality because it seems like the name field is tied to the model type. I haven't found a way to alias the names without breaking the setup.

Additional context
It would be helpful if there was a way to either:
- alias the name so that each ollama setup can have a unique alias without changing the underlying type, or
- refactor the name field entirely to allow custom names for each ollama setup.

Let me know if you need further details. Thanks for considering this feature request!