Closed JulianOestreich90 closed 2 months ago
The error you're encountering is due to the model name 'alias-fast' not being recognized: it does not match any of the models listed in the system's ALL_AVAILABLE_MODELS dictionary, which includes GPT-4, GPT-3.5 Turbo, and other supported models.
To resolve this issue, ensure you use a model name that is recognized by the system, such as "gpt-4", "gpt-3.5-turbo", "text-davinci-003", or any other valid model name from the supported list. If 'alias-fast' is an alias or a custom model, you'll need to map it to a supported model name or adjust your implementation to recognize it correctly.
For more details on supported models and how to properly configure them, you might want to check the documentation or the relevant code sections in the repository, specifically around model name handling and the openai_modelname_to_contextsize function.
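To illustrate the idea (a minimal sketch, not the actual llama-index implementation): if 'alias-fast' is a custom alias, one way to make a lookup like openai_modelname_to_contextsize succeed is to resolve the alias to a supported model name first. The dictionary contents, the MODEL_ALIASES mapping, and the function name below are assumptions for illustration only.

```python
# Illustrative subset -- NOT the real ALL_AVAILABLE_MODELS dictionary.
ALL_AVAILABLE_MODELS = {
    "gpt-4": 8192,
    "gpt-3.5-turbo": 4096,
    "text-davinci-003": 4097,
}

# Hypothetical custom mapping from an alias to a supported model name.
MODEL_ALIASES = {"alias-fast": "gpt-3.5-turbo"}

def modelname_to_contextsize(model: str) -> int:
    """Resolve aliases first, then look up the context window size."""
    model = MODEL_ALIASES.get(model, model)
    if model not in ALL_AVAILABLE_MODELS:
        raise ValueError(
            f"Unknown model {model!r}; expected one of {sorted(ALL_AVAILABLE_MODELS)}"
        )
    return ALL_AVAILABLE_MODELS[model]

print(modelname_to_contextsize("alias-fast"))  # resolves via the alias -> 4096
```

Without the alias-resolution step, the same call would raise the "not recognized" error described above.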
But I want to access other models through an OpenAI-compatible API. The API I want to use is Helmholtz Blablador.
@JulianOestreich90 I would use the OpenAILike class for this:
pip install llama-index-llms-openai-like
from llama_index.llms.openai_like import OpenAILike
llm = OpenAILike(
    model="alias-fast",
    api_base="https://helmholtz-blablador.fz-juelich.de:8000/v1",
    api_key="fake",
    temperature=0.0,
    # I don't actually know if this is true or false for you -- is the messages API supported for your model?
    is_chat_model=True,
)
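For context on the is_chat_model flag in the snippet above: OpenAI-compatible servers expose a chat endpoint (/v1/chat/completions, which takes a "messages" list) and often a legacy completions endpoint (/v1/completions, which takes a "prompt"). A minimal sketch of the two request shapes (the paths and fields follow the OpenAI API convention; whether Blablador supports both endpoints is an assumption you should verify against its docs):

```python
import json

def build_request(model: str, text: str, is_chat_model: bool) -> tuple[str, dict]:
    """Build an OpenAI-style request path and JSON body.

    Chat models take a list of role/content messages; legacy
    completion models take a plain prompt string.
    """
    if is_chat_model:
        return "/v1/chat/completions", {
            "model": model,
            "messages": [{"role": "user", "content": text}],
        }
    return "/v1/completions", {"model": model, "prompt": text}

path, body = build_request("alias-fast", "Hello!", is_chat_model=True)
print(path)               # /v1/chat/completions
print(json.dumps(body))
```

If the server rejects requests to one of these endpoints for your model, flip is_chat_model accordingly.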
Question Validation
Question
I am using an LLM from an OpenAI-compatible API and I load it as a Langchain LLM:
The model works perfectly fine with
However, if I am trying to initialize a query engine from a VectorStoreIndex like this:
I am getting the error: