samuelrince closed this 20 hours ago
~I created a feat/add-perplexity branch to work on this topic.~
~In the models data file, I added the Perplexity models as if they were provided by OpenAI, in order to leverage the OpenAI tracer in its current state.~
~We'll probably need to update the tracer so it can work with dynamic provider values.~
~I flagged Perplexity's own models with model_architecture_not_released, but not the open-source ones.~
~For the 8x7B models, the parameter count is guessed to be similar to the mistral-small model.~
Edit: I deleted this branch since the code is old and I could never get an API key to actually test it. I think it's safer to start from scratch later.
@samuelrince @LucBERTON Quick question here: why do we need to modify the tracer for another endpoint? I think adding the supported models to the OpenAI provider would be enough.
Yes we can keep the OpenAI tracer, we just need to detect the right provider (looking at the endpoint). Let's not mix Perplexity models with OpenAI.
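The endpoint-based detection mentioned above could be sketched like this. The hostname mapping and provider names are illustrative assumptions, not the project's actual implementation:

```python
from urllib.parse import urlparse

# Hypothetical mapping from API hostnames to provider names; the actual
# tracer may track providers differently.
_PROVIDER_BY_HOST = {
    "api.openai.com": "openai",
    "api.perplexity.ai": "perplexity",
}

def detect_provider(base_url: str) -> str:
    """Guess the provider from the client's base URL, defaulting to 'openai'."""
    host = urlparse(base_url).hostname or ""
    return _PROVIDER_BY_HOST.get(host, "openai")
```

This would let the existing OpenAI tracer tag impacts with the right provider without mixing Perplexity models into the OpenAI model list.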
PS: I would not implement it for now unless it is asked by one of our users :)
Description
Add perplexity.ai LLM provider.
Solution
Perplexity uses the same API as OpenAI, meaning the OpenAI Python client is compatible with their service; only the API endpoint needs to change.
Client example: https://docs.perplexity.ai/docs/getting-started
Supported models: https://docs.perplexity.ai/docs/model-cards
We need to identify when another provider is used with the OpenAI client, and we need to support and register the models that Perplexity provides through its API.