ItzCrazyKns / Perplexica

Perplexica is an AI-powered search engine. It is an open-source alternative to Perplexity AI.
MIT License

Token usage in API #398

Closed Melodeiro closed 1 month ago

Melodeiro commented 1 month ago

I would be very grateful if you added token usage information to API responses, to give a hint about request costs.

ItzCrazyKns commented 1 month ago

We have multiple different providers, and it's not easy to implement pricing for all of them. Perplexica is intended to be used with local providers like Ollama, vLLM, and Llama.cpp, which only use your own compute power.

Melodeiro commented 1 month ago

I misled you with the word "cost"; I just meant a token count, of course.

Melodeiro commented 1 month ago

@ItzCrazyKns could you please reopen this if my clarification changed your mind? I just meant forwarding the token usage that you get from LangChain: https://js.langchain.com/v0.1/docs/modules/model_io/chat/token_usage_tracking/
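For illustration, here is a minimal sketch of what the request is asking for: passing LangChain-style token usage through to an API response when the provider reports it, and omitting it otherwise. This is not Perplexica's actual code; the `ChatResult` and `ApiResponse` shapes and the `toApiResponse` helper are hypothetical, though the `tokenUsage` field names (`promptTokens`, `completionTokens`, `totalTokens`) follow LangChain JS conventions.

```typescript
// Shape LangChain JS attaches to chat results for providers that
// report usage (e.g. OpenAI-compatible backends).
interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
}

// Hypothetical provider result: `response_metadata` may or may not
// carry a `tokenUsage` entry, depending on the backend.
interface ChatResult {
  content: string;
  response_metadata?: { tokenUsage?: TokenUsage };
}

// Hypothetical API payload: `usage` is optional, so providers that
// don't report counts simply omit the field.
interface ApiResponse {
  message: string;
  usage?: TokenUsage;
}

function toApiResponse(result: ChatResult): ApiResponse {
  const usage = result.response_metadata?.tokenUsage;
  // Only attach `usage` when the provider actually reported it.
  return usage !== undefined
    ? { message: result.content, usage }
    : { message: result.content };
}

// A provider that reports usage keeps the counts:
const withUsage = toApiResponse({
  content: "hello",
  response_metadata: {
    tokenUsage: { promptTokens: 5, completionTokens: 2, totalTokens: 7 },
  },
});

// A provider that doesn't report usage just yields the message:
const withoutUsage = toApiResponse({ content: "hello" });
```

Making `usage` optional sidesteps the inconsistency concern: clients that care about token counts check for the field, and backends that never report them stay unaffected.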

ItzCrazyKns commented 1 month ago

Only some of the model providers return token counts; others don't, so the behavior would be inconsistent. Additionally, Perplexica is meant to be used with local models via Ollama, vLLM, etc., so you don't need to worry about token usage there.

Melodeiro commented 1 month ago

That's sad, because this project is useful with online models as well.