matalab opened 8 months ago
How are the models listed in `def _c(self, prompt: str, model: str)` in labs.py related to the models listed on ? Is that list kept up to date, or does it need to be updated with the current Perplexity AI models?
```python
def _c(self, prompt: str, model: str) -> dict:
    assert self.finished, "already searching"
    assert model in [
        "mixtral-8x7b-instruct",
        "llava-7b-chat",
        "llama-2-70b-chat",
        "codellama-34b-instruct",
        "mistral-7b-instruct",
        "pplx-7b-chat",
        "pplx-70b-chat",
        "pplx-7b-online",
        "pplx-70b-online"
    ]
    self.finished = False
    self.history.append({"role": "user", "content": prompt, "priority": 0})
    self.ws.send("42[\"perplexity_playground\",{\"version\":\"2.1\",\"source\":\"default\",\"model\":\"" + model + "\",\"messages\":" + dumps(self.history) + "}]")
```
I have purchased a Pro subscription for Perplexity AI. Will this library automatically use the models available to Pro users, or do I need to specify them in my Python code? If the latter, how do I do that?
As a Perplexity Pro user I can choose an AI model in the account settings control panel: "Default", "Experimental", "GPT-4 Turbo", "Claude 3 Sonnet", "Claude 3 Opus", or "Mistral Large". I have found that "Claude 3 Opus" gives me the best results. Can you please clarify whether there is any relation between what I choose in the account settings and which model this Python library uses? For the official Perplexity API, for example, there is none: user settings (and the corresponding web app) have no effect on which models the API serves. Is it the same for this library or not?
Is this library using the most recent Perplexity AI models? If not, how do I specify which model(s) to use? I am asking because I get different results than when using the Perplexity AI web chat...