Keyrxng opened 1 year ago
| Name | Link |
|---|---|
| Latest commit | 4134d3f994fdd6c7e871affbbfeeb56c22e07224 |
| Latest deploy log | https://app.netlify.com/sites/ubiquibot-staging/deploys/6539455800ea8f0008f901ab |
> I think it's lacking a lot compared directly against GPT, plus it's required us to bring another package into the build, which I know you want to avoid at all costs.
To clarify, in my opinion, Perplexity is superior as a research agent, but not necessarily as a thinking agent. It is great at retrieving information quickly from many sources, and summarizing them.
However, if asked to think through something logically, it is worse than GPT-4.
If there were a better way to leverage these capabilities ergonomically for the user, that would be ideal.
> So as it stands, we are having GPT dictate the context that gets fed into Perplexity.
This seems like a nice conclusion. Perhaps using Perplexity as the researcher and ChatGPT as the thinker?
Maybe using Perplexity for /ask adds unnecessary complexity?
> Perhaps using Perplexity as the researcher and ChatGPT as the thinker?
This would kick ass, I agree. I'm unsure how best to apply it efficiently given the context sizes; if the token limits were flipped, it would be workable. Until they release the bigger context window, I think using it adds too much complexity to /ask, where GPT-3.5 performs just fine on its own.
But I could see feeding Perplexity all of our additional context and having it research and summarize, then having GPT act on that information; that would probably be very effective.
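Something like this is the shape I have in mind (just a sketch, nothing from this PR; it assumes Perplexity's OpenAI-compatible endpoint at https://api.perplexity.ai, and the model names and helper functions are placeholders):

```typescript
import OpenAI from "openai";

// Perplexity exposes an OpenAI-compatible API, so one SDK covers both stages.
const perplexity = new OpenAI({
  apiKey: process.env.PERPLEXITY_API_KEY,
  baseURL: "https://api.perplexity.ai",
});
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Stage 1: Perplexity as the researcher, condensing our linked-issue context.
async function research(context: string, question: string): Promise<string> {
  const res = await perplexity.chat.completions.create({
    model: "pplx-70b-online", // placeholder model name
    messages: [
      { role: "system", content: "Summarize only the facts relevant to the question." },
      { role: "user", content: `Question: ${question}\n\nContext:\n${context}` },
    ],
  });
  return res.choices[0].message.content ?? "";
}

// Stage 2: GPT as the thinker, reasoning over the condensed summary.
async function think(summary: string, question: string): Promise<string> {
  const res = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      { role: "system", content: "Answer using the research summary provided." },
      { role: "user", content: `Research:\n${summary}\n\nQuestion: ${question}` },
    ],
  });
  return res.choices[0].message.content ?? "";
}
```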
Resolves #866
Quality Assurance:
So after a lot of fucking around, I managed to get our token estimates pretty dead on, but adding in the additional context from linked sources is throwing them off again.
I tried to create formats where, if we had the space, we'd select the next format up and try to consume tokens that way, but it wasn't very fruitful. So as it stands, we are having GPT dictate the context that gets fed into Perplexity.
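For context, the format-stepping idea was roughly this (a hypothetical sketch, not the actual diff; `estimate` stands in for whatever tokenizer we end up using):

```typescript
// Step up through context formats (leanest to richest) and keep the largest
// one that still fits the remaining token budget.
interface ContextFormat {
  name: string;
  body: string;
}

function pickFormat(
  formats: ContextFormat[], // ordered leanest to richest
  estimate: (text: string) => number, // whatever tokenizer we settle on
  budget: number // tokens left after the prompt and the response reserve
): ContextFormat | null {
  let best: ContextFormat | null = null;
  for (const format of formats) {
    if (estimate(format.body) > budget) break;
    best = format; // still fits, so step up to the richer format
  }
  return best;
}
```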
I think it's lacking a lot compared directly against GPT, plus it's required us to bring another package into the build, which I know you want to avoid at all costs.
If funded, I'd spend the time and hack together a TikToken-based tokenizer for us to use.
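e.g. something along these lines with the `tiktoken` npm port (rough sketch; the model names are just examples):

```typescript
import { encoding_for_model } from "tiktoken";

// Exact token counts instead of rough character-based estimates.
export function countTokens(text: string, model: "gpt-3.5-turbo" | "gpt-4" = "gpt-3.5-turbo"): number {
  const encoder = encoding_for_model(model);
  try {
    return encoder.encode(text).length;
  } finally {
    encoder.free(); // the WASM-backed encoder must be freed manually
  }
}
```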