We need to provide updated guidance on which models we suggest people use within Pieces so that users have realistic expectations of our local models. Brian outlined each local model and shared his thoughts here: https://docs.google.com/spreadsheets/d/1wOZ03a-Z_x95eVdGCLTN1cFfgWHKISuGkkCbkqr2wzU/edit?usp=sharing
As for cloud models, he said: "Honestly pretty subjective on preference between cloud LLMs. They're all closed source, so it's tough to get much info on them. This leaderboard is generally respected: https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard. I use GPT-4o, Gemini 1.5 Pro, and Claude 3.5 Sonnet with Pieces (general QA, RAG, and Live Context) and I honestly can't notice much of a difference. Outside of those, the performance will likely drop off."