wolfpixels opened 1 year ago
It's kind of a good idea, but how do you define "most humanlike interactions"? What benchmark and methodology would you use to rate projects?
You can make a PR to propose your rating thingy
OpenAI track
A few people trying each project and then averaging ratings on a 1-5 scale should work. I'll make a PR when I'm back tonight.
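The averaging scheme proposed here is simple enough to sketch. A minimal, illustrative version (project names and ratings are made up, not from this thread):

```python
# Each rater scores each project once on a 1-5 scale; a project's
# overall score is the mean of its ratings.
from statistics import mean

ratings = {
    "project-a": [4, 5, 3],  # hypothetical ratings from three raters
    "project-b": [2, 3, 3],
}

scores = {name: mean(vals) for name, vals in ratings.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
# 'ranked' lists project names from highest to lowest mean score.
```

With only a few raters the means are noisy, which is part of why later comments in this thread point toward larger-scale leaderboards instead.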
This is so hard to do that I don't think I'm qualified, but I did find this useful repo as a reference: https://github.com/manyoso/haltt4llm. This is a fantastic idea, and I greatly appreciate it.
Okay just take your time
hey, just jumping back here to say: I looked at how to rank these, and it seems to be a common problem for many people right now. From this video: https://www.youtube.com/watch?v=4VByC2NpV30 about Vicuna (which reportedly reaches 90% of ChatGPT quality).
With the rate of development in this field, I think it's better to let the best projects propagate by the collective word of mouth of the AI community, to save us time.
I'm happy to add projects which seem promising. Do you have any way I can send you a msg?
I did a little bit of research on this; apparently Vicuna works best so far, while for Chinese users ChatGLM seems to be the best.
Sorry, I don't have a way to send a msg. This GitHub thread is the only way of communication.
Interesting update: using GPT-4 to rate other LLMs on performance. AIs rating other AIs haha, wild.
I think doing this is easy(ish) as well, so it's a potential option for this repo. I'd also like to get updates on this project, as in hear about the development of open-source LLMs. Have you considered making a newsletter or something? Checking GitHub is quite tiresome; I'd rather get an email on updates. This repository would also be a great place to market it, and I'm confident other people would be interested in such a thing too.
I think using GPT-4 to rate other models is a good way to show people how those models differ from OpenAI's models.
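For anyone curious what "GPT-4 as judge" looks like mechanically, here is a hedged sketch: build a judging prompt for a pair of answers, then parse the scores out of the judge's reply. The prompt wording and the "x/10" reply format are assumptions for illustration, not Vicuna's exact setup, and the actual API call is deliberately left out.

```python
import re

def judge_prompt(question: str, answer_a: str, answer_b: str) -> str:
    # Hypothetical prompt template asking the judge model for two scores.
    return (
        "Rate the two assistant answers to the question below on a 1-10 "
        "scale and reply in the form 'A: x/10, B: y/10'.\n\n"
        f"Question: {question}\nAnswer A: {answer_a}\nAnswer B: {answer_b}"
    )

def parse_scores(judge_reply: str) -> tuple[int, int]:
    """Extract the two x/10 scores from the judge's reply."""
    found = re.findall(r"(\d+)\s*/\s*10", judge_reply)
    if len(found) != 2:
        raise ValueError(f"could not parse scores from: {judge_reply!r}")
    return int(found[0]), int(found[1])

# In practice, judge_prompt(...) would be sent to GPT-4 and the text of
# its reply passed through parse_scores, e.g.:
a_score, b_score = parse_scores("A: 8/10, B: 6/10")
```

Known caveats with this style of evaluation (position bias, verbosity bias) are why arena-style human voting, discussed below in this thread, caught on as a complement.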
I would recommend that anyone interested in finding the best open-source LLM with a permissive or commercial-friendly license, but lacking the time and energy to stay up to date with the latest AI news, periodically check the https://chat.lmsys.org/ website. They have deployed multiple SOTA models that you can not only try but also evaluate. Additionally, they provide a leaderboard with convincing statistics and a comprehensive list of open-source models ranked by score.
@nicognaW Ooh it has a battle mode and a leaderboard, too! https://chat.lmsys.org/?leaderboard
lmsys's Elo rating approach is interesting. See also https://gpt4all.io/index.html ("Performance Benchmarks", displayed only on desktop, not on mobile) and https://www.mosaicml.com/blog/mpt-7b (Table 1) for some comprehensive benchmarking results.
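For reference, the Elo update behind arena-style pairwise battles is a one-liner. A minimal sketch (the K-factor of 32 and starting rating of 1000 are illustrative choices, not lmsys's exact parameters):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update_elo(r_a: float, r_b: float, score_a: float,
               k: float = 32) -> tuple[float, float]:
    """score_a is 1.0 if A wins, 0.0 if A loses, 0.5 for a tie."""
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# Example: two models start at 1000 and A wins one battle.
a, b = update_elo(1000, 1000, 1.0)  # a rises, b falls by the same amount
```

Because each battle only needs a single human vote on which reply was better, this sidesteps the "define a 1-5 rubric" problem discussed earlier in the thread.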
Another evaluation with a leaderboard: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard.
BTW, given the era where open-source language models are flourishing, the list in this repository may not be up-to-date. So it's recommended also to refer to the leaderboards mentioned in this issue for the latest information.
Hey,
Would be useful to include some sort of rating to track the best or most humanlike interactions.
If you need someone to help manage, I’m down to help