Closed paulVu closed 11 months ago
The dataset isn't public yet, so I don't think this is feasible.
Seems like they released a lot of it now.
The datasets appear to be available now:
In my tests, both are much more effective than the latest models available in GPT4All, so it would be worth adding them.
I have been contributing to open-assistant for a while now, and as of yesterday they released their RLHF model, which would definitely be worth adding to gpt4all.
@salt431 Thank you very much to you and your colleagues for your awesome work on open-assistant! It is truly impressive!
IMHO, from my tests as an AI and consciousness researcher, I prefer oasst-sft-7-llama-30b-xor because it is more flexible (i.e., it can be jailbroken and play a role), which is very useful for making it specialize in a topic. In other words, it's better IMHO for prompt engineering.
On the other hand, the new RLHF model has similar capabilities, except it is much more difficult to jailbreak, which is better for most users who don't do prompt engineering.
Both models are based on sft 7 and are much better than sft 6; both would be awesome additions to GPT4ALL IMHO. They offer great conversational capabilities, with rare hallucinations and very accurate, extended outputs (similar to ChatGPT 3.5 IMHO, which matches the performance described in the paper). They truly are a major leap forward for open-source AI assistant models! Great job on this!
PS: I'm very sorry about the duplicate post. It seems GitHub is experiencing some issues right now: it posts duplicates with a huge lag, and messages cannot be deleted either. /EDIT: the issue has since been fixed, and I have now removed my duplicate reply!
Stale, please open a new issue if this is still relevant.
https://github.com/LAION-AI/Open-Assistant What do you think about this?