LAION-AI / Open-Assistant

OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
https://open-assistant.io
Apache License 2.0
37.05k stars · 3.23k forks

Allow privileged editors to edit assistant replies, improving spelling/grammar/style #1005

Closed mashdragon closed 1 year ago

mashdragon commented 1 year ago

As with all crowdsourcing efforts, there is the problem of quality control. Usually, crowdsourced decisions are reached via consensus among many users performing the same task. In our case, some assistant replies have good content but contain language mistakes. Writing takes a lot of time and effort, so we wouldn't want users to downvote or remove these responses.

Therefore, I propose creating an editor role that is allowed to adjust assistant responses. This can be used to correct spelling and grammar as well as to create a consistent style.

Because this role is given a lot of power, it would probably have to be a privileged position, i.e. only available to a handful of users who have demonstrated their language skills and are all working to achieve the same goals when it comes to style. Editors ought not to change the meaning of the replies but are given the freedom to make corrections and reorganizations to fit the style we agree upon.

As we incorporate language models into the application, editors could also help make adjustments to good model outputs.

I think a consistent style is useful for an assistant. A carefully chosen style will give the assistant a premium and intelligent feeling.

A few ideas to consider if we decide to pursue something like this:

olliestanley commented 1 year ago

I think this is a good idea. I have seen a significant number of messages which are very good apart from having small typos or grammatical issues.

Sobsz commented 1 year ago

I've been thinking about that recently, and I came up with these potential solutions:

I would also like to note that while I support removing typos and such from otherwise professional text, I would like casual-tone responses to stay. I'm sure I'm not alone in disliking ChatGPT's overly verbose style (in Polish we call it "lanie wody", literally "pouring water", i.e. padding), and I'd hate for that to be the only tone OA can reliably produce just because everyone already knows it (a Schelling point). As I concluded in my issue about customizable styles, a few-shot (or soft) prompt could/should suffice for ensuring consistency without removing variety.

olliestanley commented 1 year ago

I think it would be pretty computationally expensive (and with no guarantee of effectiveness) to run an LM on so many messages to try to autocorrect errors. Allowing anyone to edit and create copies is interesting, but I'm not sure how well we could avoid flooding the dataset with many near-identical edited messages.

mashdragon commented 1 year ago

I also like the idea of allowing anyone to create an edit. However, instead of feeding the edit back into the system, we could potentially create a new task specifically for reviewing edits. To make reviewing edits faster, we could style the comparison between the original text and the edited text as a colored inline diff. That way, reviewers could quickly compare the changes.

I think a colored diff would be necessary so volunteers aren't driven insane by comparing near-identical prompts without visual guidance.
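
A colored inline diff like this could be generated with Python's standard difflib (a minimal sketch; the ANSI colors here stand in for the HTML/CSS styling the real UI would use):

```python
# Sketch of the proposed inline diff view, using Python's standard difflib.
# ANSI colors are for illustration only; the web UI would render styled spans.
import difflib

RED = "\x1b[31m"    # deleted text (from the original)
GREEN = "\x1b[32m"  # inserted text (from the edit)
RESET = "\x1b[0m"

def inline_diff(original: str, edited: str) -> str:
    """Return a single string with deletions in red and insertions in green."""
    matcher = difflib.SequenceMatcher(None, original, edited)
    parts = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            parts.append(original[i1:i2])
        elif op == "delete":
            parts.append(f"{RED}{original[i1:i2]}{RESET}")
        elif op == "insert":
            parts.append(f"{GREEN}{edited[j1:j2]}{RESET}")
        elif op == "replace":
            parts.append(f"{RED}{original[i1:i2]}{RESET}{GREEN}{edited[j1:j2]}{RESET}")
    return "".join(parts)

print(inline_diff("Teh cat sat on teh mat.", "The cat sat on the mat."))
```

`SequenceMatcher.get_opcodes()` yields equal/delete/insert/replace spans, which map directly onto unchanged, red, and green segments for a reviewer to scan.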

lainisourgod commented 1 year ago

I like the idea of an "Editing" task better than creating a new message without the typo.

e.g. if reading and reviewing a message with one typo takes a user one minute, and we show it to three users, then creating a new message instead wastes three user-minutes on a very similar task that will have minimal marginal impact on the model.

+1 on the colored diff as well.

Asmageddon commented 1 year ago

Agreed with @mashdragon here: allow arbitrary edits as long as they do not change the meaning in major ways that would invalidate the query-reply-query-reply chain, and add a task for reviewing them. There's so much great content with absolutely awful grammar/spelling/etc., or near-correct answers that could be easily fixed without having to discard them altogether.

horribleCodes commented 1 year ago

Unless you restrict editing to unlabeled messages, there could be a mismatch between content and labels. Personally, I give messages with typos or poor grammar a lower quality score, so if someone were to edit the message I reviewed, but didn't adjust my classification, it wouldn't reflect the actual quality of the new message anymore.

mashdragon commented 1 year ago

Yes, edited messages would need to be labeled again. Good suggestion.

horribleCodes commented 1 year ago

Ideally, we could request the people who rated the previous version of the message to reassess the edited version. Some replies are not easy to verify, so if someone already went through the trouble of confirming it, we shouldn't waste everyone's time looking for new people to provide feedback.

King-Darius commented 1 year ago

> Unless you restrict editing to unlabeled messages, there could be a mismatch between content and labels. Personally, I give messages with typos or poor grammar a lower quality score, so if someone were to edit the message I reviewed, but didn't adjust my classification, it wouldn't reflect the actual quality of the new message anymore.

Yes, the corrected messages would need to go through some review process again.

King-Darius commented 1 year ago

We need to write up a detailed breakdown of how this feature should be added. Can someone write up a detailed proposal so we can begin development? This is a feature we need to add urgently so that the whole collection process doesn't take months of correcting data.

King-Darius commented 1 year ago

We need a proposal written and for a few people to take this on soon (I hope).

You might get some ideas from: https://opensource.com/article/20/3/open-source-writing-tools https://rigorousthemes.com/blog/best-open-source-grammarly-alternatives/

King-Darius commented 1 year ago
  1. The editing procedure should become a new task category. The user should be able to see the whole conversation tree at this stage, but can only edit one assistant reply per task. All ratings and labels should be reset once the user has completed this task, and the new version sent for reclassification.
  2. Spelling and grammar checking functions should also be added to the text input regardless.
  3. We also need to add a new task for checking that grammar and spelling corrections are not vandalism. I suggest showing the old version (orange background) beside the new version (green background). Three users will need to do this before the updated version is added to the database to be used.
  4. I suggest non-destructive editing via versioning of the revised assistant replies. That way we keep the old versions in case something unforeseen happens. Barring any issues, only the revised versions would then be used for training.

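
Point 4 could be modeled as an append-only list of revisions per message; a minimal sketch in Python (class and field names are illustrative, not the actual Open-Assistant schema):

```python
# Minimal sketch of non-destructive message versioning. Each edit appends a
# new revision; the original text is never overwritten. Names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Revision:
    text: str
    editor_id: str          # backend ID of the user who made the edit
    created_at: datetime
    approved: bool = False  # set True once reviewers confirm it is not vandalism

@dataclass
class Message:
    original_text: str
    revisions: list = field(default_factory=list)

    def add_revision(self, text: str, editor_id: str) -> None:
        self.revisions.append(
            Revision(text, editor_id, datetime.now(timezone.utc))
        )

    def current_text(self) -> str:
        """Latest approved revision, falling back to the original text."""
        for rev in reversed(self.revisions):
            if rev.approved:
                return rev.text
        return self.original_text
```

Training would then read `current_text()`, so unapproved or rolled-back edits never leak into the dataset while the full history stays available.
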
EternalRecursion121 commented 1 year ago

Once we have a functioning model, could it be possible to use the model itself to evaluate edits, alongside humans? This would also have the additional benefit that messages would converge towards a more consistent style.

mashdragon commented 1 year ago

> Once we have a functioning model, could it be possible to use the model itself to evaluate edits, alongside humans? This would also have the additional benefit that messages would converge towards a more consistent style.

I am not sure. I think once inference responses are included in the pipeline, we wouldn't want the model to review itself, since the value in editing would come from humans spotting its mistakes. But for simple mistakes (spelling, etc.) on responses written by humans, it might be able to fix those.

Markus40 commented 1 year ago

To limit the number of edits (and thus the subsequent relabelling work) and to avoid controversial changes, users could first be asked to mark problematic text passages and to rate markings made by other users. This can be done relatively quickly, especially for independent, simple things like spelling mistakes.

Then, in a second step, the problematic passages for which there is consensus could be improved, and the message relabelled.

The marked passages could also serve purely as orientation, making it easy to find all the errors when correcting, or to signal to the editor which sections should rather be left as they are.
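
The marking-then-consensus step could be approximated by counting votes per character position and keeping only the spans enough users agree on; a rough sketch (the span format and threshold are assumptions, not an agreed design):

```python
# Rough sketch of consensus on marked passages: each user submits (start, end)
# character spans, and we keep only positions flagged by >= min_votes users,
# merged back into contiguous spans.
from collections import Counter

def consensus_spans(markings, min_votes=2):
    """markings: one span list per user, e.g. [[(0, 3)], [(1, 4), (10, 12)]].
    Returns sorted, merged (start, end) spans flagged by >= min_votes users."""
    votes = Counter()
    for user_spans in markings:
        for start, end in user_spans:
            for pos in range(start, end):
                votes[pos] += 1
    flagged = sorted(p for p, n in votes.items() if n >= min_votes)
    spans, i = [], 0
    while i < len(flagged):
        j = i
        # Extend j while positions remain contiguous.
        while j + 1 < len(flagged) and flagged[j + 1] == flagged[j] + 1:
            j += 1
        spans.append((flagged[i], flagged[j] + 1))
        i = j + 1
    return spans
```

For example, if two of three users overlap on characters 1-2, only that region surfaces for editing; a lone user's marking is ignored.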

someone13574 commented 1 year ago

I can work on this.

olliestanley commented 1 year ago

I can work on this.

Great. I suggest we start by defining "privileged editors" as moderators.

The backend should then support a message editing endpoint which retains the message's edit history and logs the (backend) ID of the user who made the edit.
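
The core of such an endpoint might look like this, sketched as a plain function with in-memory stand-ins for the database (the actual backend's routes, tables, and field names will differ):

```python
# Sketch of the edit endpoint's core logic: apply an edit while retaining the
# previous text and logging the editing user's backend ID. In-memory dicts
# stand in for the database; all names here are illustrative.
from datetime import datetime, timezone

MESSAGES = {}       # message_id -> current text
EDIT_HISTORY = []   # append-only log of edits

def edit_message(message_id: str, new_text: str, editor_id: str) -> dict:
    """Replace a message's text, keeping the old version in the edit log."""
    if message_id not in MESSAGES:
        raise KeyError(f"unknown message: {message_id}")
    EDIT_HISTORY.append({
        "message_id": message_id,
        "previous_text": MESSAGES[message_id],  # history is never discarded
        "new_text": new_text,
        "editor_id": editor_id,                 # backend user ID, as suggested
        "edited_at": datetime.now(timezone.utc).isoformat(),
    })
    MESSAGES[message_id] = new_text
    return {"message_id": message_id, "revision": len(EDIT_HISTORY)}
```

Wrapping this in a FastAPI route with moderator-only authorization would then be a thin layer on top.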

GraemeHarris commented 1 year ago

@someone13574 Let me know if you'd need any help / want to pawn off any of the work my way too 🙇