Define a "dictionary" (moderated texts), which builds a dynamic algorithm that detects comments with words considered offensive to the community
Hide offensive comments to the public, except under the Comments tab under Users#show, where the comment can be seen along with a Moderated tag
The comment's author can edit their offensive comment until no offenses are detected; the recorded offense is also removed from the backend so that no action can be taken upon it
An admin can "decline" or "confirm" these offenses —the former meaning the moderated comment will be shown again, while the latter hides the comment permanently (shadowban-like behavior)
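A minimal sketch of how these pieces could fit together, assuming hypothetical ModeratedText, Offense, and Comment models; the names, columns, and callbacks below are illustrative assumptions, not necessarily what this PR implements:

```ruby
# Illustrative sketch only: model and column names are assumptions.

class ModeratedText < ApplicationRecord
  # One row per word/phrase the community considers offensive (the "dictionary").
  has_many :offenses, dependent: :destroy
end

class Offense < ApplicationRecord
  belongs_to :comment
  belongs_to :moderated_text

  # "declined" re-publishes the comment; "confirmed" hides it permanently.
  enum :status, { pending: 0, declined: 1, confirmed: 2 }
end

class Comment < ApplicationRecord
  has_many :offenses, dependent: :destroy

  # Re-scan the body on create and on every edit, so fixing the wording
  # removes the recorded offenses from the backend.
  before_save :detect_offenses

  # Publicly visible comments are those without pending or confirmed offenses;
  # the author can still see their own moderated comments under Users#show.
  scope :publicly_visible, -> {
    where.not(id: Offense.where(status: %i[pending confirmed]).select(:comment_id))
  }

  private

  def detect_offenses
    offenses.pending.destroy_all if persisted?
    ModeratedText.find_each do |word|
      if body.match?(/\b#{Regexp.escape(word.text)}\b/i)
        offenses.build(moderated_text: word)
      end
    end
  end
end
```

Under these assumptions, the admin's "decline"/"confirm" actions would simply flip an offense's status, public comment listings would render Comment.publicly_visible, and Users#show would skip the scope for the comment's author and for admins.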
Visual Changes
Notice when a comment including an offense is created:
Comments section (with offensive comments) after reload:
Comments tab under Users#show when a comment is offensive:
View when editing an offensive comment:
Comments tab under Users#show after an offensive comment is corrected:
Comments tab under Users#show for a user other than the currently logged-in one who has offensive comments (this is also what a user would see if the comment is deemed offensive by an admin):
Admin panel for moderated words:
Admin panel for moderated words when a word has related offenses:
Admin panel for actions that can be taken upon offenses:
Admin panel when offenses have been moderated:
Comments tab under Users#show when a comment is deemed non-offensive by an admin:
Notes
This change only affects comments created after it is merged and after a "dictionary" is defined; to moderate pre-existing comments, a Rake task would be needed (see the sketch below)
This PR is aimed only at comments, so while some of this feature's logic is generic in case it is ported to other moderable resources in the future, some work (e.g. defining a Moderable concern, sketched at the end of these notes) is needed to fully abstract said behavior
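For the Rake task mentioned in the first note, a hedged sketch; the task name, file path, and the detection callback it leans on are assumptions carried over from the sketch above:

```ruby
# lib/tasks/comments.rake (hypothetical path and task name)
namespace :comments do
  desc "Re-run offense detection on comments created before this feature"
  task moderate_existing: :environment do
    Comment.find_each do |comment|
      # Re-saving triggers the before_save detection callback sketched above;
      # touch: false keeps updated_at untouched for otherwise unchanged comments.
      comment.save(touch: false)
    end
  end
end
```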
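And a possible shape for the Moderable concern mentioned in the second note; only the name comes from this description, everything else is an assumption about how the comment-specific logic could be extracted:

```ruby
# app/models/concerns/moderable.rb (hypothetical)
module Moderable
  extend ActiveSupport::Concern

  included do
    # Offense would need to become polymorphic (moderable_type/moderable_id)
    # instead of pointing at comments directly.
    has_many :offenses, as: :moderable, dependent: :destroy

    before_save :detect_offenses

    scope :publicly_visible, -> {
      where.not(
        id: Offense.where(status: %i[pending confirmed], moderable_type: name)
                   .select(:moderable_id)
      )
    }
  end

  # Each including model declares which attribute gets scanned.
  def moderable_body
    raise NotImplementedError, "#{self.class} must implement #moderable_body"
  end

  private

  def detect_offenses
    offenses.pending.destroy_all if persisted?
    ModeratedText.find_each do |word|
      if moderable_body.match?(/\b#{Regexp.escape(word.text)}\b/i)
        offenses.build(moderated_text: word)
      end
    end
  end
end
```

Comment (or any future moderable resource) would then just include Moderable and define moderable_body to return the text to scan.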