It was mentioned that the particular inappropriate clue that was noticed was some sort of sexual innuendo.
I ran all of the clues through Azure's Content Moderator API and will upload the results shortly. Do you think we should try to add a user option to include potentially offensive clues, or just go ahead and remove them for now?
Attached are two files: the list of clues that the API flagged as potentially sensitive, and the list of clues that, for whatever reason, it chose not to categorize.
From the lists, I'm removing the following clues:
There may be a few I've missed, but that should be most of them.

Attachments: review.txt, no_match.txt
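For reference, here's roughly the shape of the screening step (a minimal sketch, not the exact script I ran; the endpoint host, subscription key, and `clues.txt` input file are placeholders):

```python
# Sketch: screen each clue with the Content Moderator text API and split the
# results into flagged (review.txt) vs. uncategorized (no_match.txt).
# The endpoint host and subscription key below are placeholders.
import time
import requests

ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"
SCREEN_URL = ENDPOINT + "/contentmoderator/moderate/v1.0/ProcessText/Screen"
HEADERS = {
    "Content-Type": "text/plain",
    "Ocp-Apim-Subscription-Key": "<subscription-key>",  # placeholder
}

def screen_clue(clue: str) -> dict:
    """Send a single clue to the text-screening endpoint and return the JSON result."""
    resp = requests.post(
        SCREEN_URL,
        params={"classify": "True", "language": "eng"},
        headers=HEADERS,
        data=clue.encode("utf-8"),
    )
    resp.raise_for_status()
    return resp.json()

flagged, no_match = [], []
with open("clues.txt") as f:  # hypothetical input file, one clue per line
    clues = [line.strip() for line in f if line.strip()]

for clue in clues:
    result = screen_clue(clue)
    classification = result.get("Classification")
    if classification is None:
        no_match.append(clue)  # API declined to categorize -> no_match.txt
    elif classification.get("ReviewRecommended"):
        flagged.append(clue)   # potentially sensitive -> review.txt
    time.sleep(1)              # stay under the free-tier rate limit

with open("review.txt", "w") as f:
    f.write("\n".join(flagged))
with open("no_match.txt", "w") as f:
    f.write("\n".join(no_match))
```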
Once we're ready to really make this live, we may need to come up with a more robust source of clues. Right now, they just come from various sources I've scraped, so the quality and relevance vary widely.
Ack, I did not see your comment from 17 days ago, sorry! Yes, definitely, for the user feedback option. I'll build that into the designs.
If I ever don't respond to something on GitHub, just text me - I probably just didn't see the email notification GitHub sent me.
ngl, some of these flagged phrases are great - for fun, we could have an inappropriate version and a family-friendly/SFW version
@JocelynYH What sort of criteria should we be using to determine if a clue is appropriate?
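One possible starting point (just a sketch, assuming we keep the `Classification` block the moderation API returned alongside each clue; the 0.5 threshold is a placeholder, not a tuned value):

```python
from typing import Optional

# Sketch of one possible appropriateness check, assuming each clue is stored
# with the "Classification" block the moderation API returned for it.
# The threshold is a placeholder to tune, not a recommended value.
def is_family_friendly(classification: Optional[dict],
                       threshold: float = 0.5) -> bool:
    """Return True if the clue can go in the family-friendly/SFW pool."""
    if classification is None:
        # The API declined to categorize it (the no_match.txt cases);
        # treat as clean until someone reviews it manually.
        return True
    if classification.get("ReviewRecommended"):
        return False
    # Category1 ~ sexually explicit, Category2 ~ sexually suggestive,
    # Category3 ~ offensive language.
    return all(
        classification.get(cat, {}).get("Score", 0.0) < threshold
        for cat in ("Category1", "Category2", "Category3")
    )
```

If we do end up with the SFW vs. uncensored split, the same check could decide which pool a clue goes into rather than deleting it outright.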