The resources and demos associated with the tutorial "Hate speech detection, mitigation and beyond" at ICWSM 2021 and AAAI 2022 are noted here.
Social media sites such as Twitter and Facebook have connected billions of people and given users the opportunity to share their ideas and opinions instantly. That being said, there are several ill consequences as well, such as online harassment, trolling, cyber-bullying, fake news, and hate speech. Among these, hate speech presents a unique challenge, as it is deeply ingrained in our society and is often linked with offline violence. Social media platforms rely on human moderators to identify hate speech and take the necessary action, but with the rapid increase in such content, many are turning toward automated hate speech detection and mitigation systems. This shift brings several challenges with it, and hence is an important avenue for the computational social science community to explore.
We also provide some demos for social scientists so that our open-source models can be used easily. Please provide feedback in the issues.
:rotating_light: Check the individual colab demos to learn more about how to use these tools. These models might carry potential biases, hence they should be used with appropriate caution. :rotating_light:
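
As a rough idea of what the colab demos do, here is a minimal sketch of loading one of the open-source detection models from the Hugging Face Hub with the `transformers` pipeline API. The checkpoint name below is only an example; substitute the model referenced in the demo you are following.

```python
# Minimal sketch: classify a sentence with a hate speech detection model
# hosted on the Hugging Face Hub, via the transformers text-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/dehatebert-mono-english",  # example checkpoint; see the colab demos
)

# Returns a list of dicts like [{'label': ..., 'score': ...}]
print(classifier("I really enjoyed the concert last night!"))
```

Keep in mind the caution above: scores from any such model reflect the biases of its training data, so predictions should be inspected rather than taken at face value.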