-
### Deep Learning Simplified Repository (Proposing new issue)
:red_circle: **Project Name** : Hate Speech Detection
:red_circle: **Aim** : The aim of the project is to develop a Natural Language P…
-
# Task Name
Hate speech detection
## Task Objective
Hate speech is one of the most pressing challenges afflicting online social media today. This task aims to classify speech into "hate"…
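As a concrete starting point, here is a minimal baseline sketch for this kind of classification, assuming a toy three-way label set ("hate", "offensive", "neither"), a tiny made-up corpus, and scikit-learn; the actual task may use different labels, data, and models.

```python
# Minimal hate-speech classification baseline (sketch, not the project's model).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy examples only, standing in for a real labelled dataset.
texts = [
    "I can't stand people from that group",
    "what a lovely day at the park",
    "you are such an idiot",
]
labels = ["hate", "neither", "offensive"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),      # unigram + bigram features
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict(["have a nice day"]))
```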
-
A couple of other master's degree students and I are working on a project that uses Errudite to improve a hate speech classifier.
After reviewing the code, we assume our first step is that …
-
Hi,
Thank you for creating this dataset and the models. I see that pickled files are referenced in all of the feature extraction files and the model training files. However, these pickled files n…
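In the meantime, here is a rough sketch of how such pickled artifacts are typically produced and reloaded, with hypothetical file names and a hypothetical TF-IDF feature extractor rather than the repository's actual pipeline:

```python
# Regenerate and reload a pickled feature extractor (illustrative sketch only).
import pickle
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["example tweet one", "example tweet two"]   # placeholder data
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(corpus)

with open("tfidf_vectorizer.pkl", "wb") as f:         # hypothetical file name
    pickle.dump(vectorizer, f)

with open("tfidf_vectorizer.pkl", "rb") as f:
    vectorizer = pickle.load(f)                        # reload as a training script would
```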
-
If I'm understanding this correctly, nostr-filter seems to be having trouble communicating with strfry. I'm getting `Error: connect ECONNREFUSED 127.0.0.1:7777` in nf.log when it first starts. In my…
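For reference, a quick way to sanity-check that something is actually listening on 127.0.0.1:7777 before nostr-filter tries to connect (a generic sketch, not part of either project):

```python
# Check whether a TCP listener is reachable on 127.0.0.1:7777.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(2)
    result = s.connect_ex(("127.0.0.1", 7777))

# connect_ex returns 0 on success; a refused connection gives a non-zero errno
# (ECONNREFUSED), which would match the error seen in nf.log.
print("port open" if result == 0 else f"connection failed, errno={result}")
```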
-
The saved model3 is not loading in the file, and it gives the error below:
```
Traceback (most recent call last):
  File "app.py", line 28, in <module>
    model = tf.keras.models.load_model("C:\\Users\Dell\\Downl…
```
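One hedged guess at the cause: the path in the traceback mixes escaped and unescaped backslashes (`\\Users` vs `\Dell`), which Windows paths in Python string literals are prone to. A sketch of the usual fix, with an illustrative path rather than the actual one from the report:

```python
# Use a raw string (or forward slashes) so backslashes are not treated as escapes.
import tensorflow as tf

model = tf.keras.models.load_model(r"C:\Users\Dell\Downloads\model3.h5")  # illustrative path
# or equivalently:
# model = tf.keras.models.load_model("C:/Users/Dell/Downloads/model3.h5")
```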
-
## In a word
A study pointing out that the datasets used to train hate speech detection themselves contain bias. Specifically, when a post contains the dialect spoken by Black people (African American English), it is often judged as hate regardless of its content (a false-positive rate of 46% when AAE is present, versus the usual 9%).
![image](https://user-images.githubuserco…
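A minimal sketch of the kind of per-group measurement the study reports, i.e. computing the false-positive rate of a hate-speech classifier separately for AAE and non-AAE posts; the data below is made up for illustration:

```python
# Per-group false-positive rate on posts whose true label is "not hate".
import pandas as pd

df = pd.DataFrame({
    "group":     ["AAE", "AAE", "AAE", "other", "other", "other"],
    "label":     [0, 0, 0, 0, 0, 0],     # 0 = not hate (ground truth)
    "predicted": [1, 1, 0, 0, 1, 0],     # classifier output
})

# FPR = false positives / all true negatives, computed per group.
not_hate = df[df["label"] == 0]
fpr = not_hate.groupby("group")["predicted"].mean()
print(fpr)  # a much higher FPR for one group would indicate the reported bias
```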
-
Bad documentation; the error messages are not very descriptive.
Detecting toxicity in outputs generated by Large Language Models (LLMs) is crucial for ensuring that these models produce safe, respectful, and appropriate con…
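As a rough illustration, toxicity in generated text can be scored with an off-the-shelf classifier; the model name `unitary/toxic-bert` below is one publicly available option and is an assumption here, not something specified in this issue:

```python
# Score LLM outputs with a pretrained toxicity classifier (sketch).
from transformers import pipeline

toxicity = pipeline("text-classification", model="unitary/toxic-bert")

llm_outputs = [
    "Thanks for your question, here is a polite answer.",
    "You are worthless and nobody likes you.",
]
for text in llm_outputs:
    result = toxicity(text)[0]
    print(text[:40], "->", result["label"], round(result["score"], 3))
```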
-
- Abstract (2-3 lines)
The paper covers the concepts, characteristics, and study techniques of hate speech on social media, particularly on Twitter, along with prevalent research and future scenarios.
- Brief Description and C…