SoulScribe-AI / ml


Harmful Thought Detector #6

Open Jaishreebala opened 10 months ago

Jaishreebala commented 10 months ago

Goal: This model must be able to identify harmful sentences within the journal entry.

Caveat: If a user shows harmful intent and reaches the point where they receive the built-in response, but then wants to continue journalling, what do we do? Do we let them keep writing in the same entry? Maybe require a new entry or a fresh start.

If the entry is flagged as harmful, show the result in a popup in the frontend.

MeaganShim commented 10 months ago

https://huggingface.co/vibhorag101/roberta-base-suicide-prediction-phr?
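A minimal sketch of trying this model locally with the Hugging Face `transformers` pipeline. The naive sentence splitting and the printed label names are assumptions; check the model card for the actual labels before relying on them:

```python
from transformers import pipeline

# Load the suicide-prediction classifier linked above.
classifier = pipeline(
    "text-classification",
    model="vibhorag101/roberta-base-suicide-prediction-phr",
)

entry = "Sample journal entry. Another sentence to check."

# Score each sentence independently so specific sentences can be flagged;
# splitting on ". " is a placeholder for a real sentence tokenizer.
for sentence in entry.split(". "):
    result = classifier(sentence)[0]  # e.g. {'label': ..., 'score': ...}
    print(sentence, result["label"], round(result["score"], 3))
```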

MeaganShim commented 10 months ago

To do:

Saarangagarwal commented 10 months ago

Testing the integration with AWS SageMaker is more important than building the model itself, given the problems we faced in the past.
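As a starting point for that integration test, a hedged sketch of calling a deployed SageMaker endpoint with boto3; the endpoint name and request/response schema below are placeholders, not our actual configuration:

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

def detect_harmful(text: str) -> dict:
    """Send a journal entry to a (hypothetical) harmful-thought endpoint."""
    response = runtime.invoke_endpoint(
        EndpointName="harmful-thought-detector",  # placeholder endpoint name
        ContentType="application/json",
        Body=json.dumps({"inputs": text}),
    )
    # The response body is a stream; the JSON shape depends on the container.
    return json.loads(response["Body"].read())
```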

MeaganShim commented 10 months ago

Look at Replicate and see if they host any models for this.
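If a hosted classifier turns up there, invoking it would look roughly like this with the `replicate` Python client (expects `REPLICATE_API_TOKEN` in the environment); the model identifier below is purely hypothetical:

```python
import replicate

# Hypothetical model reference; replace with a real "owner/model:version"
# found on Replicate, if one exists for this task.
output = replicate.run(
    "owner/harmful-text-classifier:version-hash",
    input={"text": "Sample journal entry."},
)
print(output)
```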

Jaishreebala commented 10 months ago

Integrated the HF model with ML Trigger Help and tested it independently, but the old code wasn't working. Sync with BE on this before closing.

MeaganShim commented 10 months ago

Next steps: identify the decision threshold for flagging an entry as harmful.
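One hedged way to pick that threshold from a labeled validation set, favoring recall since a missed harmful entry is the costlier error; the data below is illustrative only:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# y_true: 1 = harmful, 0 = not harmful; y_score: model score for "harmful".
# Illustrative placeholder data; use real validation labels and scores.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.90, 0.20, 0.70, 0.95, 0.40, 0.10, 0.60, 0.30])

precision, recall, thresholds = precision_recall_curve(y_true, y_score)

# Choose the highest threshold that still keeps recall >= 0.95,
# falling back to 0.5 if no threshold meets the recall target.
candidates = [t for t, r in zip(thresholds, recall) if r >= 0.95]
threshold = max(candidates) if candidates else 0.5
print(f"chosen threshold: {threshold:.2f}")
```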