To do:
Testing the integration with AWS SageMaker is more important than creating the model itself, given the problems we faced in the past (see the endpoint-invocation sketch after this list)
Look at Replicate and see if it hosts any models for this
Integrated the HF model with ML Trigger Help and tested it independently, but the old code wasn't working; sync with BE on this before closing (a classification sketch follows this list)
Next steps: identify the score threshold (see the thresholding sketch below)
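
For the SageMaker item above, a minimal sketch of what the integration test could look like, assuming the model is already deployed behind a JSON-in/JSON-out endpoint; the endpoint name, region, and payload shape are all placeholders, not the real config:

```python
# Sketch of an integration test against a deployed SageMaker endpoint,
# assuming JSON in/out. "harm-detector" and the region are hypothetical.
import json

import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

def classify(text: str) -> dict:
    """Send a journal entry to the endpoint and return the parsed response."""
    response = runtime.invoke_endpoint(
        EndpointName="harm-detector",   # placeholder endpoint name
        ContentType="application/json",
        Body=json.dumps({"inputs": text}),
    )
    return json.loads(response["Body"].read())

if __name__ == "__main__":
    # Smoke test: the point is the round trip, not the prediction itself.
    print(classify("Sample journal entry to test the round trip."))
```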
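
And for the HF model / threshold items, a sketch of the local classification with a score cutoff; `unitary/toxic-bert` is a stand-in checkpoint, not necessarily what ML Trigger Help uses, and the threshold value is exactly the open question above:

```python
# Sketch of local HF classification plus a score threshold. The checkpoint
# and the 0.8 cutoff are placeholders to be replaced once we pick a threshold.
from transformers import pipeline

THRESHOLD = 0.8  # placeholder; picking the real value is the "identify threshold" task

classifier = pipeline("text-classification", model="unitary/toxic-bert")

def is_harmful(text: str) -> bool:
    result = classifier(text)[0]  # e.g. {"label": "toxic", "score": 0.97}
    return result["score"] >= THRESHOLD

print(is_harmful("I had a calm, ordinary day."))
```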
Goal: the model must be able to identify harmful sentences within a journal entry (a rough sketch of sentence-level flagging is below).
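
A rough sketch of how sentence-level flagging could work, assuming a text-classification pipeline like the one above and a naive regex split (something like nltk's `sent_tokenize` would be more robust):

```python
# Sketch: split an entry into sentences, flag the ones scoring above the
# threshold. The regex split is naive and purely illustrative.
import re

def flag_sentences(entry: str, classifier, threshold: float = 0.8) -> list[str]:
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", entry) if s.strip()]
    return [s for s in sentences if classifier(s)[0]["score"] >= threshold]
```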
Caveat: if a user does show harmful intent, gets the inbuilt response back, and then wants to continue journalling, what do we do? Do we let them continue in the same entry, or make them start a fresh one?
If the entry is flagged as harmful, show the result in a popup in the frontend (a sketch of the payload the popup could key off follows).
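
For the BE side of this, a sketch of the response payload the popup could read; FastAPI, the route, and the field names are all assumptions to agree with BE, and the classifier is stubbed out here:

```python
# Sketch of a backend response the frontend popup could key off.
# Route and field names are assumptions, not the agreed contract.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Entry(BaseModel):
    text: str

def flag_sentences(text: str) -> list[str]:
    """Stub; wire this to the classifier sketched earlier."""
    return []

@app.post("/journal/check")
def check_entry(entry: Entry) -> dict:
    # The frontend opens the popup when "harmful" is true.
    flagged = flag_sentences(entry.text)
    return {"harmful": bool(flagged), "flagged_sentences": flagged}
```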