- [ ] Have a fast API that returns funny messages of different types (failures, loading, and other use cases).
- [ ] The API SHOULD check for profanity.
- [ ] The API should use OpenAI or another text-to-text model (Hugging Face GPT-J, T0pp, etc.).
- [ ] Decide the consumption model: will calling the API online be too slow for consumers, or can it be made fast enough? Otherwise, platforms can query a batch of messages at launch time and cache them in memory for later use.
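The batch-at-launch idea above could look something like the following sketch. Everything here is illustrative, not an existing API: `FunnyMessagePool`, `is_clean`, and the blocklist are hypothetical stand-ins (a real deployment would call a text-to-text model and a proper moderation/profanity service instead).

```python
# Sketch of the "query a bunch at launch time and cache in memory" approach.
# The generator callable stands in for a text->text model call (e.g. OpenAI);
# the blocklist stands in for a real profanity checker.
import random
from typing import Callable, Dict, List

# Naive placeholder blocklist; a production API would use a moderation model.
BLOCKLIST = {"damn", "heck"}

def is_clean(message: str) -> bool:
    """Reject messages containing any blocklisted word (case-insensitive)."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not (words & BLOCKLIST)

class FunnyMessagePool:
    """Pre-generates messages per type (failure, loading, ...) and serves
    them from an in-memory cache, so reads after launch are fast."""

    def __init__(self, generate: Callable[[str], str]) -> None:
        self.generate = generate          # text->text model call
        self.cache: Dict[str, List[str]] = {}

    def prefill(self, message_type: str, n: int) -> None:
        """Query a batch at launch time, keeping only clean messages."""
        pool = self.cache.setdefault(message_type, [])
        while len(pool) < n:
            candidate = self.generate(message_type)
            if is_clean(candidate):
                pool.append(candidate)

    def get(self, message_type: str) -> str:
        """In-memory read; no model call on the hot path."""
        return random.choice(self.cache[message_type])
```

With this shape, the latency question in the checklist becomes a prefill-size question: the model is only as slow as launch time allows, and `get` is always fast.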
**Is your feature request related to a problem? Please describe.**
In the past, I generated messages in an offline fashion using OpenAI Codex:
https://github.com/langa-me/langame-app/blob/main/lib/providers/funny_sentence_provider.dart
But that is obviously not dynamic.
**Describe what it takes to consider the task done**