Closed: diessica closed this issue 5 years ago
Hi @diessica,
You're 100% right, it was left like that just for the fancy demos ^^
Tune back in on Monday and hopefully the project will be microserviced and ready for a fab ass front-end. Also, are you Elm aware?
Great that it makes sense!
Yes, I am Elm aware and would love to play with it. However, giving my two cents here: I don't think there is an explicit use case for it in this project. Plain JavaScript will do, both for ease of access (so anyone can contribute) and because it would obviously perform better for a simple extension; this is not an app, at least not yet. I would only adopt a library or framework given a reasonable problem statement, and the same goes for a microservice architecture.
Sure, I hear you; I was thinking more of the Elm architecture. You're right in terms of simplicity, and it is pretty boilerplate-y, but I am also keen to focus on maintainability as soon as possible.
The backend now returns a 1 or a 0 (1 being nasty, 0 being nice), so... go front-end loco ;)
Also, send me an email and I'll add you to the Slack group.
Now returns JSON with `score`: 0 or 1.
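For example, a client could hit the endpoint roughly like this; the host, port and request field name here are guesses for illustration, not the exact deploy setup:

```python
import requests

# Hypothetical local deployment; the URL and the "text" field are assumptions.
API_URL = "http://localhost:5000/predict"

def classify(text):
    """Send text to /predict and return the raw score (0 = nice, 1 = nasty)."""
    response = requests.post(API_URL, json={"text": text})
    response.raise_for_status()
    return response.json()["score"]

if __name__ == "__main__":
    score = classify("you are wonderful")
    # The client decides how to present the score, e.g. mapping it to a message.
    print("nasty" if score == 1 else "nice")
```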
The `/predict` endpoint returns a sentence based on the score instead of the score itself, although it is documented otherwise (https://github.com/malteserteresa/stop-it/blob/master/deploy.py#L21). I suggest that we return the score, as per the documentation; my reasoning is below.
Example:
- Current front-end (screenshot)
- Front-end after the proposed API change (screenshot)
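To make the proposal concrete, here is a minimal Flask-style sketch of what `/predict` could return under this change. It is only an illustration under my own assumptions (the request field name and the placeholder classifier are made up), not a mirror of deploy.py:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder for whatever classifier deploy.py actually loads;
# swap in the real model when wiring this up.
def predict_label(text):
    return 1 if "nasty" in text.lower() else 0

@app.route("/predict", methods=["POST"])
def predict():
    text = request.get_json()["text"]
    score = predict_label(text)  # 0 = nice, 1 = nasty
    # Return only the raw score; the front-end decides how to phrase it.
    return jsonify({"score": score})

if __name__ == "__main__":
    app.run()
```

The front-end is then free to map the score to whatever copy it wants, which is exactly the flexibility argued for below.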
I believe this leads to a more scalable and mature design for the API, providing flexibility and purity, with no language bias.
Let me know if there is something I am missing, especially context regarding the decision made for the current API design.