StampyAI / stampy-nlp

NLP microservices for Stampy FAQ and AI Safety Info

Model specific microservices #6

Closed. mruwnik closed this 1 year ago.

mruwnik commented 1 year ago

The models can be accessed over HTTP; below are example curl calls:
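
A call to one of the deployed models might look roughly like this (a sketch only: the endpoint path and payload shape here are illustrative assumptions, not the service's actual API):

```sh
# Illustrative sketch: the endpoint path and payload shape are assumptions,
# not the confirmed API of the deployed service.
curl -X POST "https://<service-url>/encoding" \
  -H "Content-Type: application/json" \
  -d '{"query": "What is AI safety?"}'
```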

To deploy a microservice, run `./deploy_model.sh <service name> <huggingface model> <model type>`, e.g. `./deploy_model.sh reader-model deepset/electra-base-squad2 pipeline`

ccstan99 commented 1 year ago

Thanks! These might just be documentation details, but I want to clarify:

* reader model - returns an answer given a question & context ... **not for duplicates**. I followed the naming convention from research papers. Maybe it'll be clearer to rename this to `qa_model`, like in this Hugging Face example below from https://huggingface.co/tasks/question-answering?
  ```python
  from transformers import pipeline

  qa_model = pipeline("question-answering")
  question = "Where do I live?"
  context = "My name is Merve and I live in İstanbul."
  qa_model(question=question, context=context)
  ## {'answer': 'İstanbul', 'end': 39, 'score': 0.953, 'start': 31}
  ```
* retriever model - returns encodings given a text or a list of texts. **Additionally, it can return duplicates via `paraphrase_mining` when given a list of strings/titles/questions** (see the sketch after this list).

* literature search - returns encodings. We won't use `paraphrase_mining` here, but it should be supported by the model, so there's no harm in keeping it.
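
For reference, the duplicate detection that `paraphrase_mining` provides looks roughly like this with the sentence-transformers library (a minimal sketch; the model name and sample questions are illustrative, not the ones used in production):

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative encoder choice; any sentence-transformers model works here.
model = SentenceTransformer("all-MiniLM-L6-v2")

questions = [
    "What is AI safety?",
    "Why does AI safety matter?",
    "What is AI alignment?",
    "Why is AI safety important?",
]

# paraphrase_mining returns [score, i, j] triples sorted by descending
# cosine similarity; high-scoring pairs are likely duplicates.
for score, i, j in util.paraphrase_mining(model, questions):
    print(f"{score:.2f}  {questions[i]!r} <-> {questions[j]!r}")
```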

mruwnik commented 1 year ago

That does make things a lot clearer :D I changed the reader-model to qa-model and added a question_answering endpoint for encoders.
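
A call to the new endpoint might then look something like this (a sketch: the `question_answering` name comes from the comment above, but the full URL and payload shape are assumptions):

```sh
# Sketch: the endpoint name is from the comment above; the URL and
# payload shape are assumptions.
curl -X POST "https://<qa-model-url>/question_answering" \
  -H "Content-Type: application/json" \
  -d '{"question": "Where do I live?", "context": "My name is Merve and I live in İstanbul."}'
```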