Hi, does it require GPU acceleration, i.e. the GPU version of PyTorch? Can we develop it to use CPUs? How many cores and how much RAM are required to run it?
This project is meant to be CPU-compatible, but training a new model using only CPUs will be very slow. The model is small, so 4 GB of RAM will be sufficient.
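For the record, running on CPU is normally just a matter of mapping the PyTorch checkpoint to the CPU at load time. A minimal sketch, where the file name and checkpoint layout are illustrative rather than this repo's exact format:

```python
import torch

# Load a trained checkpoint onto the CPU even if it was saved from a GPU.
# "model.pt" and the checkpoint layout are placeholders, not this repo's exact files.
checkpoint = torch.load('model.pt', map_location=torch.device('cpu'))

# If the checkpoint stores a state_dict, restore it into your model class as usual:
# model = QAModel(**checkpoint['config'])        # hypothetical model class
# model.load_state_dict(checkpoint['state_dict'])
# model.eval()
```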
GPUs are a bit costly to run. How about 16 GB of RAM and 8 CPU cores?
It is possible to train a model that way, but it may take several days. If your budget is limited, I personally recommend renting GPU servers from Paperspace or Amazon AWS.
OK, done with Paperspace. How about developing a production-level application? Is it production ready?
I guess the model is not trained on Wikipedia? How about adding new QA data for training and a production-grade release? What formats are to be used?
The same format as SQuAD will be fine. It is not production ready, but it will be very easy to transform "interact.py" into a web service, embed the model in a larger system, and so on.
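For reference, SQuAD training data is a JSON file of articles, each with paragraphs ("context") and question/answer pairs, where each answer records its text and its character offset into the context. A minimal sketch of one record in that shape (titles, ids, and text below are placeholders for your own data):

```python
import json

# Minimal SQuAD-style record: one article, one paragraph, one QA pair.
data = {
    "version": "1.1",
    "data": [{
        "title": "Example article",
        "paragraphs": [{
            "context": "Pandas are native to south central China.",
            "qas": [{
                "id": "example-qa-1",
                "question": "Where are pandas native to?",
                "answers": [{
                    "text": "south central China",
                    "answer_start": 21,  # character offset within the context
                }],
            }],
        }],
    }],
}

with open("my_dataset.json", "w") as f:
    json.dump(data, f)
```

And as a sketch of what turning interact.py into a web service could look like, wrapping the prediction step behind one HTTP endpoint. Flask and the `model.predict` call here are my assumptions; substitute whatever interact.py actually calls:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical: load the trained model once at startup, the way interact.py does,
# and reuse whatever function it calls to produce an answer.
# model = load_model('models/best_model.pt')

@app.route('/answer', methods=['POST'])
def answer():
    payload = request.get_json()
    evidence = payload['evidence']   # the paragraph to read
    question = payload['question']
    # answer_text = model.predict(evidence, question)  # assumed prediction call
    answer_text = "..."  # placeholder until wired to the real model
    return jsonify({'answer': answer_text})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```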
What changes can be made to DrQA for production purposes? I mean, how do we transform "interact.py" into a web service or embed the model in a mobile app? Can we also add the paragraph-retriever functionality? Can it handle a large number of queries, like 1000 per second?
That's beyond the scope of this project. If your requirements are beyond what you can handle yourself, please hire someone who can do the work for you.
Any suggestions on whom to hire or where to find such resourceful people?
This largely depends on where you live. My experience only applies to talent hunting in mainland China and may not be helpful in other countries.
ok
So you used the single.mdl model? Thanks for the guidance.
What is single.mdl?
Would you like to contribute to the project? We can pay you if you want. single.mdl and multitask.mdl are the models used for predictions, I guess.
I have a full-time job and I'm afraid I won't have time for this. I'll close this issue.
Can you please tell me what we need to do in order to generate long answers?
Hi @niimi1996, there's no easy way to do this. You can add some constraints to the decoding process, for example filtering out answers of 3 words or fewer and choosing the highest-ranked one that remains, but it will hurt the quality of the answers. Training on datasets with long answers would help, but building custom datasets is costly.
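As a rough illustration of that filter, assuming the decoder can return its top candidate spans with scores (the candidates list below is a stand-in for whatever your decoding step actually produces):

```python
# Assumed shape: the decoder yields (answer_text, score) candidates, best first.
candidates = [
    ("Paris", 0.91),
    ("the capital of France, Paris", 0.47),
    ("in the north of the country", 0.22),
]

MIN_WORDS = 4  # drop answers of 3 words or fewer

long_enough = [(a, s) for a, s in candidates if len(a.split()) >= MIN_WORDS]

# Keep the highest-ranked remaining answer; fall back to the overall best
# if every candidate was too short.
best = max(long_enough, key=lambda x: x[1]) if long_enough else candidates[0]
print(best[0])
```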