Proposition
We propose to extend the Telegram question-answering FAQ bot to the whole manual, using a pipeline trained on the TF Manual and running on a GPU (via the Python transformers library).
Details
We train a question-answering ML bot with the manual as its dataset. We already have a basic FAQ bot that runs on CPU with an untrained pipeline and uses only the FAQ entries as potential answers. We can start from this, extend the answer corpus to the whole manual, move from CPU to GPU, fine-tune the pipeline on the TF Manual, and then run the bot with the trained pipeline.
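As a minimal sketch of the GPU-backed side of this, the transformers library exposes a question-answering pipeline that can be pointed at a GPU with one argument. The model name below is illustrative only; in practice it would be replaced by the checkpoint fine-tuned on the TF Manual:

```python
from transformers import pipeline

# Load a question-answering pipeline on the first GPU (device=0).
# "deepset/roberta-base-squad2" is a placeholder; the real bot would
# use the model fine-tuned on the TF Manual.
qa = pipeline(
    "question-answering",
    model="deepset/roberta-base-squad2",
    device=0,  # -1 falls back to CPU, which is what the current FAQ bot does
)

# The context would come from the manual; this snippet is illustrative.
result = qa(
    question="How do I deploy a node?",
    context="To deploy a node, flash the bootstrap image and ...",
)
print(result["answer"], result["score"])
```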
We can run the bot against mainnet, testnet, or devnet, hosted on a dedicated node with a GPU. A Titan-spec node with a decent GPU should suffice and be cost-effective as a dedicated node.
The bot should return each answer together with related URLs from the manual, as sketched below.
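One way to attach source URLs is to keep the manual chunked per page, run the QA model over each chunk, and report the best-scoring pages alongside the answer. A rough sketch under those assumptions (the URLs, chunking, and the `qa` pipeline from the earlier snippet are all illustrative):

```python
# Hypothetical index: manual text chunks keyed by their source URL.
manual_chunks = {
    "https://manual.grid.tf/deploy": "To deploy a node, ...",
    "https://manual.grid.tf/wallet": "To create a wallet, ...",
}

def answer_with_sources(question, qa, top_k=3):
    """Run QA over every chunk; return the best answer plus related URLs."""
    scored = []
    for url, text in manual_chunks.items():
        result = qa(question=question, context=text)
        scored.append((result["score"], result["answer"], url))
    scored.sort(reverse=True)
    best_score, best_answer, _ = scored[0]
    related_urls = [url for _, _, url in scored[:top_k]]
    return best_answer, related_urls
```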
End user's POV
Use Telegram to ask anything ThreeFold-related, with answers grounded in the current documentation. Response times should feel like a flowing conversation with someone online.
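For illustration, the Telegram side could look roughly like this, assuming the python-telegram-bot library (v20+ async API) and the `answer_with_sources` helper sketched above; the token and reply wording are placeholders:

```python
from telegram import Update
from telegram.ext import Application, ContextTypes, MessageHandler, filters

async def handle_question(update: Update, context: ContextTypes.DEFAULT_TYPE):
    # Run the QA pipeline on the incoming message and reply with the
    # answer plus related manual URLs. `qa` is the pipeline loaded above.
    answer, urls = answer_with_sources(update.message.text, qa)
    await update.message.reply_text(f"{answer}\n\nSee also:\n" + "\n".join(urls))

app = Application.builder().token("TELEGRAM_BOT_TOKEN").build()
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, handle_question))
app.run_polling()
```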