Open mkaratayev opened 1 week ago
Hi,

Thank you for the excellent project! I have a question: is it possible to use an online inference model with your pipeline?

Thanks in advance!

Hi,

Since the online inference provider automatically applies its own chat template during inference, it cannot be used for generating the instructions (step 1). However, you can use the online inference platform to generate the responses (step 2)! You can do this by setting `api=True` and providing a Together API key in `gen_res.py`.
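For context on the suggested `api=True` path, a minimal sketch of what one step-2 request to Together's OpenAI-compatible chat endpoint might look like. The endpoint URL, model name, and payload shape here are assumptions about Together's public API, not the actual code in `gen_res.py`:

```python
import json

# Hypothetical endpoint: Together exposes an OpenAI-compatible chat API.
TOGETHER_URL = "https://api.together.xyz/v1/chat/completions"

def build_request(instruction: str, model: str, api_key: str):
    """Build (headers, body) for one response-generation call (step 2).

    The instruction produced locally in step 1 is sent as a single
    user message; the provider applies its own chat template server-side.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": instruction}],
    }
    return headers, json.dumps(payload)

# Example: model name and key are placeholders.
headers, body = build_request(
    "Explain beam search in one paragraph.",
    "meta-llama/Llama-3-8b-chat-hf",
    "YOUR_TOGETHER_API_KEY",
)
```

Sending `body` with those `headers` via any HTTP client (e.g. `requests.post`) would then return the generated response for that instruction.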