Open Tengfei09 opened 11 months ago
I'm not sure I understand which part you want to remove. The serving script basically implements the inference methods defined in the LMServer class. If you don't want to use the HTTP server, you can easily modify llama_serve.py to call those methods directly without spinning up an HTTP server.
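A minimal sketch of that pattern, assuming hypothetical names: `LMServer` here is a stub standing in for the real class in llama_serve.py, and `generate` stands in for whatever inference method it actually exposes — check the real script for the exact signatures.

```python
class LMServer:
    """Stub standing in for the real LMServer from the serving code."""

    def __init__(self, model_name):
        # The real class would load model weights here.
        self.model_name = model_name

    def generate(self, prompt, max_tokens=16):
        # The real method would run the model; we just echo for illustration.
        return f"{self.model_name} output for: {prompt}"


def run_inference(prompts):
    # Instead of POSTing to the HTTP endpoint, construct the server object
    # and call its inference method in-process.
    server = LMServer(model_name="llama")
    return [server.generate(p) for p in prompts]


if __name__ == "__main__":
    for out in run_inference(["Hello", "World"]):
        print(out)
```

The point is only the control flow: build the object once, then call its methods from your own driver loop instead of going through the HTTP layer.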
OK, got it. Thanks for your reply.
By the way, how can I change the datatype of the whole model? As I said before, even after setting the option --dtype='fp16', I still found that some GEMM ops run in fp32.
Hi, I'm trying to use your wonderful framework for inference only. However, I'm not familiar with the serving-related settings in your code. How can I remove them, or change the code a bit?
By the way, after dumping the HLO graph, I found that the datatype is still fp32 even though I changed the datatype option.
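One common way to force a whole model to fp16 is to cast every leaf of the parameter tree before inference. In a JAX codebase this is usually `jax.tree_util.tree_map(lambda x: x.astype(jnp.float16), params)`; the plain-Python analogue below just shows the shape of that transformation, with a tag standing in for the dtype (the structure and names are illustrative, not taken from this repo).

```python
def tree_map(fn, tree):
    """Apply fn to every leaf of a nested dict/list parameter tree."""
    if isinstance(tree, dict):
        return {k: tree_map(fn, v) for k, v in tree.items()}
    if isinstance(tree, list):
        return [tree_map(fn, v) for v in tree]
    return fn(tree)


def to_fp16(leaf):
    # Stand-in for leaf.astype(jnp.float16) on a real array leaf.
    return ("fp16", leaf)


params = {"layer0": {"w": [1.0, 2.0], "b": 0.5}, "head": 3.0}
fp16_params = tree_map(to_fp16, params)
```

Note that casting the weights alone may not make every op fp16: model code often casts specific computations (e.g. softmax or loss accumulation) back to fp32 for numerical stability, and the compiler may keep some accumulations in fp32, so a few f32 GEMMs in the dumped HLO are not necessarily a bug in the dtype flag.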