bytedance / lightseq

LightSeq: A High Performance Library for Sequence Processing and Generation

Can I use LightSeq to speed up a fairseq Transformer decoder model? #154

Open ismymajia opened 3 years ago

ismymajia commented 3 years ago

Can I use LightSeq to speed up a fairseq Transformer decoder model?

I have already exported a Transformer decoder language model trained with fairseq, and now I want to speed it up through the LightSeq C++ API. How should I do that? Is there any C++ demo? @Taka152

Taka152 commented 3 years ago

You can check this example to see how to use it in C++: https://github.com/bytedance/lightseq/blob/master/examples/inference/cpp/gptlm_example.cc.cu. If you want to serve the model on a more powerful server, check here. To use the Triton server you may need to do some exploration of your own; we haven't provided an end-to-end example for now.
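Judging from the invocation pattern that appears later in this thread, the compiled example binaries take a model weights file and a test-case file as positional arguments. Below is a minimal wrapper sketch for invoking such a binary; the binary name and all paths are assumptions, not taken from the LightSeq docs, so adjust them to your own build output and exported model package.

```shell
#!/usr/bin/env sh
# Hypothetical wrapper around a compiled LightSeq example binary.
# BIN, MODEL, and TESTCASE are placeholders; override them via the
# environment or edit the defaults to match your setup.
BIN=${BIN:-./gptlm_example.fp32}
MODEL=${MODEL:-./gpt.pb}
TESTCASE=${TESTCASE:-./test_case}

MISSING=0
for f in "$MODEL" "$TESTCASE"; do
  # A nonexistent or unreadable input file is a common cause of
  # crashes very early in an example program's startup.
  [ -r "$f" ] || { echo "missing or unreadable input: $f"; MISSING=1; }
done

if [ "$MISSING" -eq 0 ]; then
  echo "inputs present; running: $BIN $MODEL $TESTCASE"
  # "$BIN" "$MODEL" "$TESTCASE"   # uncomment on a machine with the built binary
else
  echo "fix the paths above before running $BIN"
fi
```

The point of the wrapper is simply to fail with a readable message instead of handing bad paths to a CUDA binary.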

ismymajia commented 3 years ago

This example is only code. What does the model look like? What is the difference between your model and my model? Can you give me a complete, runnable example? @Taka152

ismymajia commented 3 years ago

I tested your example, but an error occurs:

./v1.0.0_libs/transformer_example.fp32 ./v0.0.1_gptlm.pkg/gpt.pb ./v0.0.1_gptlm.pkg/test_case
Segmentation fault (core dumped)
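One thing worth double-checking in the command above: the `transformer_example.fp32` binary is being run against a model from the `gptlm` package, and a binary/model mismatch is a plausible (though unconfirmed) cause of a crash while loading weights. Below is a hedged debugging checklist as a script; the paths are copied from the command above, and the gdb/cuda-memcheck suggestions are generic CUDA debugging steps, not LightSeq-specific instructions.

```shell
#!/usr/bin/env sh
# Debugging checklist for the segfault; paths mirror the reported command.
BIN=./v1.0.0_libs/transformer_example.fp32
MODEL=./v0.0.1_gptlm.pkg/gpt.pb
TESTCASE=./v0.0.1_gptlm.pkg/test_case

CHECKS=0

# 1. Is the model file present and non-empty? A truncated or missing
#    .pb can crash the weight loader before any GPU work happens.
if [ -s "$MODEL" ]; then
  echo "model file present: $MODEL"
else
  echo "model file missing or empty: $MODEL"
fi
CHECKS=$((CHECKS + 1))

# 2. Does the binary match the model? The transformer example expects a
#    transformer .pb; a GPT-LM package likely needs the gptlm example binary.
case "$MODEL" in
  *gpt*) echo "note: model looks like GPT-LM; try the gptlm example binary instead" ;;
esac
CHECKS=$((CHECKS + 1))

# 3. Localize the crash with a backtrace (run these manually on the GPU box):
echo "try: gdb --args $BIN $MODEL $TESTCASE"
echo "try: cuda-memcheck $BIN $MODEL $TESTCASE"
CHECKS=$((CHECKS + 1))
```

A `bt` inside gdb after the crash would show whether the fault is in the proto/weight loading path or in a CUDA kernel launch, which narrows the problem down considerably.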