bytedance / lightseq

LightSeq: A High Performance Library for Sequence Processing and Generation

How to run inference with the Transformer encoder and decoder separately #310

Open 13354236170 opened 2 years ago

13354236170 commented 2 years ago

Hello, the examples show how to run inference with the standard full Transformer. How can I run inference with the Transformer encoder and decoder separately?

```python
import lightseq.inference as lsi

model = lsi.Transformer("transformer.pb", 8)
output = model.infer([[1, 2, 3], [4, 5, 6]])
```

I also found `TransformerDecoder`, which has an inference function, so presumably it can be used as follows. Could you confirm whether this usage is correct?

```python
import lightseq.inference as lsi

model = lsi.TransformerDecoder("transformer.pb", 8)
output = model.infer(decoder_input, decoder_mask)
```

However, this decoder's input arguments differ from the ones used during training (in run.py). Why is that?

```python
output = model.decoder(
    predict_tokens[:, -1:], encoder_out, encoder_padding_mask, cache
)
```
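For context on why the two signatures differ: the training-time call above decodes one step at a time, feeding only the newest token plus an explicit `cache` of earlier states, whereas an inference-side API can manage that cache internally and so exposes a simpler signature. A minimal pure-Python sketch of the step-wise pattern (all names here are hypothetical stand-ins, not the lightseq API):

```python
# Toy illustration of autoregressive decoding with an explicit cache,
# mirroring the shape of the training-time call
# model.decoder(predict_tokens[:, -1:], encoder_out, encoder_padding_mask, cache).
# The arithmetic is dummy; only the control flow matters.

def toy_decoder_step(token, encoder_out, cache):
    """Hypothetical one-step decoder: consumes the latest token and the
    cached states from earlier steps, returns the next token and new cache."""
    cache = cache + [token]                        # append this step's "state"
    next_token = (token + sum(encoder_out)) % 10   # dummy computation
    return next_token, cache

def greedy_decode(encoder_out, bos=1, max_len=5):
    """Caller-side loop: each step passes only tokens[-1] plus the cache,
    exactly the pattern of the training-time decoder call."""
    tokens, cache = [bos], []
    for _ in range(max_len):
        nxt, cache = toy_decoder_step(tokens[-1], encoder_out, cache)
        tokens.append(nxt)
    return tokens
```

An inference API that wraps this loop internally only needs the full input and a mask, which is why its signature can be shorter.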

No standalone Transformer encoder inference example is provided. Can the encoder alone be accelerated for inference?

Taka152 commented 2 years ago

`lsi.TransformerDecoder` is implemented, but `lsi.TransformerEncoder` is not. Your script is correct.

I recommend using `LSTransformerEncoderLayer` from `lightseq.training` to accelerate individual modules of your model, rather than accelerating the whole model with `lightseq.inference`, which may not support your architecture. Modules in `lightseq.training` are more flexible.
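The swap suggested above can be sketched in pure Python (stand-in classes, not the lightseq API, since `LSTransformerEncoderLayer` requires a CUDA build): replace each encoder layer inside the existing model with an accelerated drop-in equivalent, keeping the outputs identical, instead of exporting the whole model for `lightseq.inference`.

```python
# Module-swap pattern: accelerate layer-by-layer rather than whole-model.
# SlowEncoderLayer / FastEncoderLayer are hypothetical stand-ins; in real
# code the fast layer would be lightseq.training.LSTransformerEncoderLayer
# initialized from the slow layer's weights.

class SlowEncoderLayer:
    def forward(self, x):
        return [v + 1 for v in x]      # stand-in for a framework layer

class FastEncoderLayer:
    def forward(self, x):
        return [v + 1 for v in x]      # same math, "fused kernels"

class Model:
    def __init__(self, num_layers):
        self.layers = [SlowEncoderLayer() for _ in range(num_layers)]

    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)
        return x

def accelerate(model):
    # Swap layers in place; the model's outputs must stay identical.
    model.layers = [FastEncoderLayer() for _ in model.layers]
    return model
```

The point of the pattern is that the rest of the model (embeddings, custom heads, unusual decoders) stays untouched, so architectures that `lightseq.inference` cannot export still benefit.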