bytedance/lightseq
LightSeq: A High Performance Library for Sequence Processing and Generation
3.22k stars · 329 forks

Issues
#530 · LightSeq on GCP · mmcgra21 · opened 1 week ago · 1 comment
#529 · [FeedBack][1.0.8][com.miniclip.eightballpool]- · ap0jwd · opened 6 months ago · 4 comments
#528 · Is Baichuan2 model inference available now? · cg3dland · opened 7 months ago · 0 comments
#527 · Args format wrong · whiteSteelRain · opened 7 months ago · 0 comments
#526 · Exception: Installed CUDA version 12.3 does not match the version torch was compiled with (12.1); unable to compile cuda/cpp extensions without a matching CUDA version · skill-diver · opened 9 months ago · 0 comments
#525 · C++17 required · 2020zyc · closed 10 months ago · 1 comment
#524 · identifier "__hisnan" is undefined · jimmieliu · opened 1 year ago · 4 comments
#523 · Is it normal that A10 inference speed is lower than 2080ti? · qinbo23 · opened 1 year ago · 1 comment
#522 · Does lightseq support int8 quantization of the CLIP model? · shhn1 · opened 1 year ago · 0 comments
#521 · Can int8 be used for pre-training large models? · zhoumengbo · opened 1 year ago · 0 comments
#520 · How to resolve the xlm-roberta conversion failure · 520jefferson · opened 1 year ago · 0 comments
#519 · Question about environment · etoilestar · opened 1 year ago · 0 comments
#518 · [Question] gptj, mpt support · DongqiShen · opened 1 year ago · 0 comments
#517 · Why does even the provided example have bugs? · Moran232 · opened 1 year ago · 4 comments
#516 · Can lightseq support inference optimization for Segment Anything? · sanbuphy · opened 1 year ago · 1 comment
#515 · LLaMA inference test · HandH1998 · opened 1 year ago · 3 comments
#514 · Do you have plans to support token_type_ids? · chenchongthu · opened 1 year ago · 0 comments
#513 · Is llama inference available now? · frankxyy · opened 1 year ago · 1 comment
#512 · Does lightseq include a GEMM tuning step in the inference pipeline? · frankxyy · opened 1 year ago · 0 comments
#511 · LLaMA example result verification · chenzhengda · closed 1 year ago · 0 comments
#509 · lightseq's Transformer expects an extra layer_norm at both the encoder and decoder level · yuting-wang-1000 · opened 1 year ago · 0 comments
#508 · Usage problem with ls_torch_hf_quant_gpt2_export.py · wzh232894 · opened 1 year ago · 0 comments
#507 · Do you consider supporting the chatglm model? · Youggls · opened 1 year ago · 0 comments
#506 · How to get output scores for each output token of a LightSeq BART model during inference · quancq · opened 1 year ago · 0 comments
#505 · When will the various BLOOM model sizes (e.g. 6B) be supported? · liuzhipengchd · opened 1 year ago · 3 comments
#504 · Llama develop (speedup 2.x) · hexisyztem · closed 1 year ago · 0 comments
#503 · RuntimeError: Ninja is required to load C++ extensions, even after pip install ninja · zt991211 · opened 1 year ago · 1 comment
#502 · Can LLaMA and BLOOM inference acceleration be supported? · HuiResearch · opened 1 year ago · 4 comments
#501 · Block ngram · hexisyztem · closed 1 month ago · 2 comments
#500 · Fix precision overflow problem · hexisyztem · closed 1 year ago · 0 comments
#499 · Fix gpt · hexisyztem · closed 1 year ago · 0 comments
#498 · Gpt fix · hexisyztem · closed 1 year ago · 0 comments
#497 · Wrong encode_output_project_bias_kv_size! · JunchengYao · closed 1 year ago · 0 comments
#496 · Is there a plan to support the mT5-small model? · qibao77 · opened 1 year ago · 0 comments
#495 · minor changes · hexisyztem · closed 1 year ago · 0 comments
#494 · Generator compatible with sampling & beam search · hexisyztem · closed 1 year ago · 0 comments
#493 · Transformer debug · hexisyztem · closed 1 year ago · 0 comments
#492 · fix compile problem · hexisyztem · closed 1 year ago · 0 comments
#491 · [WIP] Fix compile · hexisyztem · closed 1 year ago · 0 comments
#490 · About inference speed compared to TRT16? [urgent] · xiao2mo · opened 1 year ago · 4 comments
#489 · launch gpt embedding with float4 · hexisyztem · closed 1 year ago · 0 comments
#488 · About ViT encoder output consistency during inference · xiao2mo · opened 1 year ago · 4 comments
#487 · [Running Fail] Assert failed at transformer_encoder_layer.py at line 342 · AlexwellChen · opened 1 year ago · 0 comments
#486 · Gpt model · hexisyztem · closed 1 year ago · 0 comments
#485 · Do the en2fr and en2de model structures differ? · MeJerry215 · opened 1 year ago · 2 comments
#484 · Is there a way to convert fairseq transformer weights to lightseq transformer weights? · MeJerry215 · opened 1 year ago · 0 comments
#483 · Question about the wmt14 en2de dataset · MeJerry215 · closed 1 year ago · 0 comments
#482 · Revert "Gpt layer develop" · hexisyztem · closed 1 year ago · 0 comments
#481 · Gpt layer develop · hexisyztem · closed 1 year ago · 0 comments
#480 · fix gpt attention layer compile problem · hexisyztem · closed 1 year ago · 0 comments