Closed · lzl1456 closed this issue 1 month ago
The "CT-Transformer标点-中英文-通用-large" punctuation model does not seem to use the look-ahead window of L future tokens described in the paper. If text_lengths is passed as the per-utterance lengths, the code only applies the padding mask built from those lengths, so the san_m self-attention sees the whole sequence (global context). Is there a config/code/model release that matches the paper's setup? Also, the forward pass splits the input into chunks of 20; is 20 the optimal value?
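
To make the distinction concrete, here is a minimal PyTorch sketch (hypothetical helper names, not FunASR's actual implementation) contrasting a padding-only mask built from text_lengths with a paper-style look-ahead mask of L future tokens, plus the fixed 20-token chunking mentioned above:

```python
import torch

def padding_mask(text_lengths: torch.Tensor, max_len: int) -> torch.Tensor:
    """Mask built only from text_lengths: it zeros out padding, but every
    real token still attends to the full utterance (global context)."""
    idx = torch.arange(max_len)
    return (idx[None, :] < text_lengths[:, None]).float()  # shape (B, T)

def lookahead_mask(max_len: int, L: int) -> torch.Tensor:
    """Paper-style controllable time-delay mask: position i may attend
    only to positions j <= i + L, i.e. at most L future tokens are visible."""
    idx = torch.arange(max_len)
    return (idx[None, :] <= idx[:, None] + L).float()  # shape (T, T)

def split_chunks(tokens: list, chunk_size: int = 20) -> list:
    """Hypothetical illustration of fixed-size chunking in the forward
    pass; chunk_size=20 mirrors the default questioned above."""
    return [tokens[i:i + chunk_size] for i in range(0, len(tokens), chunk_size)]

if __name__ == "__main__":
    print(padding_mask(torch.tensor([5, 3]), max_len=5))  # padding only, full context
    print(lookahead_mask(6, L=2))  # lower triangle plus a 2-token look-ahead band
    print(split_chunks(list(range(45))))  # three chunks: 20, 20, 5
```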
I'd also like to fine-tune the punctuation model. Did you manage to implement it?
I got a rough version working, but the results are poor; there is probably something wrong with it. Better to wait for the official open-source release.