ZhuiyiTechnology / WoBERT
Chinese BERT with words as the basic unit
Apache License 2.0 · 458 stars · 70 forks
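WoBERT treats words, not single characters, as the basic modelling unit. A minimal sketch of the usual way this is set up with bert4keras and jieba (the paths below are placeholders and the repo's own test scripts may load things slightly differently):

```python
# Minimal sketch: word-level tokenization for WoBERT via bert4keras + jieba.
# The three paths are placeholders, not files taken from this issue listing.
import jieba
from bert4keras.models import build_transformer_model
from bert4keras.tokenizers import Tokenizer

config_path = 'chinese_wobert_L-12_H-768_A-12/bert_config.json'     # placeholder
checkpoint_path = 'chinese_wobert_L-12_H-768_A-12/bert_model.ckpt'  # placeholder
dict_path = 'chinese_wobert_L-12_H-768_A-12/vocab.txt'              # placeholder

jieba.initialize()

# pre_tokenize segments the sentence into words first, so entries in the
# word-level vocab are matched as whole words instead of single characters.
tokenizer = Tokenizer(
    dict_path,
    do_lower_case=True,
    pre_tokenize=lambda s: jieba.cut(s, HMM=False),
)
model = build_transformer_model(config_path, checkpoint_path)

print(tokenizer.tokenize(u'今天天气不错'))
# expected: word-level pieces such as ['[CLS]', '今天', '天气', '不错', '[SEP]']
```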
Issues (newest first)
#20 How to generate word embeddings · zhaojianhui-zjh · opened 2 years ago · 0 comments
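For #20 above (which received no reply): a hedged sketch of one common approach, reading the word's contextual vector off the encoder output at that word's position. It reuses the `tokenizer` and `model` from the loading sketch near the top and is not an answer from the maintainers.

```python
# Hedged sketch: take a word's contextual embedding from the encoder output.
# Assumes `tokenizer` and `model` from the loading sketch above.
import numpy as np

text = u'今天天气不错'
token_ids, segment_ids = tokenizer.encode(text)
hidden = model.predict([np.array([token_ids]), np.array([segment_ids])])[0]

# Rows of `hidden` line up with the tokens, so pick the row of the word you want
# (this assumes '天气' is in the word-level vocab and was kept as one token).
tokens = tokenizer.tokenize(text)
word_vec = hidden[tokens.index(u'天气')]
print(word_vec.shape)  # (768,) for the base model
```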
#19 tokenizer.tokenize word segmentation issue · js19950726 · opened 2 years ago · 0 comments
#18 vocab issue · cxj01 · opened 2 years ago · 0 comments
#17 CLS issue · ysyllrt · opened 2 years ago · 0 comments
#16 Model loading · JohnnyYuan93 · closed 2 years ago · 0 comments
#15 WoBERT+ vocabulary sorted by word frequency · Changyu-Guo · opened 2 years ago · 0 comments
#14 About pre-training · ZhangHaojie077 · opened 3 years ago · 0 comments
#13 How was the model saved as ckpt files? · sssdjj · opened 3 years ago · 0 comments
#12 When converting to a torch model (exporting a ckpt first), do I need to export vocab.txt myself and modify bert_config.json? · baiziyuandyufei · closed 3 years ago · 0 comments
#11 Question about MLM prediction of candidate words for [MASK] in a sentence · ouwenjie03 · opened 3 years ago · 1 comment
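For #11 above: a hedged sketch of how MLM candidates for a masked word are typically probed with bert4keras, reusing the placeholder paths and `tokenizer` from the loading sketch at the top; the thread itself may describe a different setup.

```python
# Hedged sketch: rank word-level candidates for a masked position via the MLM head.
# Assumes `tokenizer`, `config_path`, `checkpoint_path` from the loading sketch above.
import numpy as np
from bert4keras.models import build_transformer_model

mlm_model = build_transformer_model(config_path, checkpoint_path, with_mlm=True)

text = u'今天天气不错'
token_ids, segment_ids = tokenizer.encode(text)
token_ids[2] = tokenizer._token_mask_id  # mask the word at position 2 ('天气' here)

probs = mlm_model.predict([np.array([token_ids]), np.array([segment_ids])])[0]

# Top-5 candidate ids for the masked slot, decoded back through the word-level vocab.
top5 = probs[2].argsort()[-5:][::-1]
print([tokenizer.decode([i]) for i in top5])
```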
#10 Dataset · LOST-Atlantis · closed 3 years ago · 2 comments
#9 WoBERT+ model cannot be loaded · WENGSYX · opened 3 years ago · 1 comment
#8 About unilm text generation · thinkingmanyangyang · opened 3 years ago · 1 comment
#7 Error after replacing wobert with wonezha in test/csl.py · empty-id · opened 3 years ago · 2 comments
#6 Where is the code for the step that builds word embeddings from character embeddings? · svjack · opened 3 years ago · 1 comment
#5 CUDA error: device-side assert triggered · JeremySun1224 · closed 3 years ago · 0 comments
#4 Is there a PyTorch version? Thanks · Fourha · opened 4 years ago · 1 comment
#3 How did you continue pre-training on RoBERTa-wwm-ext? · yangzhch6 · closed 1 year ago · 1 comment
#2 How to add to or modify the vocabulary vocab.txt? · Crescentz · opened 4 years ago · 3 comments
#1 Does the wobert vocabulary have [unused] tokens like bert's? · moon290 · opened 4 years ago · 1 comment