-
Hi, I followed the instructions [here](https://github.com/zjunlp/DeepKE/blob/main/README_CNSCHEMA.md) and finished setting up the environment for the out-of-the-box DeepKE-cnSchema release. For short sentences, where there are usually just two entities and an explicit relation word, the program predicts the relation correctly. But for slightly longer sentences with a few more entities, the relation extraction produces rather absurd results, and with fairly high confidence at that.
Here is my pr…
-
Hi, the trained model should be the bert.ckpt file, right? How do I use this file for prediction? Is it a model file saved by PyTorch?
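For reference, if bert.ckpt was written with torch.save(), it can usually be loaded for inference roughly as below. This is a sketch under that assumption; the model class and checkpoint contents here are guesses, not this repo's actual API:

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

# Assumption: the checkpoint holds a state_dict for a BERT model with a classification head.
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
model = BertForSequenceClassification.from_pretrained('bert-base-chinese', num_labels=2)

# A .ckpt produced by torch.save(model.state_dict(), ...) contains only the weights.
state_dict = torch.load('bert.ckpt', map_location='cpu')
model.load_state_dict(state_dict, strict=False)  # strict=False tolerates naming differences
model.eval()  # disable dropout for prediction

inputs = tokenizer('这是一个测试句子', return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # predicted label index
```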
-
Running the sh script always fails with an unrecognized argument, **main.py: error: unrecognized arguments: --accum-freq=1**, even though the script is identical to the example.
```
usage: main.py [-h] --train-data TRAIN_DATA [--val-data VAL_DATA] [--num-workers NUM_WORKERS] [--logs LOGS] …
```
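For what it's worth, argparse emits exactly this error whenever a flag is passed on the command line but never declared in the parser, so the installed main.py may simply be an older version that predates --accum-freq. A minimal reproduction (the parser below is hypothetical, not the project's actual code):

```python
import argparse

parser = argparse.ArgumentParser(prog='main.py')
parser.add_argument('--train-data', required=True)

# '--accum-freq' was never declared, so parsing fails with:
#   main.py: error: unrecognized arguments: --accum-freq=1
args = parser.parse_args(['--train-data', 'train.csv', '--accum-freq=1'])
```

Declaring the flag, e.g. `parser.add_argument('--accum-freq', type=int, default=1)`, or updating the checkout so main.py matches the script, should make the error disappear.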
-
```
D:\Bert-Chinese-Text-Classification-Pytorch\pytorch_pretrained\optimization.py:275: UserWarning: This overload of add_ is deprecated:
    add_(Number alpha, Tensor other)
Consider using one of the follo…
```
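This warning comes from the legacy BertAdam-style optimizer calling add_ with the scale factor in the positional alpha slot; current PyTorch wants it passed as a keyword argument. The usual one-line fix looks roughly like this (variable names are illustrative, not necessarily what line 275 of optimization.py uses):

```python
import torch

p = torch.zeros(3)
update, step_size = torch.ones(3), 0.1

# Deprecated overload that triggers the UserWarning:
#   p.add_(-step_size, update)      # add_(Number alpha, Tensor other)

# Supported overload: pass the scale factor as the keyword `alpha`.
p.add_(update, alpha=-step_size)    # add_(Tensor other, *, Number alpha)
```

The warning is non-fatal either way, so training still runs without the change.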
-
Thank you for the code. After finishing the data preprocessing I run train.py, which needs the config JSON and the pretrained model. I'm a beginner in NLP and would like to try it out; could you give me a pointer on where to get this directory?
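In case it helps, the pretrained directory for a Chinese BERT setup usually holds a config file, the PyTorch weights, and a vocab file. One way to obtain them is to download bert-base-chinese through the transformers library and save it locally (a sketch under that assumption; the exact file names train.py expects, e.g. bert_config.json vs. config.json, may differ):

```python
from transformers import BertModel, BertTokenizer

# Fetch the Chinese BERT weights, config, and vocab from the Hugging Face hub...
model = BertModel.from_pretrained('bert-base-chinese')
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')

# ...and write them into a local directory: config.json, pytorch_model.bin, vocab.txt.
model.save_pretrained('./bert_pretrain')
tokenizer.save_pretrained('./bert_pretrain')
```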
-
Set the number of output classes equal to the number of labels.
_Originally posted by @daonna in https://github.com/649453932/Bert-Chinese-Text-Classification-Pytorch/issues/17#issuecomment-569576271_
I have a similar problem here, and it mainly comes down to the class file containing 22 lines, with labels likewise numbered 0-21, labe…
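For context on the quoted advice: the output dimension is normally derived from the class list, so a 22-line class.txt with labels 0-21 should give num_classes == 22 and a matching classifier head. A minimal sketch of that wiring (the file path and hidden size are assumptions, not necessarily this repo's exact code):

```python
import torch.nn as nn

# One class name per line; label ids in the data files must then be 0..num_classes-1.
with open('THUCNews/data/class.txt', encoding='utf-8') as f:
    class_list = [line.strip() for line in f]
num_classes = len(class_list)  # 22 for a 22-line class.txt

# The classification head must produce one logit per class:
fc = nn.Linear(768, num_classes)  # 768 = BERT-base hidden size
```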
-
Using the Context-word-embedding augmenter:
English:
```python
import nlpaug.augmenter.word as naw

text = 'hi how are you'
context_aug = naw.ContextualWordEmbsAug(
    model_path='bert-base-uncased', action="substitute")
augmented_text = context_aug.augment(text)
```
-
Hi @nreimers, it's a nice repo. While reading your code in training_stsbenchmark_bilstm.py, I wanted to test the performance of BERT + BiLSTM, but there may be a bug in the LSTM. I have read all issues abo…
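Not an answer to the possible bug, but for anyone who wants to try the combination: stacking a BiLSTM on top of BERT's token embeddings can be sketched in plain PyTorch as below. This is my own illustration, not the repo's training_stsbenchmark_bilstm.py (which, as I recall, trains an LSTM over static word embeddings):

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
bert = BertModel.from_pretrained('bert-base-uncased')

# A bidirectional LSTM over BERT's per-token hidden states (768-d in, 2*384-d out).
bilstm = nn.LSTM(input_size=768, hidden_size=384,
                 batch_first=True, bidirectional=True)

inputs = tokenizer('hi how are you', return_tensors='pt')
with torch.no_grad():
    token_states = bert(**inputs).last_hidden_state  # (1, seq_len, 768)
    lstm_out, _ = bilstm(token_states)               # (1, seq_len, 768)

# Mean-pool the BiLSTM outputs into a fixed-size sentence embedding.
sentence_embedding = lstm_out.mean(dim=1)
```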
-
Hi, thank you for your great work.
distiluse-base-multilingual-cased has one more dense layer compared to the pooling-only models. How is this dense layer added?
We are constructing a Chinese long text…
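For reference, in sentence-transformers an extra dense layer is simply one more module in the model pipeline; for distiluse-base-multilingual-cased it projects the 768-d pooled output down to 512 dimensions. A sketch of assembling such a model (the base model name and dimensions are my assumptions from the public model card):

```python
from torch import nn
from sentence_transformers import SentenceTransformer, models

word_embedding_model = models.Transformer('distilbert-base-multilingual-cased')
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())

# The extra layer: a 768 -> 512 feed-forward projection with a tanh activation.
dense_model = models.Dense(
    in_features=pooling_model.get_sentence_embedding_dimension(),
    out_features=512,
    activation_function=nn.Tanh())

model = SentenceTransformer(
    modules=[word_embedding_model, pooling_model, dense_model])
```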
-
The log is as follows:
```
0%| | 33792/407873900 [00:29
```