-
Download `chinese_pretrain_mrc_roberta_wwm_ext_large` to your local machine.
The parameters are:
```
model_name = "roberta_wwm_ext_large"  # alternative: "chinese_pretrain_mrc_macbert_large"
model_type = 'bert'
threads = 24
eval_batch_size = 64
…
-
```
numpy.core._exceptions._ArrayMemoryError: Unable to allocate 550. GiB for an array with shape (28235788,) and data type |S20921
```
How did you solve this error? The pre-training corpus is read incorre…
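For context, the 550 GiB figure follows directly from the array shape and the fixed-width `|S20921` dtype: NumPy reserves the full 20,921 bytes for every one of the 28,235,788 elements, so one pathologically long line in the corpus inflates every row. A quick check:

```python
# |S20921 is a fixed-width byte-string dtype: every element occupies
# the full 20921 bytes, regardless of its actual length.
n_rows = 28_235_788
itemsize = 20_921  # bytes per element for dtype |S20921

total_bytes = n_rows * itemsize
total_gib = total_bytes / 2**30
print(f"{total_gib:.0f} GiB")  # matches the 550 GiB in the traceback
```

A common workaround (an assumption here, not confirmed by the original poster) is to stream the corpus line by line, or to use `dtype=object`, rather than letting the single longest line dictate the fixed string width.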
-
## Description
Since the **INormalization** layer was added in TRT 8.6, I ran some tests on its FP16 accuracy:
1. First, I used Hugging Face's bert-base-cased and exported it to ONNX (opset 17). Then …
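As a rough illustration of the kind of accuracy gap being measured here (a plain NumPy sketch, not the TensorRT test itself), one can compare a LayerNorm computed in float32 against the same computation carried out entirely in float16:

```python
import numpy as np

def layernorm(x, eps=1e-5):
    # Normalize over the last axis, as INormalization/LayerNorm does.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x32 = rng.standard_normal((8, 768), dtype=np.float32)  # BERT-base hidden size

ref = layernorm(x32)                           # float32 reference
approx = layernorm(x32.astype(np.float16))     # whole computation in float16
max_err = np.abs(ref - approx.astype(np.float32)).max()
print(f"max abs error fp16 vs fp32: {max_err:.2e}")
```

In a real TensorRT comparison the reference would be the FP32 engine output rather than a NumPy result, but the error metric is the same idea.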
-
I've almost finished building the [UD_Classical_Chinese-Kyoto](https://github.com/UniversalDependencies/UD_Classical_Chinese-Kyoto/tree/dev) Treebank, and now I'm trying to make a Classical Chinese mod…
-
## pycorrector
- GitHub URL:
- Approach used
1. Rule-based
- Chinese error correction is done in two steps: the first is error detection, the second is error correction;
- The detection step first segments the sentence with the jieba Chinese tokenizer. Since the sentence contains typos, the segmentation result often contains incorrect splits, so errors are detected at both the character granularity and the word granularity; the suspected errors from the two granularities are merged to form a candidate set of suspected error positions;
- Error…
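A toy sketch of this two-step pipeline (the vocabulary, confusion set, and segmenter below are all made-up stand-ins; pycorrector itself uses jieba plus language-model scoring):

```python
# Toy two-step corrector: detect suspect positions, then propose fixes.
# VOCAB and CONFUSIONS are hypothetical stand-ins for a real dictionary
# and confusion set.
VOCAB = {"今天", "天气", "很", "好"}
CONFUSIONS = {"器": "气"}  # typo char -> likely intended char

def max_match_segment(text, vocab, max_len=4):
    """Greedy forward maximum matching (a crude stand-in for jieba)."""
    tokens, i = [], 0
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            if length == 1 or text[i:i + length] in vocab:
                tokens.append(text[i:i + length])
                i += length
                break
    return tokens

def detect(text):
    """Word granularity: leftover single chars after segmentation are suspects."""
    positions, i = [], 0
    for tok in max_match_segment(text, VOCAB):
        if len(tok) == 1 and tok not in VOCAB:
            positions.append(i)  # candidate error position
        i += len(tok)
    return positions

def correct(text):
    """Step two: replace suspects that have a confusion-set entry."""
    chars = list(text)
    for i in detect(text):
        if chars[i] in CONFUSIONS:
            chars[i] = CONFUSIONS[chars[i]]
    return "".join(chars)

print(correct("今天天器很好"))  # -> 今天天气很好
```

The typo 器 breaks segmentation into stray single characters, which is exactly the signal the detection step exploits; the correction step then consults the confusion set at each candidate position.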
-
I have trained on the CORD dataset as per the "example.yaml" file. F1 scores seem excellent (with the CRF network).
But when I tried to create the predictions, it was not predicting anything…
-
Post your questions here about: “[Text Learning with Sequences (Links to an external site.)](https://docs.google.com/document/d/1vHoYMFH-53UpE528xv_-xhSrkjUELI7ihfXmz3J_As4/edit?usp=sharing)” OR “[Tex…
lkcao, updated 2 years ago
-
I used BERTSUM to run on Chinese, and the ROUGE scores I got were very low. The problem turned out to be that the model cannot properly generate Chinese summaries. Has anyone else run into a similar issue?
-
```
from tqdm import tqdm
from textattack.loggers import CSVLogger
from textattack.attack_results import SuccessfulAttackResult
from textattack import Attacker
from textattack import AttackArgs
from …
```
buthi, updated 9 months ago
-
```
[2023-10-31 13:57:59,636][deepke.relation_extraction.standard.tools.preprocess][INFO] - clean data...
[2023-10-31 13:57:59,637][deepke.relation_extraction.standard.tools.preprocess][INFO] - convert r…
```