-
Although readme.md explains that no deep-learning framework is used because the author is not familiar with NLP, https://github.com/Morizeyao/GPT2-Chinese hosts one of the strongest Chinese text-generation models available. No algorithmic background is needed: simply clone the repository and run it. The generated content could be interleaved with the existing content to add variety.
-
-
Once I've removed stopwords using nltk or a similar library, how can I get back the original text snippets rather than the stopword-filtered ones?
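One common answer is to tokenize with character offsets before filtering, so every surviving token can point back into the untouched source string. Below is a minimal stdlib sketch; the hand-rolled `STOPWORDS` set is a stand-in assumption for nltk's `stopwords.words("english")` list, and the tokenizer is a simple regex rather than nltk's:

```python
import re

# Small stand-in stopword list; with nltk you would use
# nltk.corpus.stopwords.words("english") instead.
STOPWORDS = {"the", "a", "an", "is", "of", "to", "and", "in", "i"}

def tokens_with_offsets(text):
    """Tokenize, keeping each token's character span in the original text."""
    return [(m.group(), m.start(), m.end()) for m in re.finditer(r"\w+", text)]

def remove_stopwords(spans):
    """Drop stopword tokens while preserving the original offsets."""
    return [t for t in spans if t[0].lower() not in STOPWORDS]

text = "The model is trained on the full corpus"
kept = remove_stopwords(tokens_with_offsets(text))

print([tok for tok, _, _ in kept])  # the filtered view used for analysis
tok, start, end = kept[0]
print(text[start:end])              # the original snippet behind that token
```

Because only the offsets travel through the pipeline, the original text is never modified, and any filtered token (or a window around it) can be sliced back out of `text` on demand.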
-
Could the author provide the startup commands for the services this project depends on? I keep getting errors when running tokensregex.
-
The target scenario is to use paddle to recognize OCR results, and then in a separate stage run NLP analysis with modelscope models. But as soon as I add `from modelscope import pipeline, Tasks`, an error is raised; commenting that line out makes the problem go away. Is this a package conflict, and is there a workaround or fix?
- System Environment:
```
conda install pytorch==2.0.1 torchvi…
```
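If the root cause is indeed an import-time clash between paddle and modelscope (an assumption; the truncated environment info does not confirm it), one workaround is to run the modelscope step in a child interpreter so the two libraries never share a process. The sketch below uses only the stdlib; the worker's `pipeline` call is left as a placeholder comment, and the `"label"` field is illustrative, not real modelscope output:

```python
import json
import subprocess
import sys

# Worker source run in a separate interpreter. The real
# `from modelscope.pipelines import pipeline` import would go where the
# placeholder comment is, so it never loads alongside paddle.
WORKER = """\
import json, sys
text = sys.stdin.read()
# from modelscope.pipelines import pipeline   # imported only in this process
result = {"input": text, "label": "placeholder"}  # stand-in for pipeline output
print(json.dumps(result))
"""

def analyze_in_subprocess(text):
    """Send OCR text to an isolated child interpreter and parse its JSON reply."""
    proc = subprocess.run(
        [sys.executable, "-c", WORKER],
        input=text, capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)

print(analyze_in_subprocess("ocr text")["input"])
```

The cost is one process spawn per call (or a long-lived worker, if batching), but it sidesteps any shared-library or dependency-version conflict between the two frameworks.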
-
Chinese does not separate words with spaces (nor do Japanese, Korean, etc., though I'm not sure how their conventions interact with mathematical notation).
So for my students, it is perfectly reasona…
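The point about spaceless scripts can be illustrated concretely: whitespace splitting finds no word boundaries at all in Chinese, so a dictionary-based segmenter is needed. Below is a toy forward-maximum-matching segmenter; the mini-dictionary and the sample phrase are illustrative assumptions, not a real lexicon:

```python
# Toy forward maximum-matching (FMM) segmenter.
DICT = {"深度", "学习", "深度学习", "模型", "中文", "分词"}
MAX_LEN = max(len(w) for w in DICT)

def fmm_segment(text):
    """Greedy longest dictionary match from the left; unknown chars pass through singly."""
    out, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + MAX_LEN), i, -1):
            if text[i:j] in DICT or j == i + 1:
                out.append(text[i:j])
                i = j
                break
    return out

print("中文分词模型".split())       # ['中文分词模型'] -- no spaces, so no boundaries found
print(fmm_segment("中文分词模型"))  # ['中文', '分词', '模型']
```

Production segmenters (jieba, pkuseg, etc.) use statistical models rather than pure greedy matching, but the underlying problem they solve is the same one shown here.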
-
Hello, I downloaded the Chinese jar and placed it in the corresponding folder of Facebook's open-source DrQA tokenizer, then copied the whole folder into the matching directory of your open-source DrQA_cn. An error is raised during processing:
process('江泽明是谁?', doc_n=1, pred_n=1, net_n=1)
01/10/2019 10:51:16 AM: [ [question after filting : 江泽明是谁? ] ]
01/10/20…
-
The training data is missing from text_generator and text_generator_raw.
-
thanks
-
I seem to get some completely unrelated translation results. I don't know whether I am using the model incorrectly or the model itself performs poorly.