-
mldl@mldlUB1604:~/ub16_prj/DuReader$ cat data/raw/trainset/search.train.json | python3 utils/preprocess.py > data/preprocessed/trainset/search.train.json
Traceback (most recent call last):
File "u…
-
When running the basic example on SQuAD

```
python examples/train_model.py -m drqa -t squad -bs 32
```

it throws this:

```
[ training... ]
/content/DuReader/data/ParlAI/parlai/agents/drqa/layers.py:…
```
-
- Version and environment information:
1) PaddlePaddle version: 1.6.2
2) GPU: Tesla V100, NVRM version: 396.37, CUDA version: 9.2.14, cuDNN version: 7.3
3) OS: Ubuntu 16.04.3
4) Python version: 3.7
5) GPU memory: 16160 MiB
- Error description…
-
The test data runs fine, but after switching to the full DuReader data and using only the zhidao.train.json portion as the training set, running
python3 cli.py --prepro fails with an out-of-memory error. My machine has 32 GB of RAM. How much memory is needed to use the full zhidao + search training set, and what is your machine's configuration?
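Not an official answer, but one way to keep preprocessing of the full zhidao + search set within 32 GB is to iterate the JSON-lines files through a generator instead of loading them whole. A rough sketch under that assumption; the file paths and batch handling here are hypothetical and are not part of cli.py:

```python
# Hypothetical sketch: iterate the full training set in fixed-size batches via
# generators, so memory use is bounded by the batch size rather than by the
# combined size of zhidao.train.json and search.train.json.
import itertools
import json

def iter_jsonl(paths):
    # Yield one parsed record at a time; nothing else is kept in memory.
    for path in paths:
        with open(path, encoding='utf-8') as f:
            for line in f:
                if line.strip():
                    yield json.loads(line)

def iter_batches(samples, batch_size=32):
    it = iter(samples)
    while True:
        batch = list(itertools.islice(it, batch_size))
        if not batch:
            return
        yield batch

if __name__ == '__main__':
    files = ['data/preprocessed/trainset/zhidao.train.json',
             'data/preprocessed/trainset/search.train.json']
    for batch in iter_batches(iter_jsonl(files), batch_size=32):
        pass  # feed the batch to preprocessing / training here
```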
-
When running the TensorFlow version, files are missing under utils:
from utils import compute_bleu_rouge
from utils import normalize
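If the utils package really is missing, compute_bleu_rouge (the BLEU/Rouge-L scoring entry point) needs to be restored from the DuReader source tree, but normalize can at least be stubbed so the import resolves. A hypothetical placeholder, which may differ from the real helper:

```python
# Hypothetical stand-in for utils.normalize so the import resolves; the real
# DuReader helper may behave differently.
def normalize(texts):
    """Lower-case and collapse whitespace in a list of strings before scoring."""
    return [' '.join(t.lower().split()) for t in texts]
```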
-
https://github.com/xuezhong/models/tree/machine_reading_comprehesion/fluid/machine_reading_comprehesion/DuReader
Data preprocessing error:
$ cat data/raw/trainset/search.train.json | python utils/preprocess.py > dat…
-
![image](https://user-images.githubusercontent.com/5653261/39615796-a6c755bc-4faa-11e8-828a-b5539596f005.png)
-
It's a little difficult to find Chinese datasets suitable for training decaNLP. Right now, all I have is: 1) Douban movie reviews for sentiment analysis; 2) WebQA from Baidu. Is there any other data whi…