Closed: lmtoan closed this issue 5 years ago
Hello! Appreciate your work on this.
In preprocess/process.py, you mention using Jieba to tokenize Chinese (zh) words, but I don't see it implemented there. Could you help clarify?
It's done in preprocess/tokenizer.py.