-
Can it be used in non-English-speaking countries?
For example, can AI-based search be performed on Korean data?
I wonder whether a separate tokenizer is needed for multilingual support…
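To illustrate why a separate tokenizer usually matters for Korean: with plain whitespace tokenization, a stem and its particle-attached form count as entirely different tokens, so search cannot match them. A minimal Python sketch (the sample sentences are invented for illustration):

```python
# Korean attaches grammatical particles directly to words, so whitespace
# tokenization treats "한국어" (Korean) and "한국어를" (Korean + object
# particle) as unrelated tokens.
sent_a = "한국어 데이터"        # "Korean-language data"
sent_b = "한국어를 검색한다"    # "searches Korean"

tokens_a = sent_a.split()
tokens_b = sent_b.split()

# No overlapping tokens despite the shared stem "한국어" —
# a morpheme-aware tokenizer is needed to recover the match.
print(set(tokens_a) & set(tokens_b))  # prints set()
```

This is why Korean pipelines typically plug in a morphological analyzer rather than relying on whitespace or punctuation splitting.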
-
I have seen other issues where you have suggested using the [moses tokenizer](https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/tokenizer.perl). Unfortunately, I am tokenizing ko…
-
> Japanese newspaper Nihon Keizai Shimbun reported that the three giants plan to integrate their cargo computers and ground - cargo and air - cargo systems.
This is a sentence from the LDC95T7 l…
-
## Feature description
Add zh models trained on OntoNotes v5 Chinese.
## Could the feature be a [custom component](https://spacy.io/usage/processing-pipelines#custom-components) or [spaCy plugin](…
-
## Description of bug / unexpected behavior
Trying to set up logging via a config file fails.
## Expected behavior
logs written to a file
## How to reproduce the issue
Code for reproduci…
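Independent of the project's own configuration mechanism, the expected behavior (logs written to a file) can be sketched with the standard library's `logging.config.dictConfig`; the handler and file names below are illustrative, not the project's own config keys:

```python
import logging
import logging.config

# Route all root-level records at INFO and above to a file instead of
# the console. "file", "plain", and "app.log" are invented names.
logging.config.dictConfig({
    "version": 1,
    "formatters": {
        "plain": {"format": "%(levelname)s %(message)s"},
    },
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "filename": "app.log",
            "formatter": "plain",
        },
    },
    "root": {"level": "INFO", "handlers": ["file"]},
})

logging.info("written to app.log, not the console")
```

If the project's config-file route keeps failing, comparing against this minimal dict form can help isolate whether the problem is the config contents or how the file is loaded.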
-
The first task is based on the code at https://github.com/NVIDIA/sentiment-discovery/ .
Currently, this code works only on the command line, but we need to adapt it into a Python function. The fun…
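A common way to adapt a command-line-only script into a callable function is to keep the existing parser and feed it a programmatically built argv list. A minimal sketch with invented names (nothing here is taken from the sentiment-discovery code):

```python
import argparse

# Hypothetical stand-in for the repo's command-line entry point;
# the flags and the toy "model" below are invented for illustration.
def build_parser():
    parser = argparse.ArgumentParser(description="toy sentiment scorer")
    parser.add_argument("--text", required=True)
    parser.add_argument("--threshold", type=float, default=0.5)
    return parser

def run(args):
    # Placeholder scoring: fraction of exclamation marks in the text.
    score = args.text.count("!") / max(len(args.text), 1)
    return {"positive": score >= args.threshold, "score": score}

def classify(text, threshold=0.5):
    """Function wrapper: builds the argv list the CLI expects, parses it
    with the unchanged parser, and returns the result instead of printing."""
    args = build_parser().parse_args(
        ["--text", text, "--threshold", str(threshold)]
    )
    return run(args)

result = classify("great!!", threshold=0.1)
```

Because the parser itself is unchanged, the command-line interface keeps working, while other Python code can now call `classify()` directly.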
-
Use this to open other questions or issues, and provide context here.
Is there any work going on to use Blenderbot for dialogue generation for other languages like Korean and Japanese?
I am trying…
-
## Bug Report
**Current Behavior**
Changing the forum color renders the site unusable
**Steps to Reproduce**
1. Go to 'Administration'
2. Click on 'Appearance'
3. Scroll down to 'Colors'
4.…
-
Hi, I'm trying to use the spaCy library, and it requires natto-py to run the tokenizer and other pipeline components. When I try to tokenize a sentence in Korean, it throws the following error.
```python
import …
-
As we had issues with Chinese (and Irina said also with Russian, Ukrainian, Thai, and Vietnamese), let's test all languages in the Target or Origin texts. So far, I have done all languages starting with A, B, and C.