I'm currently using the NER NuGet package with good results, but now I need to handle Chinese texts as well.
The official site for NER says:
We also provide Chinese models built from the Ontonotes Chinese named entity data. There are two models, one using distributional similarity clusters and one without. These are designed to be run on word-segmented Chinese. So, if you want to use these on normal Chinese text, you will first need to run Stanford Word Segmenter or some other Chinese word segmenter, and then run NER on the output of that!
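For context, the segment-then-NER workflow that quote describes looks roughly like this when run from the command line with the standalone Java distributions (the file names, the `ctb` model choice, and the classifier path below are illustrative assumptions based on the stock downloads, not on the NuGet packages):

```shell
# Sketch only: assumes the standalone Stanford Word Segmenter and
# Stanford NER distributions are unpacked here; adjust paths and models.

# 1. Segment the raw Chinese text (ctb = Penn Chinese Treebank model;
#    the final 0 disables n-best output).
./segment.sh ctb input.zh.txt UTF-8 0 > segmented.txt

# 2. Run the Chinese NER model on the word-segmented output.
java -mx2g -cp stanford-ner.jar edu.stanford.nlp.ie.crf.CRFClassifier \
     -loadClassifier classifiers/chinese.misc.distsim.crf.ser.gz \
     -textFile segmented.txt
```

So the segmenter is only a preprocessing step; NER itself never sees unsegmented text.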
As far as I can tell, the Segmenter NuGet package is incompatible with the NER one. Do I have to use the full NLP package then?
I'm asking because I was told that the different packages require different license agreements for commercial use, and I'm not interested in any NLP features other than NER (plus, apparently as a dependency for Chinese, segmentation).