-
Rom 1:11 is misparsed: δέ should join its clause at a higher level than the preceding articular infinitive. The same issue occurs in Rom 15:8.
-
Hi, I tried to run your code with the training command `python compute_stats/compute_spatiotemp_stats_clean_train_swin.py`, but it does not work. Below is the last part of the output before it stopped.
…
-
I think a key issue to discuss is how to make R text packages interoperable, so that new packages extend functionality rather than compete with one another, and so that objects created in one package …
-
Can somebody please guide me through the logic behind ducking? I have been trying to find the documentation. I want to add support for Indian languages and rewrite the logic for that in Python. For me, t…
-
Heyo!
In most of my analyses I'm interested in generating whole-brain surrogates, but for obvious reasons we have to generate distance matrices separately for each hemisphere so the surrogates need…
-
I checked through the repo, and it seems that there is no documentation. Did I miss something? Perhaps you could provide some simple use-case examples in the README so there is an idea of the kind of …
-
I am trying to parse the project [Stanford CoreNLP](https://github.com/stanfordnlp/CoreNLP/tree/master/src/edu/stanford/nlp) using JavaParser, version `3.13.6`. I get the following error…
-
The `e_key_usage_and_extended_key_usage_inconsistent` lint introduced in https://github.com/zmap/zlint/pull/497 (and refined in https://github.com/zmap/zlint/pull/528) had a lot of discussion about th…
-
The validator is complaining about a number of tokens that are tagged as `VERB` but attach as `mark`. These are for the connectives:
* concerning, depending, given, provided, regarding
Tagged as…
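For reference, the kind of consistency check the validator presumably applies can be sketched in a few lines of Python. This is a hypothetical illustration, not the validator's actual code; the function name and sample rows are made up, but the column layout follows the standard CoNLL-U format (ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC).

```python
def flag_verb_marks(conllu_lines):
    """Return FORMs of tokens tagged VERB that attach as `mark` (hypothetical check)."""
    flagged = []
    for line in conllu_lines:
        # Skip comments and blank sentence separators.
        if not line or line.startswith("#"):
            continue
        cols = line.split("\t")
        # CoNLL-U columns: ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC
        if len(cols) == 10 and cols[3] == "VERB" and cols[7] == "mark":
            flagged.append(cols[1])
    return flagged

sample = [
    "1\tGiven\tgive\tVERB\tVBN\t_\t4\tmark\t_\t_",
    "2\tthe\tthe\tDET\tDT\t_\t3\tdet\t_\t_",
]
print(flag_verb_marks(sample))  # ['Given']
```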
-
It would be great to add `org.apache.lucene.analysis` for smarter tokenization across all languages. That way, processing languages such as Chinese would be handled more sensibly by your library.
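To illustrate what the Lucene analysis package would buy, here is a rough Python sketch of its analyzer-chain design (a tokenizer followed by a chain of token filters). The function names are hypothetical, not Lucene's API; the point is that per-language behavior becomes a pluggable tokenizer rather than a hard-coded split.

```python
import re

def word_tokenizer(text):
    # Naive word-boundary split; scripts without spaces (e.g. Chinese)
    # would need a real segmenter such as Lucene's SmartChineseAnalyzer.
    return re.findall(r"\w+", text)

def lowercase_filter(tokens):
    # One example of a token filter in the chain.
    return [t.lower() for t in tokens]

def analyze(text, tokenizer, token_filters):
    # Analyzer = one tokenizer + an ordered chain of token filters,
    # mirroring Lucene's Analyzer/Tokenizer/TokenFilter structure.
    tokens = tokenizer(text)
    for f in token_filters:
        tokens = f(tokens)
    return tokens

print(analyze("Smarter Tokenization", word_tokenizer, [lowercase_filter]))
# ['smarter', 'tokenization']
```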