-
## How to reproduce the behaviour
Add a PyTorchWrapper component to a language pipeline and then do this:
```python
nlp.to_disk("something")
nlp2 = spacy.load("something")
```
Motivating case is…
-
## How to reproduce the behaviour
On an M1, any code that uses spaCy to parse a doc fails with `[1] 73089 bus error` upon exit (I can only test on my laptop). It works fine on a Linux machine. On…
-
Hi,
after training a model for a new language, I only have files ending in *_best_modeltoppairs, *_best_modelranking, and *_best_modelallpairs. But `spacy.load("mymodel_best_modelranking")` does not …
tidoe updated 2 years ago
-
Hello,
It takes too long to parse the Doc object, i.e. to iterate over the sentences and the tokens in them. Is that expected?
```python
snlp = stanfordnlp.Pipeline(processors='tokenize,pos', models_dir=model_d…
```
-
Hello!
I am working to create a knowledge base using the latest (unfiltered) English wiki dumps. I've successfully followed the steps in [benchmarks/nel](https://github.com/explosion/projects/tree/…
-
```
TrainingFailedException: Internal Server Error: An unexpected error occurred during training. Error: You are using a pipeline template. All pipelines templates have been removed in 2.0. Please ad…
```
-
Hi there!
When I try to follow the pipeline steps laid out in the README *exactly*, I receive the following error at the preprocessing stage:
```
AttributeError: 'spacy.tokens.span.Span' object h…
```
-
Run a multi-stage pipeline to get only very clean statements. They should be clear and make sense.
### Pipeline:
- [ ] Ask GPT to filter generally
- [ ] Filter for strange proper nouns
- [ ] …
-
**Describe the bug**
`spacy_stanza` allows users to get the output back in spaCy format, but also integrates a nifty multi-processing option through `nlp.pipe(data, n_process=4)`. This used to work w…
-
If you have a look at [all the attributes](https://spacy.io/api/token#attributes) that spaCy generates for its tokens, then you can imagine that some of these features can be useful for machine learn…
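
As a hedged sketch of that idea: the snippet below turns a few token attributes into per-token feature dicts. It uses a blank English pipeline, so only tokenizer-level lexical attributes are available (no tagger/parser output); the example sentence and the feature names chosen here are illustrative, not from the original post.

```python
import spacy

# A blank English pipeline is enough for lexical token attributes;
# no trained model download is needed.
nlp = spacy.blank("en")

doc = nlp("Apple costs 3 dollars.")

# Turn a few of spaCy's token attributes into simple feature dicts,
# e.g. as input for a downstream classifier.
features = [
    {
        "text": tok.text,
        "lower": tok.lower_,
        "shape": tok.shape_,
        "is_alpha": tok.is_alpha,
        "like_num": tok.like_num,
    }
    for tok in doc
]

print(features[0]["shape"])  # "Apple" -> "Xxxxx"
```

Attributes that depend on trained components (`pos_`, `dep_`, `ent_type_`, …) would additionally require loading a trained pipeline such as `en_core_web_sm`.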