Closed: Tadusko closed this issue 3 years ago.
I don't think the GPU models will work simultaneously, but you can now switch spaCy's CPU/GPU mode back and forth, which wasn't possible in spaCy v2. You need to upgrade to the very latest thinc (v8.0.2), which has a bug fix related to torch for `require_cpu`.
```python
import spacy
import spacy_stanza

# load the Stanza pipeline on CPU (the default)
stanza_nlp = spacy_stanza.load_pipeline("en")

# load the spaCy pipeline on GPU
spacy.require_gpu()
spacy_nlp = spacy.load("en_core_web_trf")
doc1 = spacy_nlp("stuff")

# switch back to CPU defaults
spacy.require_cpu()
doc2 = stanza_nlp("other stuff")
```
spaCy models loaded on GPU stay on GPU, but ones that involve torch (all the transformer-based models) will only work if `require_gpu` is the current state. Ones that just use cupy (the non-trf ones) will keep working on GPU even if you call `require_cpu` at a later point.
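The switch described above is global state in thinc: `require_cpu`/`prefer_gpu`/`require_gpu` swap the current ops backend, which newly loaded models consult when allocating. A minimal sketch of inspecting that state (assumes spaCy v3 with thinc v8 installed; no GPU is needed to run it):

```python
import spacy
from thinc.api import get_current_ops, NumpyOps

# require_cpu() makes NumpyOps the current backend, so models loaded
# afterwards allocate on the CPU; models already loaded with cupy
# are untouched, matching the behaviour described above.
spacy.require_cpu()
assert isinstance(get_current_ops(), NumpyOps)

# prefer_gpu() activates the GPU backend only if a usable GPU (cupy)
# is present; otherwise it leaves the CPU backend in place and
# returns False, so it is safe to call on CPU-only machines.
on_gpu = spacy.prefer_gpu()
print("GPU active:", on_gpu)
```

This is why order matters: each `load(...)` picks up whatever backend is current at that moment.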
Thank you for the answer! CPU/GPU switching should come in handy.
If either `spacy.prefer_gpu()` or `spacy.require_gpu()` is called while a Stanza pipeline is (or will be) loaded on the GPU, the subsequent pipeline runs fail. Is there any way to circumvent this, or should one of the pipelines be on the CPU if the two need to be loaded at the same time?
How to reproduce the behaviour
Only the Stanza pipeline seems to be affected; loading and running spaCy pipelines appears to work normally. E.g.
I used the Finnish Stanza pipeline since I have it downloaded, but the same issue has been reported previously for other languages as well in Stanza's issue tracker.
Info about spaCy