Closed dtschleckser closed 6 months ago
I can confirm this works. I used `nlp.pipe` to run on batches of texts on the GPU:

```python
texts = <list of texts>
docs = list(nlp.pipe(texts, batch_size=128))
```
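For a self-contained version of the snippet above: this sketch uses a blank English pipeline so it runs without a model download, and `spacy.prefer_gpu()` as the standard way to request GPU execution (the actual model and batch size in the original setup are not shown here).

```python
import spacy

# Use the GPU if one is available; returns False and falls back to CPU otherwise.
spacy.prefer_gpu()

# Blank pipeline stands in for the real model used in the comment above.
nlp = spacy.blank("en")
texts = ["First document.", "Second document."]
docs = list(nlp.pipe(texts, batch_size=128))
print(len(docs))  # 2
```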
Thanks! I'll do some quick tests and merge this today.
Thanks for submitting this PR! I have merged.
This PR adds a `map_location` argument to the pipeline parameters and passes it through to the model. This enables GPU processing when you pass in an alternate torch device such as `cuda`.
Example use:
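(The original example snippet is not preserved here; as a hedged sketch of the underlying mechanism, `map_location` is the standard `torch.load` argument for placing loaded weights on a chosen device. The file name and tensor below are illustrative, not this project's API.)

```python
import torch

# map_location controls which device the loaded tensors land on.
# "cpu" is the default in this PR; pass "cuda" to load onto the GPU instead.
torch.save({"w": torch.ones(3)}, "weights.pt")
state = torch.load("weights.pt", map_location="cpu")
print(state["w"].device.type)  # cpu
```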
`map_location` defaults to CPU, so the GPU won't be used unless it is explicitly specified. I've tested this change locally.
Also fixed a small typo in requirements.txt.
Thanks!