Helsinki-NLP / Opus-MT

Open neural machine translation models and web services
MIT License

Added a GPU Dockerfile to enable GPU-accelerated containers for faster inference. #66

Closed · martin-kirilov closed this 1 year ago

martin-kirilov commented 1 year ago

I've implemented and tested a GPU-enabled Dockerfile. It is based on CUDA 11.3.0 and builds MarianNMT 1.10.0.
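For illustration, a GPU Dockerfile along these lines might look like the following. This is a minimal sketch assuming the `nvidia/cuda:11.3.0` developer base image and a from-source Marian build with CUDA enabled; it is not the actual contents of this PR, and the package list is an assumption.

```dockerfile
# Hypothetical sketch, not the PR's actual Dockerfile.
FROM nvidia/cuda:11.3.0-devel-ubuntu20.04

# Assumed build dependencies for MarianNMT
RUN apt-get update && apt-get install -y --no-install-recommends \
        git cmake build-essential libboost-all-dev zlib1g-dev && \
    rm -rf /var/lib/apt/lists/*

# Build MarianNMT 1.10.0 with CUDA support
RUN git clone --branch 1.10.0 https://github.com/marian-nmt/marian /marian && \
    mkdir /marian/build && cd /marian/build && \
    cmake .. -DCOMPILE_CUDA=on -DCMAKE_BUILD_TYPE=Release && \
    make -j"$(nproc)"
```

A container built from such an image would need GPU access at run time, e.g. `docker run --gpus all …` with the NVIDIA Container Toolkit installed on the host.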