microsoft / unilm

Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
https://aka.ms/GeneralAI
MIT License
19.75k stars 2.52k forks

System configuration for running abstractive summarization inference #66

Closed chandrasekharan98 closed 4 years ago

chandrasekharan98 commented 4 years ago

Hi there, I'm trying to run decoding for abstractive summarization (CNN/DM) in CPU mode, after referring to #23.

I don't have a CUDA-capable device and would like to know the system configuration required to run it. I'd also like to know whether there are decoding methods other than beam search.

Thanks, M Chandrasekharan

donglixp commented 4 years ago

Decoding on a GPU would be much faster than on a CPU. For a decoding method other than beam search, the beam size can be set to 1, which is equivalent to greedy search.
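To illustrate why beam size 1 amounts to greedy search: at each step the decoder keeps only the single highest-scoring continuation, i.e. an argmax over the next-token distribution. Below is a minimal, self-contained sketch of that idea using a toy next-token scorer; the function names, vocabulary, and scores are hypothetical stand-ins, not the actual UniLM model or its API.

```python
# Toy next-token scorer standing in for the summarization model.
# (Hypothetical: the real model returns log-probabilities over a large vocab.)
VOCAB = ["<s>", "the", "cat", "sat", "</s>"]

def next_token_logprobs(prefix):
    """Return a dict of toy log-probs that prefer a fixed target sequence."""
    target = ["the", "cat", "sat", "</s>"]
    step = len(prefix) - 1  # prefix always starts with "<s>"
    scores = {tok: -10.0 for tok in VOCAB}
    if step < len(target):
        scores[target[step]] = -0.1  # make the "correct" token the argmax
    return scores

def greedy_decode(max_len=10):
    """Beam search with beam size 1: at every step, keep only the single
    best continuation (an argmax over the next-token scores)."""
    prefix = ["<s>"]
    for _ in range(max_len):
        scores = next_token_logprobs(prefix)
        best = max(scores, key=scores.get)  # beam of size 1 == argmax
        prefix.append(best)
        if best == "</s>":
            break
    return prefix

print(greedy_decode())  # ['<s>', 'the', 'cat', 'sat', '</s>']
```

With a beam size of k > 1, the loop would instead keep the k best prefixes at each step; setting k = 1 collapses that to the argmax above, which is why greedy decoding needs no separate code path in the repo's decoding script.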