facebookresearch / nougat

Implementation of Nougat Neural Optical Understanding for Academic Documents
https://facebookresearch.github.io/nougat/
MIT License

ValueError: batch_size should be a positive integer value, but got batch_size=0 #135

Closed maria-mh07 closed 1 year ago

maria-mh07 commented 1 year ago

I'm using API mode (the "app.py" script). The error originates in the "default_batch_size()" function: torch.cuda.is_available() is True, but the GPU's VRAM is so small that batch_size is computed as zero.

def default_batch_size():
    if torch.cuda.is_available():
        batch_size = int(
            torch.cuda.get_device_properties(0).total_memory / 1024 / 1024 / 1000 * 0.3
        )
        if batch_size == 0:
            logging.warning("GPU VRAM is too small. Computing on CPU.")
            # NOTE: batch_size is still 0 here and gets returned,
            # which later triggers the ValueError
    elif torch.backends.mps.is_available():
        # I don't know if there's an equivalent API so heuristically choosing bs=4
        batch_size = 4
    else:
        # don't know what a good value is here. Would not recommend to run on CPU
        batch_size = 1
        logging.warning("No GPU found. Conversion on CPU is very slow.")
    return batch_size
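One way to address this (a sketch, not necessarily the fix that landed upstream) is to clamp the computed value to a minimum of 1. The helper below factors the memory-based formula out of the torch-dependent code so the clamping logic can be tested on its own; `compute_batch_size` is a hypothetical name, not part of the nougat API.

```python
import logging


def compute_batch_size(total_memory_bytes: int) -> int:
    """Derive a batch size from GPU memory, never returning 0.

    Mirrors the heuristic in default_batch_size():
    roughly 0.3 batches per GB of VRAM, but clamped to >= 1
    so small GPUs fall back to batch_size=1 instead of crashing.
    """
    batch_size = int(total_memory_bytes / 1024 / 1024 / 1000 * 0.3)
    if batch_size == 0:
        logging.warning("GPU VRAM is too small. Falling back to batch_size=1.")
        batch_size = 1
    return batch_size
```

In `default_batch_size()`, this would be called as `compute_batch_size(torch.cuda.get_device_properties(0).total_memory)`, so a 2 GB card yields 1 instead of 0.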
lukas-blecher commented 1 year ago

should be fixed now, thanks (feel free to reopen if not)