```
Loaded detection model vikp/surya_det2 on device cuda with dtype torch.float16
Loaded detection model vikp/surya_layout2 on device cuda with dtype torch.float16
Loaded reading order model vikp/surya_order on device cuda with dtype torch.float16
Loaded recognition model vikp/surya_rec on device cuda with dtype torch.float16
Loaded texify model to cuda with torch.float16 dtype
Converting 80 pdfs in chunk 1/1 with 8 processes, and storing in ./markdowns_output
Processing PDFs: 0%| | 0/80 [00:00<?, ?pdf/s]
```
I followed the README and ran the conversion command; the console output above is from that run.

When I run `nvidia-smi`, only GPU 0 gets utilized (99%). The other 7 GPUs each show just 3 MiB of memory usage, with no utilization and no processes attached to them.
I also referred to #136 and used `marker_chunk_convert`, but it does not work either.
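For reference, the behavior I expected can be sketched in plain Python: split the PDF list into one chunk per GPU and pin each worker process to its own device via the standard `CUDA_VISIBLE_DEVICES` environment variable. The `shard` and `launch` helpers and the worker command below are hypothetical illustrations, not marker's actual code:

```python
import os
import subprocess

def shard(files, num_gpus):
    """Round-robin split of a file list into one chunk per GPU."""
    chunks = [[] for _ in range(num_gpus)]
    for i, path in enumerate(files):
        chunks[i % num_gpus].append(path)
    return chunks

def launch(cmd, gpu_id):
    """Start cmd with CUDA_VISIBLE_DEVICES pinned to one GPU, so the
    child process can only see (and therefore use) that device."""
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
    return subprocess.Popen(cmd, env=env)

# Hypothetical usage (the worker command is illustrative only):
# for gpu_id, chunk in enumerate(shard(pdf_paths, 8)):
#     launch(["python", "convert_chunk.py", *chunk], gpu_id)
```

With 8 chunks launched this way, each of the 8 GPUs should show a process and nonzero utilization in `nvidia-smi`, which is what I expected from the 8-process chunk conversion.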