123LiVo321 opened 5 months ago
I have the same issue!
I have the same question about the warning 'Producer process tried to deallocate over 1000 memory blocks referred by consumer processes. Deallocation might be significantly slowed down. We assume it will never going to be the case, but if it is, please file but to https://github.com/pytorch/pytorch'. But it does not seem to affect the run, and memory usage does not blow up.
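For context, that warning comes from PyTorch's CUDA IPC machinery, which is used whenever CUDA tensors are shared between processes (marker appears to hit it because it runs conversion in worker processes). A minimal sketch of the general pattern that involves this machinery; this is generic torch.multiprocessing usage, not marker's actual code:

```python
# Generic sketch of how CUDA tensors get shared across processes via torch.multiprocessing.
# The CudaIPCTypes warnings are emitted by this machinery when a producer process frees
# many shared blocks, or exits, while consumer processes still hold references.
# Illustrative only, not marker's actual code.
import torch
import torch.multiprocessing as mp

def consumer(q):
    t = q.get()                   # receives a handle to the producer's CUDA tensor (no copy)
    print(t.sum().item())

if __name__ == "__main__":
    mp.set_start_method("spawn")  # required for CUDA tensors in child processes
    q = mp.Queue()
    p = mp.Process(target=consumer, args=(q,))
    p.start()
    t = torch.ones(1000, device="cuda")
    q.put(t)                      # shared through CUDA IPC
    p.join()                      # producer should keep t alive until consumers are done
```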
For the second question, I think the models have been downloaded to your own PC or server. By default the Hugging Face cache path on Windows is C:\Users\<username>\.cache\huggingface.
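If you want to verify where the weights actually ended up, huggingface_hub can enumerate that cache. A minimal sketch (generic huggingface_hub usage, nothing marker-specific; it assumes huggingface_hub is installed, which it is as a dependency of transformers):

```python
# Sketch: list the locally cached Hugging Face repos, their sizes, and on-disk paths.
from huggingface_hub import scan_cache_dir

cache = scan_cache_dir()  # defaults to ~/.cache/huggingface/hub (under the user profile on Windows)
for repo in sorted(cache.repos, key=lambda r: r.size_on_disk, reverse=True):
    print(f"{repo.repo_id:30s} {repo.size_on_disk / 1e9:6.2f} GB  {repo.repo_path}")
print(f"total: {cache.size_on_disk / 1e9:.2f} GB")
```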
Have you solved it?
Then I saw in pip list that torch was already installed and thought everything was set and done... there was a torch, but, as it turned out, not the CUDA build. When I finally properly read the install instructions, visited https://pytorch.org/get-started/locally/, deleted the whole virtual environment, and made a brand new one with torch installed !!WITH CUDA!!, voilà:
(marker) d:\marker>echo %TORCH_DEVICE%
cuda

(marker) d:\marker>echo %INFERENCE_RAM%
16

(marker) d:\marker>marker 1-input 2-output
Loaded detection model vikp/surya_det2 on device cuda with dtype torch.float16
Loaded detection model vikp/surya_layout2 on device cuda with dtype torch.float16
Loaded reading order model vikp/surya_order on device cuda with dtype torch.float16
The log continues:
Loaded recognition model vikp/surya_rec on device cuda with dtype torch.float16
Loaded texify model to cuda with torch.float16 dtype
Converting 1 pdfs in chunk 1/1 with 1 processes, and storing in d:\marker\2-output
Detecting bboxes: 100%|██████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.17s/it]
C:\Users\\anaconda3\envs\marker\Lib\site-packages\surya\postprocessing\affinity.py:28: RuntimeWarning: invalid value encountered in divide
  scaled_sobel = np.uint8(255 * abs_sobelx / np.max(abs_sobelx))
C:\Users\\anaconda3\envs\marker\Lib\site-packages\surya\postprocessing\affinity.py:28: RuntimeWarning: invalid value encountered in cast
  scaled_sobel = np.uint8(255 * abs_sobelx / np.max(abs_sobelx))
Could not extract any text blocks for d:\marker\1-input\Contributing.pdf
Empty file: d:\marker\1-input\Contributing.pdf. Could not convert.
Processing PDFs: 100%|██████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.73s/pdf]
[W CudaIPCTypes.cpp:96] Producer process tried to deallocate over 1000 memory blocks referred by consumer processes. Deallocation might be significantly slowed down. We assume it will never going to be the case, but if it is, please file but to https://github.com/pytorch/pytorch
[W CudaIPCTypes.cpp:16] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
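As a side note, a quick way to check whether an environment really has the CUDA build of torch (standard PyTorch calls, nothing marker-specific):

```python
# Standard PyTorch checks: is the CUDA build installed, and is a GPU visible?
import torch

print(torch.__version__)                  # CUDA builds often report a +cuXXX suffix, CPU-only builds +cpu
print(torch.cuda.is_available())          # should print True for the setup above
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # the default GPU a "cuda" device maps to
```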
When I discussed it with the brand new ChatGPT, it suggested tampering with the code, which seems a bit strange and, tbh, foreign to me. I can always run the project on the CPU if necessary, but it would be nice to use it as intended.
Would you mind helping me debug this?
P.S.: I also noticed that after reinstalling the project completely [git clone, new env], the models were not downloaded again on the first run. If they aren't in the project repo or in the conda env, where are they?
Originally posted by @123LiVo321 in https://github.com/VikParuchuri/marker/issues/160#issuecomment-2143579732