opendatalab / PDF-Extract-Kit

A Comprehensive Toolkit for High-Quality PDF Content Extraction
GNU Affero General Public License v3.0

Issues with CUDA OOM #134

Open · opened 2 days ago by Sachin-Bhat

Sachin-Bhat commented 2 days ago

Hello there,

Firstly, I would like to thank you for creating this tool. I am getting the following CUDA error and was hoping for some assistance in debugging it. I am running an RTX 3060 Laptop GPU and am unsure whether my device meets the minimum requirements to run this on the GPU.

I am currently running Artix Linux in case the distro information is needed.

Namespace(pdf='1706.03762.pdf', output='output', batch_size=128, vis=False, render=False)
2024-09-20 11:06:26
Started!
CustomVisionEncoderDecoderModel init
CustomMBartForCausalLM init
CustomMBartDecoder init
[09/20 11:06:39 detectron2]: Rank of current process: 0. World size: 1
[09/20 11:06:39 detectron2]: Environment info:
-------------------------------  ---------------------------------------------------------------------------------------
sys.platform                     linux
Python                           3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0]
numpy                            1.26.4
detectron2                       0.6 @/home/sachin/miniforge3/envs/pdf/lib/python3.10/site-packages/detectron2
Compiler                         GCC 11.4
CUDA compiler                    not available
DETECTRON2_ENV_MODULE            <not set>
PyTorch                          2.3.1+cu121 @/home/sachin/miniforge3/envs/pdf/lib/python3.10/site-packages/torch
PyTorch debug build              False
torch._C._GLIBCXX_USE_CXX11_ABI  False
GPU available                    Yes
GPU 0                            NVIDIA GeForce RTX 3060 Laptop GPU (arch=8.6)
Driver version                   560.35.03
CUDA_HOME                        /opt/cuda
Pillow                           8.4.0
torchvision                      0.18.1+cu121 @/home/sachin/miniforge3/envs/pdf/lib/python3.10/site-packages/torchvision
torchvision arch flags           5.0, 6.0, 7.0, 7.5, 8.0, 8.6, 9.0
fvcore                           0.1.5.post20221221
iopath                           0.1.9
cv2                              4.6.0
-------------------------------  ---------------------------------------------------------------------------------------
PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.3.6 (Git Hash 86e6af5974177e513fd3fee58425e1063e7f1361)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 12.1
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 8.9.2
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.9.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.3.1, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, 

[09/20 11:06:39 detectron2]: Command line arguments: {'config_file': 'modules/layoutlmv3/layoutlmv3_base_inference.yaml', 'resume': False, 'eval_only': False, 'num_gpus': 1, 'num_machines': 1, 'machine_rank': 0, 'dist_url': 'tcp://127.0.0.1:57823', 'opts': ['MODEL.WEIGHTS', 'models/Layout/model_final.pth']}
[09/20 11:06:39 detectron2]: Contents of args.config_file=modules/layoutlmv3/layoutlmv3_base_inference.yaml:
AUG:
  DETR: true
CACHE_DIR: ~/cache/huggingface
CUDNN_BENCHMARK: false
DATALOADER:
  ASPECT_RATIO_GROUPING: true
  FILTER_EMPTY_ANNOTATIONS: false
  NUM_WORKERS: 4
  REPEAT_THRESHOLD: 0.0
  SAMPLER_TRAIN: TrainingSampler
DATASETS:
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
  PRECOMPUTED_PROPOSAL_TOPK_TRAIN: 2000
  PROPOSAL_FILES_TEST: []
  PROPOSAL_FILES_TRAIN: []
  TEST:
  - scihub_train
  TRAIN:
  - scihub_train
GLOBAL:
  HACK: 1.0
ICDAR_DATA_DIR_TEST: ''
ICDAR_DATA_DIR_TRAIN: ''
INPUT:
  CROP:
    ENABLED: true
    SIZE:
    - 384
    - 600
    TYPE: absolute_range
  FORMAT: RGB
  MASK_FORMAT: polygon
  MAX_SIZE_TEST: 1333
  MAX_SIZE_TRAIN: 1333
  MIN_SIZE_TEST: 800
  MIN_SIZE_TRAIN:
  - 480
  - 512
  - 544
  - 576
  - 608
  - 640
  - 672
  - 704
  - 736
  - 768
  - 800
  MIN_SIZE_TRAIN_SAMPLING: choice
  RANDOM_FLIP: horizontal
MODEL:
  ANCHOR_GENERATOR:
    ANGLES:
    - - -90
      - 0
      - 90
    ASPECT_RATIOS:
    - - 0.5
      - 1.0
      - 2.0
    NAME: DefaultAnchorGenerator
    OFFSET: 0.0
    SIZES:
    - - 32
    - - 64
    - - 128
    - - 256
    - - 512
  BACKBONE:
    FREEZE_AT: 2
    NAME: build_vit_fpn_backbone
  CONFIG_PATH: ''
  DEVICE: cuda
  FPN:
    FUSE_TYPE: sum
    IN_FEATURES:
    - layer3
    - layer5
    - layer7
    - layer11
    NORM: ''
    OUT_CHANNELS: 256
  IMAGE_ONLY: true
  KEYPOINT_ON: false
  LOAD_PROPOSALS: false
  MASK_ON: true
  META_ARCHITECTURE: VLGeneralizedRCNN
  PANOPTIC_FPN:
    COMBINE:
      ENABLED: true
      INSTANCES_CONFIDENCE_THRESH: 0.5
      OVERLAP_THRESH: 0.5
      STUFF_AREA_LIMIT: 4096
    INSTANCE_LOSS_WEIGHT: 1.0
  PIXEL_MEAN:
  - 127.5
  - 127.5
  - 127.5
  PIXEL_STD:
  - 127.5
  - 127.5
  - 127.5
  PROPOSAL_GENERATOR:
    MIN_SIZE: 0
    NAME: RPN
  RESNETS:
    DEFORM_MODULATED: false
    DEFORM_NUM_GROUPS: 1
    DEFORM_ON_PER_STAGE:
    - false
    - false
    - false
    - false
    DEPTH: 50
    NORM: FrozenBN
    NUM_GROUPS: 1
    OUT_FEATURES:
    - res4
    RES2_OUT_CHANNELS: 256
    RES5_DILATION: 1
    STEM_OUT_CHANNELS: 64
    STRIDE_IN_1X1: true
    WIDTH_PER_GROUP: 64
  RETINANET:
    BBOX_REG_LOSS_TYPE: smooth_l1
    BBOX_REG_WEIGHTS:
    - 1.0
    - 1.0
    - 1.0
    - 1.0
    FOCAL_LOSS_ALPHA: 0.25
    FOCAL_LOSS_GAMMA: 2.0
    IN_FEATURES:
    - p3
    - p4
    - p5
    - p6
    - p7
    IOU_LABELS:
    - 0
    - -1
    - 1
    IOU_THRESHOLDS:
    - 0.4
    - 0.5
    NMS_THRESH_TEST: 0.5
    NORM: ''
    NUM_CLASSES: 10
    NUM_CONVS: 4
    PRIOR_PROB: 0.01
    SCORE_THRESH_TEST: 0.05
    SMOOTH_L1_LOSS_BETA: 0.1
    TOPK_CANDIDATES_TEST: 1000
  ROI_BOX_CASCADE_HEAD:
    BBOX_REG_WEIGHTS:
    - - 10.0
      - 10.0
      - 5.0
      - 5.0
    - - 20.0
      - 20.0
      - 10.0
      - 10.0
    - - 30.0
      - 30.0
      - 15.0
      - 15.0
    IOUS:
    - 0.5
    - 0.6
    - 0.7
  ROI_BOX_HEAD:
    BBOX_REG_LOSS_TYPE: smooth_l1
    BBOX_REG_LOSS_WEIGHT: 1.0
    BBOX_REG_WEIGHTS:
    - 10.0
    - 10.0
    - 5.0
    - 5.0
    CLS_AGNOSTIC_BBOX_REG: true
    CONV_DIM: 256
    FC_DIM: 1024
    NAME: FastRCNNConvFCHead
    NORM: ''
    NUM_CONV: 0
    NUM_FC: 2
    POOLER_RESOLUTION: 7
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
    SMOOTH_L1_BETA: 0.0
    TRAIN_ON_PRED_BOXES: false
  ROI_HEADS:
    BATCH_SIZE_PER_IMAGE: 512
    IN_FEATURES:
    - p2
    - p3
    - p4
    - p5
    IOU_LABELS:
    - 0
    - 1
    IOU_THRESHOLDS:
    - 0.5
    NAME: CascadeROIHeads
    NMS_THRESH_TEST: 0.5
    NUM_CLASSES: 10
    POSITIVE_FRACTION: 0.25
    PROPOSAL_APPEND_GT: true
    SCORE_THRESH_TEST: 0.05
  ROI_KEYPOINT_HEAD:
    CONV_DIMS:
    - 512
    - 512
    - 512
    - 512
    - 512
    - 512
    - 512
    - 512
    LOSS_WEIGHT: 1.0
    MIN_KEYPOINTS_PER_IMAGE: 1
    NAME: KRCNNConvDeconvUpsampleHead
    NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS: true
    NUM_KEYPOINTS: 17
    POOLER_RESOLUTION: 14
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
  ROI_MASK_HEAD:
    CLS_AGNOSTIC_MASK: false
    CONV_DIM: 256
    NAME: MaskRCNNConvUpsampleHead
    NORM: ''
    NUM_CONV: 4
    POOLER_RESOLUTION: 14
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
  RPN:
    BATCH_SIZE_PER_IMAGE: 256
    BBOX_REG_LOSS_TYPE: smooth_l1
    BBOX_REG_LOSS_WEIGHT: 1.0
    BBOX_REG_WEIGHTS:
    - 1.0
    - 1.0
    - 1.0
    - 1.0
    BOUNDARY_THRESH: -1
    CONV_DIMS:
    - -1
    HEAD_NAME: StandardRPNHead
    IN_FEATURES:
    - p2
    - p3
    - p4
    - p5
    - p6
    IOU_LABELS:
    - 0
    - -1
    - 1
    IOU_THRESHOLDS:
    - 0.3
    - 0.7
    LOSS_WEIGHT: 1.0
    NMS_THRESH: 0.7
    POSITIVE_FRACTION: 0.5
    POST_NMS_TOPK_TEST: 1000
    POST_NMS_TOPK_TRAIN: 2000
    PRE_NMS_TOPK_TEST: 1000
    PRE_NMS_TOPK_TRAIN: 2000
    SMOOTH_L1_BETA: 0.0
  SEM_SEG_HEAD:
    COMMON_STRIDE: 4
    CONVS_DIM: 128
    IGNORE_VALUE: 255
    IN_FEATURES:
    - p2
    - p3
    - p4
    - p5
    LOSS_WEIGHT: 1.0
    NAME: SemSegFPNHead
    NORM: GN
    NUM_CLASSES: 10
  VIT:
    DROP_PATH: 0.1
    IMG_SIZE:
    - 224
    - 224
    NAME: layoutlmv3_base
    OUT_FEATURES:
    - layer3
    - layer5
    - layer7
    - layer11
    POS_TYPE: abs
  WEIGHTS: 
OUTPUT_DIR: 
SCIHUB_DATA_DIR_TRAIN: ~/publaynet/layout_scihub/train
SEED: 42
SOLVER:
  AMP:
    ENABLED: true
  BACKBONE_MULTIPLIER: 1.0
  BASE_LR: 0.0002
  BIAS_LR_FACTOR: 1.0
  CHECKPOINT_PERIOD: 2000
  CLIP_GRADIENTS:
    CLIP_TYPE: full_model
    CLIP_VALUE: 1.0
    ENABLED: true
    NORM_TYPE: 2.0
  GAMMA: 0.1
  GRADIENT_ACCUMULATION_STEPS: 1
  IMS_PER_BATCH: 32
  LR_SCHEDULER_NAME: WarmupCosineLR
  MAX_ITER: 20000
  MOMENTUM: 0.9
  NESTEROV: false
  OPTIMIZER: ADAMW
  REFERENCE_WORLD_SIZE: 0
  STEPS:
  - 10000
  WARMUP_FACTOR: 0.01
  WARMUP_ITERS: 333
  WARMUP_METHOD: linear
  WEIGHT_DECAY: 0.05
  WEIGHT_DECAY_BIAS: null
  WEIGHT_DECAY_NORM: 0.0
TEST:
  AUG:
    ENABLED: false
    FLIP: true
    MAX_SIZE: 4000
    MIN_SIZES:
    - 400
    - 500
    - 600
    - 700
    - 800
    - 900
    - 1000
    - 1100
    - 1200
  DETECTIONS_PER_IMAGE: 100
  EVAL_PERIOD: 1000
  EXPECTED_RESULTS: []
  KEYPOINT_OKS_SIGMAS: []
  PRECISE_BN:
    ENABLED: false
    NUM_ITER: 200
VERSION: 2
VIS_PERIOD: 0

[09/20 11:06:41 d2.checkpoint.detection_checkpoint]: [DetectionCheckpointer] Loading from models/Layout/model_final.pth ...
[09/20 11:06:41 fvcore.common.checkpoint]: [Checkpointer] Loading from models/Layout/model_final.pth ...
[2024/09/20 11:06:41] ppocr DEBUG: Namespace(help='==SUPPRESS==', use_gpu=True, use_xpu=False, use_npu=False, ir_optim=True, use_tensorrt=False, min_subgraph_size=15, precision='fp32', gpu_mem=500, gpu_id=0, image_dir=None, page_num=0, det_algorithm='DB', det_model_dir='/home/sachin/.paddleocr/whl/det/ch/ch_PP-OCRv4_det_infer', det_limit_side_len=960, det_limit_type='max', det_box_type='quad', det_db_thresh=0.3, det_db_box_thresh=0.3, det_db_unclip_ratio=1.5, max_batch_size=10, use_dilation=False, det_db_score_mode='fast', det_east_score_thresh=0.8, det_east_cover_thresh=0.1, det_east_nms_thresh=0.2, det_sast_score_thresh=0.5, det_sast_nms_thresh=0.2, det_pse_thresh=0, det_pse_box_thresh=0.85, det_pse_min_area=16, det_pse_scale=1, scales=[8, 16, 32], alpha=1.0, beta=1.0, fourier_degree=5, rec_algorithm='SVTR_LCNet', rec_model_dir='/home/sachin/.paddleocr/whl/rec/ch/ch_PP-OCRv4_rec_infer', rec_image_inverse=True, rec_image_shape='3, 48, 320', rec_batch_num=6, max_text_length=25, rec_char_dict_path='/home/sachin/miniforge3/envs/pdf/lib/python3.10/site-packages/paddleocr/ppocr/utils/ppocr_keys_v1.txt', use_space_char=True, vis_font_path='./doc/fonts/simfang.ttf', drop_score=0.5, e2e_algorithm='PGNet', e2e_model_dir=None, e2e_limit_side_len=768, e2e_limit_type='max', e2e_pgnet_score_thresh=0.5, e2e_char_dict_path='./ppocr/utils/ic15_dict.txt', e2e_pgnet_valid_set='totaltext', e2e_pgnet_mode='fast', use_angle_cls=False, cls_model_dir='/home/sachin/.paddleocr/whl/cls/ch_ppocr_mobile_v2.0_cls_infer', cls_image_shape='3, 48, 192', label_list=['0', '180'], cls_batch_num=6, cls_thresh=0.9, enable_mkldnn=False, cpu_threads=10, use_pdserving=False, warmup=False, sr_model_dir=None, sr_image_shape='3, 32, 128', sr_batch_num=1, draw_img_save_dir='./inference_results', save_crop_res=False, crop_res_save_dir='./output', use_mp=False, total_process_num=1, process_id=0, benchmark=False, save_log_path='./log_output/', show_log=True, use_onnx=False, output='./output', table_max_len=488, table_algorithm='TableAttn', table_model_dir=None, merge_no_span_structure=True, table_char_dict_path=None, layout_model_dir=None, layout_dict_path=None, layout_score_threshold=0.5, layout_nms_threshold=0.5, kie_algorithm='LayoutXLM', ser_model_dir=None, re_model_dir=None, use_visual_backbone=True, ser_dict_path='../train_data/XFUND/class_list_xfun.txt', ocr_order_method=None, mode='structure', image_orientation=False, layout=True, table=True, ocr=True, recovery=False, use_pdf2docx_api=False, invert=False, binarize=False, alphacolor=(255, 255, 255), lang='ch', det=True, rec=True, type='ocr', ocr_version='PP-OCRv4', structure_version='PP-StructureV2')
2024-09-20 11:06:26
Model init done!
total files: 1
pdf index: 0 pages: 15
Traceback (most recent call last):
  File "/home/sachin/Documents/FYP/pdf-extract-kit/content/PDF-Extract-Kit/pdf_extract.py", line 131, in <module>
    layout_res = layout_model(image, ignore_catids=[])
  File "/home/sachin/Documents/FYP/pdf-extract-kit/content/PDF-Extract-Kit/modules/layoutlmv3/model_init.py", line 124, in __call__
    outputs = self.predictor(image)
  File "/home/sachin/miniforge3/envs/pdf/lib/python3.10/site-packages/detectron2/engine/defaults.py", line 319, in __call__
    predictions = self.model([inputs])[0]
  File "/home/sachin/miniforge3/envs/pdf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/sachin/miniforge3/envs/pdf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sachin/Documents/FYP/pdf-extract-kit/content/PDF-Extract-Kit/modules/layoutlmv3/rcnn_vl.py", line 55, in forward
    return self.inference(batched_inputs)
  File "/home/sachin/Documents/FYP/pdf-extract-kit/content/PDF-Extract-Kit/modules/layoutlmv3/rcnn_vl.py", line 113, in inference
    features = self.backbone(input)
  File "/home/sachin/miniforge3/envs/pdf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/sachin/miniforge3/envs/pdf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sachin/miniforge3/envs/pdf/lib/python3.10/site-packages/detectron2/modeling/backbone/fpn.py", line 139, in forward
    bottom_up_features = self.bottom_up(x)
  File "/home/sachin/miniforge3/envs/pdf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/sachin/miniforge3/envs/pdf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sachin/Documents/FYP/pdf-extract-kit/content/PDF-Extract-Kit/modules/layoutlmv3/backbone.py", line 106, in forward
    return self.backbone.forward(
  File "/home/sachin/Documents/FYP/pdf-extract-kit/content/PDF-Extract-Kit/modules/layoutlmv3/layoutlmft/models/layoutlmv3/modeling_layoutlmv3.py", line 949, in forward
    encoder_outputs = self.encoder(
  File "/home/sachin/miniforge3/envs/pdf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/sachin/miniforge3/envs/pdf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sachin/Documents/FYP/pdf-extract-kit/content/PDF-Extract-Kit/modules/layoutlmv3/layoutlmft/models/layoutlmv3/modeling_layoutlmv3.py", line 647, in forward
    layer_outputs = layer_module(
  File "/home/sachin/miniforge3/envs/pdf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/sachin/miniforge3/envs/pdf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sachin/Documents/FYP/pdf-extract-kit/content/PDF-Extract-Kit/modules/layoutlmv3/layoutlmft/models/layoutlmv3/modeling_layoutlmv3.py", line 435, in forward
    self_attention_outputs = self.attention(
  File "/home/sachin/miniforge3/envs/pdf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/sachin/miniforge3/envs/pdf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sachin/Documents/FYP/pdf-extract-kit/content/PDF-Extract-Kit/modules/layoutlmv3/layoutlmft/models/layoutlmv3/modeling_layoutlmv3.py", line 394, in forward
    self_outputs = self.self(
  File "/home/sachin/miniforge3/envs/pdf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/sachin/miniforge3/envs/pdf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sachin/Documents/FYP/pdf-extract-kit/content/PDF-Extract-Kit/modules/layoutlmv3/layoutlmft/models/layoutlmv3/modeling_layoutlmv3.py", line 335, in forward
    attention_probs = self.cogview_attn(attention_scores)  # to stablize training
  File "/home/sachin/Documents/FYP/pdf-extract-kit/content/PDF-Extract-Kit/modules/layoutlmv3/layoutlmft/models/layoutlmv3/modeling_layoutlmv3.py", line 271, in cogview_attn
    new_attention_scores = (scaled_attention_scores - max_value) * alpha
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 500.00 MiB. GPU 

Once again, any assistance with this would be greatly appreciated.

Cheers, Sachin

wangbinDL commented 1 day ago

Hello @Sachin-Bhat

Your RTX 3060 GPU might be running out of memory because of the batch size used during inference (your log shows batch_size=128). To resolve this, try reducing it with the --batch-size parameter.

For example, start with a batch size of 16:

python pdf_extract.py --pdf assets/examples/example.pdf --batch-size 16

If you still encounter memory issues, further reduce the batch size:

python pdf_extract.py --pdf assets/examples/example.pdf --batch-size 8
python pdf_extract.py --pdf assets/examples/example.pdf --batch-size 4

This should help manage the GPU memory usage better.
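As a quick sanity check before lowering the batch size further, here is a minimal sketch (not part of pdf_extract.py; it only uses the standard torch.cuda.mem_get_info API from the PyTorch build shown in your log) that prints how much free VRAM is actually available before a run:

import torch

# Free and total device memory in bytes, as reported by the CUDA driver for GPU 0.
free_bytes, total_bytes = torch.cuda.mem_get_info(0)
print(f"GPU 0: {free_bytes / 1024**3:.2f} GiB free / {total_bytes / 1024**3:.2f} GiB total")

If a large share of the memory is already taken by the desktop or another process, even a small batch size can still hit the out-of-memory error.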

Sachin-Bhat commented 1 day ago

Hey @wangbinDL

Thank you for the prompt response. I tried what you suggested, but even setting the batch size to 1 did not fix the issue, so I am unsure whether this is a bug. Please let me know if there is anything else I can try to work around it.

Cheers, Sachin

myhloli commented 1 day ago

Maybe you need more than 8 GB of VRAM to run this project.
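For reference, a quick way to confirm how much VRAM the card reports (a small sketch using the standard torch.cuda.get_device_properties API; laptop variants of the RTX 3060 commonly ship with 6 GB, below the roughly 8 GB suggested above):

import torch

# Report the GPU name and its total VRAM as seen by PyTorch.
props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 1024**3:.2f} GiB total VRAM")

If the total comes out around 6 GiB, the layout model may simply not fit on this GPU at any batch size, which would be consistent with the error persisting even at batch size 1.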