TIO-IKIM / CellViT

CellViT: Vision Transformers for Precise Cell Segmentation and Classification
https://doi.org/10.1016/j.media.2024.103143

When I use a 3090 for prediction, it shows a memory overflow. The model is CellViT-256. #3

Closed Transformer-man closed 1 year ago

Transformer-man commented 1 year ago

[screenshot attached]

Transformer-man commented 1 year ago

How can I reduce the memory usage?

FabianHoerst commented 1 year ago

During training or inference? The code is optimized to run on a 48 GB NVIDIA GPU. During training, reduce the batch size. For inference, you need to reduce the inference batch size in cell_detection.py (see the snippet below). However, we are currently improving the inference performance and will also add a batch-size parameter for smaller GPUs.

wsi_inference_dataloader = DataLoader(
    dataset=wsi_inference_dataset,
    batch_size=8,  # lower this value (e.g. 2 or 4) to fit GPUs with less memory
    num_workers=16,
    shuffle=False,
    collate_fn=wsi_inference_dataset.collate_batch,
    pin_memory=False,
)
FabianHoerst commented 1 year ago

The batch-size parameter has been added. Please check the new inference parameters in the CLI. Using a batch size of 2 or 4 should work on your GPU.
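
For reference, a minimal sketch of how such a batch-size option could be exposed on the command line and passed through to the dataloader shown above. The argument name, default, and surrounding structure are assumptions for illustration, not the repository's actual CLI:

import argparse

from torch.utils.data import DataLoader


def build_inference_dataloader(wsi_inference_dataset, batch_size: int) -> DataLoader:
    # Same dataloader as in cell_detection.py, but with a configurable batch size.
    return DataLoader(
        dataset=wsi_inference_dataset,
        batch_size=batch_size,  # was hard-coded to 8
        num_workers=16,
        shuffle=False,
        collate_fn=wsi_inference_dataset.collate_batch,
        pin_memory=False,
    )


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="CellViT inference (batch-size sketch)")
    parser.add_argument(
        "--batch_size",
        type=int,
        default=8,
        help="Patches per forward pass; try 2 or 4 on a 24 GB GPU such as an RTX 3090",
    )
    args = parser.parse_args()
    # The WSI inference dataset would be constructed here as in cell_detection.py, then:
    # wsi_inference_dataloader = build_inference_dataloader(wsi_inference_dataset, args.batch_size)

With something like this in place, passing a batch size of 2 or 4 on the command line should keep inference within the 24 GB of an RTX 3090.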