szc19990412 / TransMIL

TransMIL: Transformer based Correlated Multiple Instance Learning for Whole Slide Image Classification
363 stars 74 forks

Running TransMIL training on CPU #45

Open tymsoncyferki opened 7 months ago

tymsoncyferki commented 7 months ago

Hi, if anybody was wondering how to run the model on CPU (for smaller datasets such as Bisque Breast Cancer, a CPU is sufficient), these are the changes I made:

In train.py:

  1. Change the default value of the gpus argument so no GPUs are requested:
    parser.add_argument('--gpus', default = [])
  2. Delete the precision and gpus arguments from the Trainer:
    trainer = Trainer(
      num_sanity_val_steps=0, 
      logger=cfg.load_loggers,
      callbacks=cfg.callbacks,
      max_epochs=cfg.General.epochs,
      amp_level=cfg.General.amp_level,
      accumulate_grad_batches=cfg.General.grad_acc,
      deterministic=True,
      check_val_every_n_epoch=1,
    )

In TransMIL.py:

  3. Everywhere cuda is used, just delete the .cuda() call, e.g.:
    # cls_tokens = self.cls_token.expand(B, -1, -1).cuda()
    cls_tokens = self.cls_token.expand(B, -1, -1)
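Instead of deleting the .cuda() calls one by one, a device-agnostic pattern sidesteps the problem entirely. Below is a minimal sketch (the TinyMIL class, dimensions, and names are made up for illustration, not the real TransMIL module): because cls_token is registered as an nn.Parameter, moving the model with .to(device) already moves it, so expanding it inherits the right device with no explicit .cuda():

```python
import torch
import torch.nn as nn

class TinyMIL(nn.Module):
    """Hypothetical minimal module illustrating the device-agnostic
    cls_token pattern (not the actual TransMIL implementation)."""

    def __init__(self, dim=8):
        super().__init__()
        # Registered as a Parameter, so model.to(device) moves it too.
        self.cls_token = nn.Parameter(torch.randn(1, 1, dim))

    def forward(self, x):
        B = x.shape[0]
        # Expanding inherits the parameter's device; no .cuda() needed.
        cls_tokens = self.cls_token.expand(B, -1, -1)
        return torch.cat((cls_tokens, x), dim=1)

model = TinyMIL(dim=8)            # stays on CPU unless explicitly moved
out = model(torch.randn(2, 3, 8)) # batch of 2 bags, 3 instances each
print(tuple(out.shape))           # (2, 4, 8): cls token prepended
```

The same code runs unchanged on a GPU after a single model.to("cuda") call, which is why removing the hard-coded .cuda() calls is safe in both settings.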

In the terminal, run training without the gpus argument:

    python train.py --stage='train' --config='Bisque/TransMIL.yaml' --fold=0
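As a more general alternative to hard-coding an empty gpus list, the device can be selected once at startup. This is a sketch in plain PyTorch (not tied to this repo's code) that works unchanged on both CPU-only and GPU machines:

```python
import torch

# Select the device once; on a machine without CUDA this falls back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Create tensors directly on the chosen device rather than calling
# .cuda() on them afterwards.
x = torch.randn(4, 16, device=device)
print(x.device.type)
```

Passing device= at tensor creation (or calling model.to(device) once) keeps all device logic in one place, so the rest of the training code never needs to know whether a GPU is present.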