medtorch / Q-Aid-Core

An intuitive platform for deploying the latest discoveries in healthcare AI to everybody's phones. Powered by PyTorch!
MIT License

Medical MVP: VQA+Captum demo #45

Open andreimano opened 4 years ago

andreimano commented 4 years ago

See #44 for more details about the dataset and network architecture.

The task is:

  1. Train a baseline VQA model with decent accuracy. Try using tools from MONAI as much as possible. (related to #40)
  2. Use Captum for interpretability; see the sketch after this list. (related to #35)
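
For task 2, here is a minimal, hypothetical sketch of what the Captum step could look like: a toy image+question model (not the actual network from #44), a small MONAI preprocessing pipeline for the image, and `IntegratedGradients` attributing the predicted answer back to image pixels. Layer sizes, the vocabulary, and the dummy inputs are all placeholders.

```python
# Hypothetical sketch only: a toy VQA model plus Captum's IntegratedGradients,
# not the architecture from #44. MONAI transforms handle the image side.
import numpy as np
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients
from monai.transforms import Compose, Resize, ScaleIntensity, ToTensor


class ToyVQA(nn.Module):
    """Minimal image+question model: CNN image features fused with a question embedding."""

    def __init__(self, vocab_size=1000, num_answers=50):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.q_embed = nn.EmbeddingBag(vocab_size, 32)
        self.head = nn.Linear(32 + 32, num_answers)

    def forward(self, image, question_ids):
        img_feat = self.cnn(image)            # (B, 32)
        q_feat = self.q_embed(question_ids)   # (B, 32)
        return self.head(torch.cat([img_feat, q_feat], dim=1))


# MONAI preprocessing on a dummy grayscale scan (stand-in for the real dataset).
preprocess = Compose([ScaleIntensity(), Resize(spatial_size=(224, 224)), ToTensor()])
image = preprocess(np.random.rand(1, 256, 256).astype(np.float32)).unsqueeze(0)
question = torch.randint(0, 1000, (1, 8))  # dummy tokenised question

model = ToyVQA().eval()
pred = model(image, question).argmax(dim=1)

# Attribute the predicted answer back to image pixels; the question is passed as an
# additional forward arg so only the image is perturbed along the integration path.
ig = IntegratedGradients(model)
attributions = ig.attribute(image, additional_forward_args=(question,), target=pred, n_steps=32)
print(attributions.shape)  # same shape as the image: (1, 1, 224, 224)
```

The attribution map has the same shape as the input image, so it can be overlaid on the scan as a heatmap for the demo (e.g. with `captum.attr.visualization`).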

Nice to have:

  1. Federate the model and repeat the Captum demo. (depends on #41)
  2. Find a stronger architecture (maybe transformer-based) and build a demo with it; a rough sketch follows below.
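
For the second nice-to-have, a rough, purely illustrative sketch of a transformer-based fusion model: ViT-style image patch tokens and question tokens are concatenated and run through a single `nn.TransformerEncoder`. The patch size, dimensions, and vocabulary are placeholders, and this is not the architecture discussed in the comments below.

```python
# Illustrative only: a small transformer that fuses image patch tokens
# with question tokens in one encoder, then classifies the answer.
import torch
import torch.nn as nn


class TransformerVQA(nn.Module):
    def __init__(self, vocab_size=1000, num_answers=50, d_model=128, patch=16):
        super().__init__()
        # ViT-style patch embedding: one conv with stride == kernel size.
        self.patchify = nn.Conv2d(1, d_model, kernel_size=patch, stride=patch)
        self.q_embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_answers)

    def forward(self, image, question_ids):
        patches = self.patchify(image).flatten(2).transpose(1, 2)  # (B, N_patches, d)
        words = self.q_embed(question_ids)                         # (B, N_words, d)
        tokens = torch.cat([patches, words], dim=1)                # joint sequence
        fused = self.encoder(tokens)                               # cross-modal attention
        return self.head(fused.mean(dim=1))                        # pooled -> answer logits


model = TransformerVQA()
logits = model(torch.rand(2, 1, 224, 224), torch.randint(0, 1000, (2, 8)))
print(logits.shape)  # torch.Size([2, 50])
```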
andreimano commented 4 years ago

The authors of [1] suggest that [2] gives good results. The code is publicly available at [3], but the requirements are pretty steep: according to the authors it needs 64 GB of memory (I'm assuming video memory) and 4 GPUs.

Where are we going to train our models? @bcebere @tudorcebere

tudorcebere commented 4 years ago

I can provide access to a 2080 Ti, and when it is really critical I can request access to a Tesla V100. I don't think I can get anything close to 64 GB of VRAM, though.
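
If we end up on a single 11 GB 2080 Ti rather than a 64 GB setup, mixed precision plus gradient accumulation is the usual way to squeeze the training in. A minimal sketch, assuming a generic `model(image, question) -> logits` interface and a standard `DataLoader`; none of these names come from our codebase:

```python
# Sketch: fake a larger batch on a small GPU with fp16 autocast + gradient accumulation.
import torch


def train_one_epoch(model, loader, optimizer, device="cuda", accum_steps=8):
    """Accumulate gradients over `accum_steps` micro-batches before each optimizer step."""
    scaler = torch.cuda.amp.GradScaler()
    model.train()
    optimizer.zero_grad(set_to_none=True)

    for step, (image, question, answer) in enumerate(loader):
        image, question, answer = image.to(device), question.to(device), answer.to(device)

        with torch.cuda.amp.autocast():          # fp16 forward pass
            logits = model(image, question)
            loss = torch.nn.functional.cross_entropy(logits, answer) / accum_steps

        scaler.scale(loss).backward()            # loss-scaled, fp16-safe backward

        if (step + 1) % accum_steps == 0:        # optimizer step every accum_steps
            scaler.step(optimizer)
            scaler.update()
            optimizer.zero_grad(set_to_none=True)
```

With `accum_steps=8` and a micro-batch of 8, this behaves roughly like a batch of 64 while only keeping one micro-batch's activations in memory; gradient checkpointing (`torch.utils.checkpoint`) can cut activation memory further if that is still not enough.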

andreimano commented 4 years ago

I was thinking about the AWS free credits listed on the hackathon page. We could also use the $300 Google Compute Engine free credit [1] if we want to train something that needs 64 GB of VRAM; unfortunately, I've already used my free GCE credit. I've opened a ticket for this (#46).

This issue also depends on #46.