drivendataorg / concept-to-clinic

ALCF Concept to Clinic Challenge
https://concepttoclinic.drivendata.org/
MIT License

Ask a Clinician! (add a question, get points) #230

Closed. pjbull closed this issue 6 years ago.

pjbull commented 6 years ago

This project is about making something useful for clinicians. This is your chance to pull up a chair and get input from radiologists working on the front lines of early detection. We would love to hear what you are wondering about in terms of the workflow at the clinic, and where user input can address questions that come up for you as you work on the application.

The team at ALCF will be coordinating responses to submitted questions through their network of clinicians, researchers, and patients.

Submit a question by commenting directly on this issue. You'll earn 2 community points for each of up to three original, substantive questions.

louisgv commented 6 years ago

  1. How often do radiologists spot errors in patients' files/data, and what kinds of tools do they currently use to validate data and/or fix errors? Which kinds of data are most prone to error? (Looking at the data provided in the RSNA specifically.)

  2. In the Report/Export view, would clinicians prefer to have as much data crammed into the viewport as possible (for quick scanning, for example), in a way similar to an infographic report example, or would they prefer a more spacious line-by-line document format example? (Specifically talking about the RSNA standard.)

  3. Is there a process to update a patient's record once it has been saved? If so, how long does it take to update the record? If it happens instantly, how long does it take to validate that the new data is correct? And how long does it take until the patient's treatment gets updated according to their new status?

NOTE: The 2nd question targets digital viewing specifically. The exported file will be formatted to follow the formal convention.

WGierke commented 6 years ago

I have some pretty obvious questions:

  1. If you could wish for a piece of software to help you detect lung cancer nodules, what would its most useful features be, and how would it be different from any possible "competitors" that already exist?

  2. What are the most important properties such software has to fulfill for you to use it on a regular basis (an intuitive user interface, responsiveness, high accuracy, ...)?

musale commented 6 years ago

I have some here too:

  1. What would you like improved or changed in your current workflow with standard PACS software packages, compared with the workflow in the concept-to-clinic project?

  2. In the lung cancer screening report, is there information missing from the standard template that you would want in the report generated by the concept-to-clinic project (or information you would want omitted)? If so, which information and why?

  3. If the tool being built allowed results from multiple algorithms, with reports based on a given algorithm, how would that impact the final decision a clinician makes about the cancer detection?

isms commented 6 years ago

@louisgv @WGierke @musale Thanks for the questions! We'll pass them along and surface interesting responses when we get them.

hengrumay commented 6 years ago

I wonder if clinicians would appreciate having 1) an interactive tool that allows a feedback cycle, e.g. letting them indicate their hypothesis about a potential mass detection, the procedures intended for the patient, and the outcome, such that all this information could update the detection algorithm(s) or at minimum document any outlier situations; and 2) a tool that allows comparison between the current patient's CT scan, e.g. various (coronal/sagittal/axial) views zoomed to possible detection location(s), and data averaged across some 'normalized' healthy population at similar location(s). Would that be helpful?

Additionally, I wonder what general heuristics radiologists employ when they scan the images for potential anomalies -- e.g. is it a top-down approach based on medical notes? If so, perhaps the algorithms might need to take some of this information into account.

lamby commented 6 years ago

@hengrumay (I'd love to award you some points for your contribution but you don't seem to have signed up to the competition!)

hengrumay commented 6 years ago

@lamby I just signed up -- I hope it got through. Either way no worries. Just trying to be helpful! Thanks!

vessemer commented 6 years ago

Most CAD systems in clinical use today have their internal threshold set to operate somewhere between 1 to 4 false positives per scan on average. Some systems allow the user to vary the threshold. To make the task more challenging, we included low false positive rates in our evaluation. This determines if a system can also identify a significant percentage of nodules with very few false alarms, as might be needed for CAD algorithms that operate more or less autonomously.

  • Based on the quote from the LUNA16 evaluation page: are there any changes in preference, i.e. what rate of false positives per scan (FPPS) should be treated as appropriate? I also want to point out that the sensitivities corresponding to lower FPPS are less stable. This can be observed by adding trivial test-time augmentation (TTA), e.g. a flip along some axis. This trick raises the tail and also reduces the variance of the FROC curve without any other manipulation of the algorithm itself. The plots below are without TTA and with TTA, respectively:

(FROC plots for froc_noduleresnet and froc_3dlrcnn, shown without TTA and with TTA respectively; screenshots from 2018-01-23)
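
For concreteness, here is a minimal sketch of the flip-based TTA described above. `predict_probs` is a hypothetical stand-in for a nodule detector that maps a CT volume to a voxel-wise probability map of the same shape; this is an illustration, not the challenge's actual detector API.

```python
# Minimal sketch of flip-based test-time augmentation (TTA).
# `predict_probs` is a hypothetical detector: volume (z, y, x) ->
# probability map of the same shape.
import numpy as np

def predict_with_tta(volume: np.ndarray, predict_probs) -> np.ndarray:
    """Average the detector's output over a flip along each axis."""
    probs = predict_probs(volume)
    for axis in range(volume.ndim):
        flipped = np.flip(volume, axis=axis)
        # Flip the prediction back so it aligns with the original volume.
        probs = probs + np.flip(predict_probs(flipped), axis=axis)
    return probs / (volume.ndim + 1)
```

Averaging the scores over flipped copies is what reduces the variance of the FROC curve at the low-FPPS operating points, as the plots suggest.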

swarm-ai commented 6 years ago

A few questions to inform the work of helping radiologists and patients with better software:

  1. Given that the search, detection, and classification of lung nodules are one part of a pipeline of clinical tasks in the radiological evaluation of at-risk lung cancer patients' chest CT scans, what are 3 key attributes you would favor in a lung nodule diagnosis system that integrates with your clinical software and related clinical workflows? For example: high accuracy (AUC-ROC), speed, operational software cost, ease of use, a description of the classification rationale for each read, incorporation of additional clinical tasks beyond detection/diagnosis, etc.

  2. What are the key challenges you see to implementing a lung nodule diagnosis system in an actual clinical environment? Culture? System cost? Accuracy? The quality of scientific data or studies? Something else?

  3. What type of data and evidence would you require in order to actually use a lung nodule diagnosis program in your clinical practice: a prospective clinical trial, a minimum level of accuracy, etc.?

eelvira commented 6 years ago

  1. Does it make sense to save false positives as well, so that their locations can be checked in the patient's next CT? (A sketch of this follows the list.)

  2. What is the first thing radiologists notice if pathologies are barely noticeable and the algorithm did not mark these areas as probably pathological?

  3. Where is the program planned to run: directly on doctors' computers, or remotely on a server? Is it worth implementing algorithms with worse performance if they are also less resource-consuming?
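
Regarding question 1, here is a minimal sketch of what checking saved false-positive locations against a follow-up scan might look like. It assumes both scans have already been registered to a common coordinate space; the function name and coordinates are made up for illustration, and a real pipeline would need image registration between the two studies first.

```python
# Minimal sketch: flag new detections that lie near a previously saved
# false-positive location. Coordinates are assumed to be (z, y, x) in mm,
# in a shared, already-registered coordinate space.
import numpy as np

def recurring_candidates(prior_fps_mm, new_detections_mm, radius_mm=5.0):
    """Return indices of new detections within radius_mm of a saved FP."""
    prior = np.asarray(prior_fps_mm, dtype=float)     # shape (N, 3)
    new = np.asarray(new_detections_mm, dtype=float)  # shape (M, 3)
    # Pairwise distances between every new detection and every saved FP.
    dists = np.linalg.norm(new[:, None, :] - prior[None, :, :], axis=-1)
    return np.where(dists.min(axis=1) <= radius_mm)[0]
```

For example, `recurring_candidates([[10, 20, 30]], [[11, 21, 29], [100, 5, 5]])` returns `[0]`: the first new detection sits near a saved false positive and could be surfaced for review.
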
kyounis commented 6 years ago

Hi, how can I sign up for the competition?

isms commented 6 years ago

@kyounis The competition is no longer running.