How often do radiologists spot errors in patients' files/data, and what kinds of tools do they currently use to validate data and/or fix errors? What kinds of data are most prone to error? (Looking specifically at the data provided in the RSNA)
In the Report/Export view, would clinicians prefer to have as much data as possible crammed into the viewport (for quick scanning, for example), in a way similar to the infographic report example, or would they prefer a more spacious line-by-line document format example? (Specifically talking about the RSNA standard)
Is there a process to update a patient's record once it has been saved? If so, how long does it take to update the record? If it is instant, how long does it take to validate whether the new data is correct? And how long does it take until the patient's treatment gets updated according to their new status?
NOTE: The second question specifically targets digital viewing. The exported file will be formatted to follow the formal convention.
I have some pretty obvious questions:
If you could wish for a piece of software that could help you detect lung cancer nodules, what would be its most useful features, and how would it differ from any possible "competitors" that already exist?
What are the most important properties such software would have to fulfill for you to use it on a regular basis (intuitive user interface, responsiveness, high accuracy, ...)?
I have some here too:
What would you like to see improved or changed in your current workflow using tools from a standard PACS software package, compared with the workflow in the concept-to-clinic project?
In the lung cancer screening report, is there information missing from the standard template that you would want in the report generated by the concept-to-clinic project (or information that should be omitted)? If so, which information, and why?
If the tool being built allowed results from multiple algorithms, with reports based on a given algorithm, how would that impact the final decision a clinician makes about cancer detection?
@louisgv @WGierke @musale Thanks for the questions! We'll pass them along and surface interesting responses when we get them.
I wonder if clinicians would appreciate having: 1) an interactive tool that allows a feedback cycle, e.g. to record their hypothesis about a potential mass detection, the procedures intended for the patient, and the outcome, so that all of this information could update the detection algorithm(s) or at minimum document any outlier situations; 2) a tool that allows comparison between the current patient's CT scan, e.g. with the various (coronal/sagittal/axial) views zoomed to the possible detection location(s), and data averaged across some 'normalized' healthy population at similar location(s). Would either of these be helpful?
Additionally, I wonder what general heuristics radiologists employ when they scan the images for potential anomalies -- e.g. is it a top-down approach based on medical notes? If so, perhaps the algorithms might need to be tuned to sift out some of this information.
@hengrumay (I'd love to award you some points for your contribution but you don't seem to have signed up to the competition!)
@lamby I just signed up -- I hope it got through. Either way no worries. Just trying to be helpful! Thanks!
> Most CAD systems in clinical use today have their internal threshold set to operate somewhere between 1 to 4 false positives per scan on average. Some systems allow the user to vary the threshold. To make the task more challenging, we included low false positive rates in our evaluation. This determines if a system can also identify a significant percentage of nodules with very few false alarms, as might be needed for CAD algorithms that operate more or less autonomously.
- Based on this quote from the LUNA16 evaluation page, are there any changes in preference, i.e. what number of false positives per scan (FPPS) should be treated as appropriate? I also want to point out that the sensitivities corresponding to lower FPPS are less stable. This can be observed by adding trivial test-time augmentation (TTA), e.g. a flip along some axis. This trick raises the tail of the FROC curve and also reduces its variance without any other changes to the algorithm itself. The following plots show results without and with TTA, respectively:
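For concreteness, here is a minimal sketch of the flip-based TTA trick described above. It assumes a hypothetical `model.predict(volume)` interface that returns a nodule-probability map with the same spatial shape as the input CT volume; the project's actual detection code may differ.

```python
# Minimal sketch of flip-based test-time augmentation (TTA).
# `model` is assumed to expose a hypothetical predict(volume) method that
# returns a probability map with the same spatial shape as the input volume.
import numpy as np

def predict_with_tta(model, volume, axes=(0, 1, 2)):
    """Average predictions over the original volume and its axis-flipped copies."""
    predictions = [model.predict(volume)]
    for axis in axes:
        flipped = np.flip(volume, axis=axis)
        pred = model.predict(flipped)
        # Flip the prediction back so every map is in the original orientation.
        predictions.append(np.flip(pred, axis=axis))
    return np.mean(predictions, axis=0)
```

Averaging over flipped copies smooths out orientation-dependent noise in the predictions, which is why the FROC variance drops without touching the underlying model.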
A couple of questions to inform the work to help radiologists and patients with better software:
Given that search, detection, and classification of lung nodules are one part of a pipeline of clinical tasks in the radiological evaluation of at-risk lung cancer patients' chest CT scans, what are three key attributes you would favor in a lung nodule diagnosis system that integrates with your clinical software and related clinical workflows? For example: high accuracy (AUC-ROC), speed, operational software cost, ease of use, a description of the classification rationale for each read, incorporation of additional clinical tasks beyond detection/diagnosis, etc.
What are the key challenges you see to implementing lung nodule diagnosis in an actual clinical environment? Culture? System cost? Accuracy? Quality of scientific data or studies? Something else?
What type of data and evidence would you require in order to actually use a lung nodule diagnosis program in your clinical practice - a prospective clinical trial, a minimum level of accuracy, etc.?
Hi, How can I sign up for the competition?
@kyounis The competition is no longer running.
This project is about making something useful for clinicians. This is your chance to get input from radiologists working on the front lines of early detection. We would love to hear what you are wondering about in terms of the workflow at the clinic, and where user input can address questions that come up for you as you work on the application.
The team at ALCF will be coordinating responses to submitted questions through their network of clinicians, researchers, and patients.
Submit a question by commenting directly on this issue. You'll earn 2 community points for each of up to three original, substantive questions.