cutright / DVHA-MLCA

MLC Analyzer from DVHA as a Stand-Alone Script

Error while running #1

Closed: yograjyog closed this issue 4 years ago

yograjyog commented 4 years ago

Hi Dan, first of all thanks for the code. While running it I got this error. Your help is highly appreciated.

[screenshot of the error]


cutright commented 4 years ago

Looks like shapely isn’t installed?

Follow-up: did you install with pip or manually with setup.py? Did you get any errors when installing?
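A quick way to check whether shapely is importable from the environment you're running the script in (a hypothetical snippet, not part of DVHA-MLCA):

```python
def shapely_status():
    """Report whether shapely can be imported, and which version if so."""
    try:
        import shapely
        return "shapely %s is installed" % shapely.__version__
    except ImportError:
        return "shapely is not installed"

print(shapely_status())
```

If it reports not installed, the error above is explained.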

SimonBiggs commented 4 years ago

For this one, try:

pip install "shapely>=1.7.0"

Also, Dan, I would recommend pinning shapely to greater than or equal to 1.7.0 in your dvh package and this one. In 1.7.0 they released wheels that make it installable on Windows without a world of pain. Putting that pin in there will make it "just work" and will help solve these sorts of issues I believe.
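For reference, that pin would look something like this in a setup.py (a minimal sketch with placeholder metadata, not the actual DVHA-MLCA setup file):

```python
# minimal setup.py sketch; name and dependency list are illustrative only
from setuptools import setup

setup(
    name="dvha-mlca",
    install_requires=[
        "shapely>=1.7.0",  # 1.7.0+ ships Windows wheels, avoiding build pain
        "pydicom",
        "numpy",
    ],
)
```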


cutright commented 4 years ago

@yograjyog Any luck?

@SimonBiggs I've experienced some issues installing shapely >1.6.4.post2 in some environments, although I don't recall if 1.7.0 was out by then. But my intent with this software is to build PyInstaller binaries to avoid these issues.

SimonBiggs commented 4 years ago

Have you seen the embedded Python distributions?

https://www.python.org/ftp/python/3.8.3/python-3.8.3-embed-amd64.zip (from https://www.python.org/downloads/windows/).

They can be combined with something like NSIS (https://nsis.sourceforge.io/Main_Page) to create installers in cases where PyInstaller can be quite a pain...

cutright commented 4 years ago

I have not, but unfortunately I've already learned those pains. PyInstaller seems to work pretty well with DVHA and DVHA DICOM Editor. But if you're pointing to a true installer, that could be interesting. I don't really like the boot time with PyInstaller, particularly on Windows.

yograjyog commented 4 years ago

I tried hard to get it working on Windows, then shifted to Ubuntu. 😀 Now it is working.

cutright commented 4 years ago

Glad it's working, that's too bad about shapely on Windows. This has been a good resource for shapely wheels too, if the ones on PyPI haven't been working. I suspect if you were using the full Anaconda, you wouldn't have issues... personally I stay away from anything Anaconda, too bloated for me.

I've had luck with something like: pip install your_wheel_file.whl

When I get time, I'll whip up a simple GUI and create a Windows and macOS binary to avoid this issue.

This code could pretty easily be adapted to other plan metrics, if you have any suggestions. I could tweak the code a little bit to make it easy for outside contributors to add other metrics.

yograjyog commented 4 years ago

Thanks for your support. I will try your suggestion. Is it possible to have the Modulation Complexity Score adapted for VMAT? Your work will be highly appreciated.

https://doi.org/10.1016/j.ejmp.2018.09.019

cutright commented 4 years ago

It should work for VMAT, at least it does on Pinnacle and Eclipse. Maybe you’re using Monaco and multi-arcs by chance? I think that’s the only beam type this code has issues with, which I’d be happy to accommodate with some anonymized examples.

cutright commented 4 years ago

Oh I see, I think this article has the formulas you are referring to?

S. B. Crowe, T. Kairn, J. Kenny, R. T. Knight, B. Hill, C. M. Langton and J. V. Trapp, "Treatment plan complexity metrics for predicting IMRT pre-treatment quality assurance results." https://link.springer.com/content/pdf/10.1007/s13246-014-0274-9.pdf

yograjyog commented 4 years ago

Is it possible to have it within DVHA or MLCA?

cutright commented 4 years ago

For sure I'll add this to MLCA.

And worst case scenario, with respect to DVHA, it has a spreadsheet editor that lets you add new columns as well as copy and paste to/from Excel. These new columns get stored in .dvha session files too. That's how I generated my gamma pass-rate prediction models and TG-218 control charts.

Thanks for following up.

cutright commented 4 years ago

Looks like the work is already done? I came across this recently, but just noticed it’s the same author as the previous reference. https://github.com/sbcrowe/pyradlib/tree/master/radlib/tada

yograjyog commented 4 years ago

Oh that sounds great. Thanks a lot for your effort.

yograjyog commented 4 years ago

I am a novice trying very hard to learn Python. How do I use that program? A basic idea would be highly appreciated. Thanks in advance.

cutright commented 4 years ago

I'll have to get back to you on that; it looks to me like the library doesn't have a user interface, not even a command line one? I'm not able to really look at it right now, though.

But from a Python console, you can use the functions in complexity.py, passing in the pydicom object (e.g., plan, beam, etc.).

So when his code asks for plan or beam, I'm assuming something like the following would get you there?

plan = pydicom.read_file('rtplan.dcm')
beam = plan.BeamSequence[beam_index_of_interest]

yograjyog commented 4 years ago

Thanks a lot.

yograjyog commented 4 years ago

I have tried something like this

[screenshot: complexity score input]

and got an output like this.

[screenshot: complexity score output]

Instead of printing values, it is showing memory locations. Any suggestions? Your help is highly appreciated.

cutright commented 4 years ago

_metric_list is a dictionary of functions, but you want the values those functions return when you pass in a pydicom object.

I think the intent is to do something like:

import pydicom
from complexity import plan_complexity
plan = pydicom.read_file('rtplan.dcm')
metrics_to_calc = ['MCS', 'MAD']  # fill this list with the calculations you want
results = plan_complexity(plan, metrics_to_calc)

Unfortunately, that doesn't work. He's looking for a tag name of BeamLimitingDevices, which I'm unfamiliar with and which doesn't exist in my plan files. There are also some issues with pandas (perhaps the latest version breaks his code), or there are some other DICOM tag issues. Your best bet is to contact him in the meantime.

cutright commented 4 years ago

Got it! I think maybe pydicom has had some changes since he wrote this? There's a good chance you could just downgrade your pydicom, but rather than guessing and checking, I changed attributes like BeamLimitingDevices to BeamLimitingDeviceSequence... similarly for ControlPoints, BeamLimitingDevicePositions, etc.

Use this for complexity.py. Then do the following code.
(Edited by linking the code to a file, rather than posting in the comment)

import pydicom
from complexity import plan_complexity
ds = pydicom.read_file('rtplan.dcm')
results = plan_complexity(ds, ['MCS'])
print(results)
      Name   MU      Dose       MCS
1   01 CCW  236   0.58091   0.32281
2    02 CW  263   0.71527  0.267371
3  03 CCW2  208  0.528058  0.385578
4   04 CCW  431  0.954637  0.272984
5    05 CW  407  0.830187  0.229794
SimonBiggs commented 4 years ago

Probably worth making this a pull request...

On Sat., 27 Jun. 2020, 8:09 am Dan Cutright, notifications@github.com wrote:

Got it! I think maybe pydicom had some changes since he wrote this? There's a good chance you could just downgrade your pydicom, but rather than guessing and checking, I changed attributes like BeamLimitingDevices to BeamLimitingDeviceSequence... similiarly for ControlPoints, BeamLimitingDevicePositions, etc.

Use this for complexity.py. Then do the following code.

# -*- coding: utf-8 -*-

""" Treatment complexity assessment module.

This module provides radiotherapy treatment complexity assessment functionality.
"""

# authorship information
__author__ = 'Scott Crowe'
__email__ = 'sb.crowe@gmail.com'
__credits__ = ['Tanya Kairn']
__license__ = "GPL3"

# import required code
import pydicom
import pandas as pd
import numpy as np
from PIL import Image
from scipy.integrate import simps

# define default parameters
_default_integration_limit = 1
_default_integration_epsilon = 0.05
_default_identifies = ['name', 'mu']
_default_metrics = ['MCS']

def plan_complexity(plan, metrics):
    """ Calculate the complexity of a treatment plan.

    Args:
        plan (Dataset): The radiotherapy treatment plan.
        metrics (list): List of metrics to calculate.
    """
    ref_beam_numbers = []
    ref_beam_monitor_units = []
    ref_beam_dose = []
    for fraction_group in plan.FractionGroupSequence:
        for ref_beam in fraction_group.ReferencedBeamSequence:
            ref_beam_numbers.append(ref_beam.ReferencedBeamNumber)
            ref_beam_monitor_units.append(ref_beam.BeamMeterset)
            ref_beam_dose.append(ref_beam.BeamDose)
    columns = ['Name', 'MU', 'Dose']
    columns.extend(metrics)
    data_frame = pd.DataFrame(index=ref_beam_numbers, columns=columns)
    for beam in plan.BeamSequence:
        beamNumber = beam.BeamNumber
        if beamNumber in data_frame.index:
            result = [beam.BeamName,
                      ref_beam_monitor_units[ref_beam_numbers.index(beamNumber)],
                      ref_beam_dose[ref_beam_numbers.index(beamNumber)]]
            result.extend(beam_complexity(beam, metrics))
            data_frame.loc[beamNumber] = result
    return data_frame

def beam_complexity(beam, metrics):
    """ Calculate the complexity of a treatment beam.

    Args:
        beam (Dataset): The beam for the complexity metrics to be calculated.
        metrics (list): List of metrics to calculate.
    """
    result = []
    for metric in metrics:
        if metric in _default_metrics:
            result.append(_metric_list[metric](beam))
        elif 'SAS' in metric:
            aperture = float(metric.replace('SAS', ''))
            result.append(small_aperture_score(beam, aperture))
    return result

def aperture_area_variability(beam):
    """ Calculate the aperture area variability of a treatment beam.

    Args:
        beam (Dataset): The beam for the aperture area variability to be calculated.
    """
    result = 0
    previous_cumulative_weight = 0.0
    total_cumulative_weight = beam.FinalCumulativeMetersetWeight
    leaf_positions = []
    orthogonal_jaw_positions = []
    leaf_boundaries = _leaf_position_boundaries(beam)
    maximum_leaf_separation = _maximum_leaf_separations(beam)
    for control_point in beam.ControlPointSequence:
        differential_weight = (control_point.CumulativeMetersetWeight - previous_cumulative_weight) / \
                              total_cumulative_weight
        previous_cumulative_weight = control_point.CumulativeMetersetWeight
        control_point_leaf_positions = _control_point_leaf_positions(control_point)
        control_point_orthogonal_jaw_positions = _control_point_orthogonal_jaw_positions(control_point)
        if control_point_leaf_positions is not None:
            leaf_positions = control_point_leaf_positions
        if control_point_orthogonal_jaw_positions is not None:
            orthogonal_jaw_positions = control_point_orthogonal_jaw_positions
        if differential_weight > 0:
            result += _control_point_aperture_area_variability(leaf_positions, orthogonal_jaw_positions,
                                                               leaf_boundaries, maximum_leaf_separation) * \
                      differential_weight
    return result

def closed_leaf_score(beam):
    """ Calculate the closed leaf score of a treatment beam.

    Args:
        beam (Dataset): The beam for the closed leaf score to be calculated.
    """
    return _weighted_metric(beam, 'CLS')

def cross_axis_score(beam):
    """ Calculate the cross axis score of a treatment beam.

    Args:
        beam (Dataset): The beam for the cross axis score to be calculated.
    """
    return _weighted_metric(beam, 'CAS')

def leaf_sequence_variability(beam):
    """ Calculate the leaf sequence variability of a treatment beam.

    Args:
        beam (Dataset): The beam for the leaf sequence variability to be calculated.
    """
    return _weighted_metric(beam, 'LSV')

def mean_asymmetry_distance(beam):
    """ Calculate the mean asymmetry distance of a treatment beam.

    Args:
        beam (Dataset): The beam for the mean asymmetry distance to be calculated.
    """
    return _weighted_metric(beam, 'MAD')

def modulation_complexity_score(beam):
    """ Calculate the modulation complexity score of a treatment beam.

    Args:
        beam (Dataset): The beam for the modulation complexity score to be calculated.
    """
    result = 0
    previous_cumulative_weight = 0.0
    total_cumulative_weight = beam.FinalCumulativeMetersetWeight
    maximum_aperture_separation = _maximum_leaf_separations(beam)
    leaf_positions = []
    orthogonal_jaw_positions = []
    leaf_position_boundaries = _leaf_position_boundaries(beam)
    for control_point in beam.ControlPointSequence:
        differential_weight = (control_point.CumulativeMetersetWeight - previous_cumulative_weight) / \
                              total_cumulative_weight
        previous_cumulative_weight = control_point.CumulativeMetersetWeight
        control_point_leaf_positions = _control_point_leaf_positions(control_point)
        control_point_orthogonal_jaw_positions = _control_point_orthogonal_jaw_positions(control_point)
        if control_point_leaf_positions is not None:
            leaf_positions = control_point_leaf_positions
        if control_point_orthogonal_jaw_positions is not None:
            orthogonal_jaw_positions = control_point_orthogonal_jaw_positions
        if differential_weight > 0:
            control_point_aav = _control_point_aperture_area_variability(leaf_positions, orthogonal_jaw_positions,
                                                                         leaf_position_boundaries,
                                                                         maximum_aperture_separation)
            control_point_lsv = _control_point_leaf_sequence_variability(leaf_positions, orthogonal_jaw_positions,
                                                                         leaf_position_boundaries)
            result += control_point_aav * control_point_lsv * differential_weight
    return result

def small_aperture_score(beam, aperture=10):
    """ Calculate the small aperture score of a treatment beam.

    Args:
        beam (Dataset): The beam for the small aperture score to be calculated.
        aperture (float): The aperture size to be scored.
    """
    result = 0
    previous_cumulative_weight = 0.0
    leaf_positions = []
    orthogonal_jaw_positions = []
    leaf_boundaries = _leaf_position_boundaries(beam)
    total_cumulative_weight = beam.FinalCumulativeMetersetWeight
    for control_point in beam.ControlPointSequence:
        differential_weight = (control_point.CumulativeMetersetWeight - previous_cumulative_weight) / \
                              total_cumulative_weight
        previous_cumulative_weight = control_point.CumulativeMetersetWeight
        control_point_leaf_positions = _control_point_leaf_positions(control_point)
        control_point_orthogonal_jaw_positions = _control_point_orthogonal_jaw_positions(control_point)
        if control_point_leaf_positions is not None:
            leaf_positions = control_point_leaf_positions
        if control_point_orthogonal_jaw_positions is not None:
            orthogonal_jaw_positions = control_point_orthogonal_jaw_positions
        if differential_weight > 0:
            result += differential_weight * _control_point_small_aperture_score(leaf_positions,
                                                                                orthogonal_jaw_positions,
                                                                                leaf_boundaries, aperture)
    return result

def _control_point_aperture_area_variability(leaf_positions, orthogonal_jaw_positions,
                                             leaf_boundaries, maximum_leaf_separations):
    leaf_pairs = int(len(leaf_positions) / 2)
    total_leaf_pair_separation = 0
    total_maximum_leaf_pair_separation = sum(maximum_leaf_separations)
    for leaf_index in range(leaf_pairs):
        if leaf_boundaries[leaf_index + 1] >= orthogonal_jaw_positions[0] \
                and leaf_boundaries[leaf_index] <= orthogonal_jaw_positions[1]:
            total_leaf_pair_separation += leaf_positions[leaf_index + leaf_pairs] - leaf_positions[leaf_index]
    if total_maximum_leaf_pair_separation > 0:
        return total_leaf_pair_separation / total_maximum_leaf_pair_separation
    return 0

def _control_point_closed_leaf_score(leaf_positions, orthogonal_jaw_positions, leaf_boundaries):
    leaf_pairs = int(len(leaf_positions) / 2)
    exposed_leaf_pairs = 0
    exposed_open_leaf_pairs = 0
    for leaf_index in range(leaf_pairs):
        if leaf_boundaries[leaf_index + 1] >= orthogonal_jaw_positions[0] \
                and leaf_boundaries[leaf_index] <= orthogonal_jaw_positions[1]:
            exposed_leaf_pairs += 1
            if leaf_positions[leaf_index + leaf_pairs] - leaf_positions[leaf_index] > 0:
                exposed_open_leaf_pairs += 1
    if exposed_leaf_pairs > 0:
        return 1 - (exposed_open_leaf_pairs / exposed_leaf_pairs)
    return 0

def _control_point_cross_axis_score(leaf_positions, orthogonal_jaw_positions, leaf_boundaries):
    leaf_pairs = int(len(leaf_positions) / 2)
    axis_crossing_leaf_pairs = 0
    exposed_open_leaf_pairs = 0
    for leaf_index in range(leaf_pairs):
        if leaf_boundaries[leaf_index + 1] >= orthogonal_jaw_positions[0] \
                and leaf_boundaries[leaf_index] <= orthogonal_jaw_positions[1]:
            if leaf_positions[leaf_index + leaf_pairs] - leaf_positions[leaf_index] > 0:
                exposed_open_leaf_pairs += 1
                if leaf_positions[leaf_index + leaf_pairs] < 0 or leaf_positions[leaf_index] > 0:
                    axis_crossing_leaf_pairs += 1
    if exposed_open_leaf_pairs > 0:
        return axis_crossing_leaf_pairs / exposed_open_leaf_pairs
    return 0

def _control_point_jaw_positions(control_point, jaw_axis='X'):
    """ Return the jaw positions for specified control point, for the corresponding jaw.

    Args:
        control_point (Dataset): The control point containing leaf position data.
        jaw_axis (str): The axis ('X' or 'Y') of the jaw of interest.
    """
    for beam_limiting_device_position in control_point.BeamLimitingDevicePositionSequence:
        if 'MLC' not in beam_limiting_device_position.RTBeamLimitingDeviceType:
            if jaw_axis in beam_limiting_device_position.RTBeamLimitingDeviceType:
                return beam_limiting_device_position.LeafJawPositions

def _control_point_leaf_positions(control_point):
    """ Return the leaf positions for specified control point.

    Args:
        control_point (Dataset): The control point containing leaf position data.
    """
    for beam_limiting_device_position in control_point.BeamLimitingDevicePositionSequence:
        if 'MLC' in beam_limiting_device_position.RTBeamLimitingDeviceType:
            return beam_limiting_device_position.LeafJawPositions

def _control_point_leaf_sequence_variability(leaf_positions, orthogonal_jaw_positions, leaf_boundaries):
    """ Return the leaf sequence variability for control point.

    Args:
        leaf_positions (np.array): The leaf positions for the control point.
        orthogonal_jaw_positions (np.array): The positions of the orthogonal jaws.
        leaf_boundaries (np.array): The edge positions of leaf pairs, orthogonal to motion.
    """
    leaf_pairs = int(len(leaf_positions) / 2)
    exposed_open_leaf_pairs = 0
    leaf_positions_left = []
    leaf_positions_right = []
    delta_left = []
    delta_right = []
    for leaf_index in range(leaf_pairs - 1):
        if leaf_boundaries[leaf_index + 1] >= orthogonal_jaw_positions[0] \
                and leaf_boundaries[leaf_index] <= orthogonal_jaw_positions[1]:
            # note that paper is not clear on whether leaf pair must be open
            leaf_gap = leaf_positions[leaf_index + leaf_pairs] - leaf_positions[leaf_index]
            if leaf_gap > 0:
                exposed_open_leaf_pairs += 1
                leaf_positions_left.append(leaf_positions[leaf_index])
                leaf_positions_right.append(leaf_positions[leaf_index + leaf_pairs])
                delta_left.append(leaf_positions[leaf_index] - leaf_positions[leaf_index + 1])
                delta_right.append(leaf_positions[leaf_index + leaf_pairs] -
                                   leaf_positions[leaf_index + leaf_pairs + 1])
    min_max_left = max(leaf_positions_left) - min(leaf_positions_left)
    min_max_right = max(leaf_positions_right) - min(leaf_positions_right)
    lsv_left = sum(abs(min_max_left - np.array(delta_left)))
    lsv_right = sum(abs(min_max_right - np.array(delta_right)))
    return (lsv_left / (exposed_open_leaf_pairs * min_max_left)) * \
           (lsv_right / (exposed_open_leaf_pairs * min_max_right))

def _control_point_mean_asymmetry_distance(leaf_positions, orthogonal_jaw_positions, leaf_boundaries):
    """ Return the mean asymmetry distance for the specified control point.

    Args:
        leaf_positions (np.array): The leaf positions for the control point.
        orthogonal_jaw_positions (np.array): The positions of the orthogonal jaws.
        leaf_boundaries (np.array): The edge positions of leaf pairs, orthogonal to motion.
    """
    leaf_pairs = int(len(leaf_positions) / 2)
    exposed_open_leaf_pairs = 0
    total_asymmetry_distance = 0
    for leaf_index in range(leaf_pairs):
        if leaf_boundaries[leaf_index + 1] >= orthogonal_jaw_positions[0] \
                and leaf_boundaries[leaf_index] <= orthogonal_jaw_positions[1]:
            leaf_gap = leaf_positions[leaf_index + leaf_pairs] - leaf_positions[leaf_index]
            leaf_sum = leaf_positions[leaf_index + leaf_pairs] + leaf_positions[leaf_index]
            if leaf_gap > 0:
                exposed_open_leaf_pairs += 1
                total_asymmetry_distance += abs(leaf_sum / 2)
    if exposed_open_leaf_pairs > 0:
        return total_asymmetry_distance / exposed_open_leaf_pairs
    return 0

def _control_point_orthogonal_jaw_positions(control_point):
    """ Return the jaw positions for specified control point, for the jaws orthogonal to leaves.

    Args:
        control_point (Dataset): The control point containing leaf position data.
    """
    orthogonal = {'X': 'Y', 'Y': 'X'}
    for beam_limiting_device_position in control_point.BeamLimitingDevicePositionSequence:
        if 'MLC' in beam_limiting_device_position.RTBeamLimitingDeviceType:
            mlc_type = beam_limiting_device_position.RTBeamLimitingDeviceType.replace('MLC', '')
            return _control_point_jaw_positions(control_point, orthogonal[mlc_type])

def _control_point_running_jaw_positions(control_point):
    """ Return the jaw positions for specified control point, for the jaws parallel to leaves.

    Args:
        control_point (Dataset): The control point containing leaf position data.
    """
    running = {'X': 'X', 'Y': 'Y'}
    for beam_limiting_device_position in control_point.BeamLimitingDevicePositionSequence:
        if 'MLC' in beam_limiting_device_position.RTBeamLimitingDeviceType:
            mlc_type = beam_limiting_device_position.RTBeamLimitingDeviceType.replace('MLC', '')
            return _control_point_jaw_positions(control_point, running[mlc_type])

def _control_point_small_aperture_score(leaf_positions, orthogonal_jaw_positions, leaf_boundaries, aperture):
    """ Return the number of open leaf pair separations less than specified aperture threshold.

    Args:
        leaf_positions (np.array): The leaf positions for the control point.
        orthogonal_jaw_positions (np.array): The positions of the orthogonal jaws.
        leaf_boundaries (np.array): The edge positions of leaf pairs, orthogonal to motion.
        aperture (float): The threshold defining a small aperture.
    """
    leaf_pairs = int(len(leaf_positions) / 2)
    exposed_open_leaf_pairs = 0
    small_aperture_leaf_pairs = 0
    for leaf_index in range(leaf_pairs):
        if leaf_boundaries[leaf_index + 1] >= orthogonal_jaw_positions[0] \
                and leaf_boundaries[leaf_index] <= orthogonal_jaw_positions[1]:
            leaf_gap = leaf_positions[leaf_index + leaf_pairs] - leaf_positions[leaf_index]
            if leaf_gap > 0:
                exposed_open_leaf_pairs += 1
                if leaf_gap < aperture:
                    small_aperture_leaf_pairs += 1
    if exposed_open_leaf_pairs > 0:
        return small_aperture_leaf_pairs / exposed_open_leaf_pairs
    return 0

def _control_point_weight(beam, control_point_index):
    """ Determine the fractional differential meterset weight for specified control point.

    Args:
        beam (Dataset): The beam containing the control point for which weight is to be calculated.
        control_point_index (int): The index of the control point dataset in the control point sequence.
    """
    if control_point_index == 0:
        return beam.ControlPointSequence[0].CumulativeMetersetWeight / beam.FinalCumulativeMetersetWeight
    else:
        return (beam.ControlPointSequence[control_point_index].CumulativeMetersetWeight -
                beam.ControlPointSequence[control_point_index - 1].CumulativeMetersetWeight) \
               / beam.FinalCumulativeMetersetWeight

def _has_multi_leaf_collimator(beam):
    """ Determine if beam has multi-leaf collimator device type.

    Args:
        beam (Dataset): The beam possibly containing multi-leaf collimator device type.
    """
    for beam_limiting_device in beam.BeamLimitingDeviceSequence:
        if 'MLC' in beam_limiting_device.RTBeamLimitingDeviceType:
            return True
    return False

def _leaf_position_boundaries(beam):
    """ Determine leaf position boundaries for MLC system specified in plan.

    Args:
        beam (Dataset): The beam containing the MLC description.
    """
    for beam_limiting_device in beam.BeamLimitingDeviceSequence:
        if 'MLC' in beam_limiting_device.RTBeamLimitingDeviceType:
            return beam_limiting_device.LeafPositionBoundaries

def _maximum_leaf_separations(beam):
    """ Return the maximum leaf separations for each leaf pair over the entire beam.

    Args:
        beam (Dataset): The beam containing control points with leaf positions.
    """
    # pre-conditions
    if not isinstance(beam, pydicom.dataset.Dataset):
        raise TypeError('Beam is not valid type.')
    if not _has_multi_leaf_collimator(beam):
        raise ValueError('Beam has no multi-leaf collimator.')
    number_leaf_pairs = len(_leaf_position_boundaries(beam)) - 1
    leaf_separations = np.array([0.0] * number_leaf_pairs)
    leaf_pair_indices = range(number_leaf_pairs)
    for control_point in beam.ControlPointSequence:
        leaf_positions = _control_point_leaf_positions(control_point)
        if leaf_positions is None:
            continue  # control point does not restate MLC positions
        for index in leaf_pair_indices:
            curr_aperture = leaf_positions[index + number_leaf_pairs] - leaf_positions[index]
            if curr_aperture > leaf_separations[index]:
                leaf_separations[index] = curr_aperture
    return leaf_separations

def _weighted_metric(beam, metric):
    """ Return the MU weighted average of a control point metric.

    Args:
        beam (Dataset): The beam containing control points.
        metric (string): The metric to be calculated.
    """
    result = 0
    previous_cumulative_weight = 0.0
    total_cumulative_weight = beam.FinalCumulativeMetersetWeight
    leaf_positions = []
    orthogonal_jaw_positions = []
    leaf_boundaries = _leaf_position_boundaries(beam)
    for control_point in beam.ControlPointSequence:
        differential_weight = (control_point.CumulativeMetersetWeight - previous_cumulative_weight) / \
                              total_cumulative_weight
        previous_cumulative_weight = control_point.CumulativeMetersetWeight
        control_point_leaf_positions = _control_point_leaf_positions(control_point)
        control_point_orthogonal_jaw_positions = _control_point_orthogonal_jaw_positions(control_point)
        if control_point_leaf_positions is not None:
            leaf_positions = control_point_leaf_positions
        if control_point_orthogonal_jaw_positions is not None:
            orthogonal_jaw_positions = control_point_orthogonal_jaw_positions
        if differential_weight > 0:
            result += differential_weight * _control_point_metric_list[metric](leaf_positions,
                                                                               orthogonal_jaw_positions,
                                                                               leaf_boundaries)
    return result

# define metric dictionary
_metric_list = {'AAV': aperture_area_variability,
                'CLS': closed_leaf_score,
                'CAS': cross_axis_score,
                'LSV': leaf_sequence_variability,
                'MAD': mean_asymmetry_distance,
                'MCS': modulation_complexity_score}

# define control point metric dictionary
_control_point_metric_list = {'CLS': _control_point_closed_leaf_score,
                              'CAS': _control_point_cross_axis_score,
                              'LSV': _control_point_leaf_sequence_variability,
                              'MAD': _control_point_mean_asymmetry_distance}

import pydicom
from complexity import plan_complexity
ds = pydicom.read_file('rtplan.dcm')
results = plan_complexity(ds, ['MCS'])
print(results)
      Name   MU      Dose       MCS
1   01 CCW  236   0.58091   0.32281
2    02 CW  263   0.71527  0.267371
3  03 CCW2  208  0.528058  0.385578
4   04 CCW  431  0.954637  0.272984
5    05 CW  407  0.830187  0.229794


yograjyog commented 4 years ago

Thanks a lot for spending your time and helping me. I am able to calculate MCS alone; the remaining metrics are throwing errors. As you suggested, I will downgrade pydicom and give it a try, but is there a particular version I need to downgrade to? Thanks again for your support.

cutright commented 4 years ago

There are a number of issues with this code; I'm afraid doing a pull request will open a giant can of worms. It also looks to me like the code was only partially updated for pydicom 1.0. He has import pydicom statements (instead of import dicom), which is a 1.0 change, but I have to downgrade to 0.9.9 for the previous tag names I mentioned to be available.

My suggestion is to contact the developer directly and see what environment he used. It looks like he has relatively good coding practices, my guess is that it works very nicely if you have the right environment. Although, you may find some TPS-specific issues still. For example, my Pinnacle step-N-shoot plans crash with MCS:
RuntimeWarning: divide by zero encountered in double_scalars
return (lsv_left / (exposed_open_leaf_pairs * min_max_left)) * (lsv_right / (exposed_open_leaf_pairs * min_max_right))
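For what it's worth, the division by zero happens when every exposed leaf tip on a bank sits at the same position, so min_max_left or min_max_right is zero. A simplified sketch of a guard (this mirrors the quoted formula but is not the author's code, and returning 1.0 for a uniform bank is just one plausible convention):

```python
import numpy as np

def guarded_lsv(left_positions, right_positions):
    """Simplified LSV sketch with a guard for uniform leaf banks.

    Mirrors the quoted formula, but returns 1.0 (an assumed convention)
    when a bank has no positional spread, instead of dividing by zero.
    """
    left = np.asarray(left_positions, dtype=float)
    right = np.asarray(right_positions, dtype=float)
    n = len(left)
    min_max_left = left.max() - left.min()
    min_max_right = right.max() - right.min()
    if n < 2 or min_max_left == 0 or min_max_right == 0:
        return 1.0  # uniform bank: treat as no positional variability
    # adjacent-leaf differences, as in the quoted delta_left / delta_right
    delta_left = left[:-1] - left[1:]
    delta_right = right[:-1] - right[1:]
    lsv_left = np.abs(min_max_left - delta_left).sum()
    lsv_right = np.abs(min_max_right - delta_right).sum()
    return (lsv_left / (n * min_max_left)) * (lsv_right / (n * min_max_right))
```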

I wish I had the bandwidth to dive in deeper right now.

I also posted my changes to his code on a branch here.

cutright commented 4 years ago

@sbcrowe Hi Scott, maybe you can help us out here? Would you mind posting your pip list for pyradlib (assuming you used pip)?

yograjyog commented 4 years ago

Thanks for the suggestion. I was also having a problem with MLCA:

While running MLCA, it shows that it is analyzing, but it does not display the results. (My environment is Ubuntu.)

cutright commented 4 years ago

Can you be more specific with some code or terminal output?

yograjyog commented 4 years ago

I tried all the commands you mentioned in the MLCA readme.

yograjyog commented 4 years ago

I tried all the commands you mentioned in the MLCA readme.

Right now I don't have access to the system; maybe tomorrow I will send a screenshot.

sbcrowe commented 4 years ago

@sbcrowe Hi Scott, maybe you can help us out here? Would you mind posting your pip list for pyradlib (assuming you used pip)?

That code was initially written for an earlier version of pydicom, and I no longer have the environment. Even if properly updated for pydicom 1.0, I'm confident the code will fail inelegantly with variations in DICOM attributes from different planning systems, different MLC systems, etc. that I wasn't expecting. That is, I don't think it is ready for use by someone not in a position to modify it to work for their use case.
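One pattern that can soften that failure mode (a hypothetical helper, not part of pyradlib) is to resolve whichever attribute spelling a given pydicom version or planning-system export actually provides:

```python
def first_present(ds, *names):
    """Return the first attribute found on a pydicom-style Dataset, else None."""
    for name in names:
        value = getattr(ds, name, None)
        if value is not None:
            return value
    return None

# e.g., tolerate both pydicom 1.0+ and pre-1.0 keyword spellings:
# control_points = first_present(beam, 'ControlPointSequence', 'ControlPoints')
```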

SimonBiggs commented 4 years ago

@sbcrowe would there be any interest in potentially merging this work into either DVHA-MLCA or pymedphys-labs (https://github.com/pymedphys/pymedphys/tree/master/pymedphys/labs)?

yograjyog commented 4 years ago

Can you be more specific with some code or terminal output?

[Screenshot from 2020-06-30 11-26-40]

cutright commented 4 years ago

And there’s no CSV file created?

yograjyog commented 4 years ago

And there’s no CSV file created?

No, it is not creating the CSV.

yograjyog commented 4 years ago

@sbcrowe would there be any interest in potentially merging this work into either DVHA-MLCA or pymedphys-labs (https://github.com/pymedphys/pymedphys/tree/master/pymedphys/labs)?

@SimonBiggs Is there any way to try this out?

SimonBiggs commented 4 years ago

Hi @yograjyog,

@db72-git recently committed some plan complexity tools to the labs of pymedphys:

https://github.com/pymedphys/pymedphys/tree/master/pymedphys/labs/plancomplexity

He would be the best person to discuss with regarding tools that are already in the pymedphys labs.

Regarding having @sbcrowe's code merged into the labs, I do require that any code merged into the labs has someone designate themselves as responsible for that code. A "Code Owner" so to speak. Even though @sbcrowe's code is open source, I won't be merging it into pymedphys without his permission as I only want code merged into pymedphys that someone is willing to take a degree of responsibility for.

So, in short, without some more work undergone, no, there is not currently an easy way for you to try it out.

Cheers, Simon

db72-git commented 4 years ago

Hi all,

Sorry, I have been super busy lately, so I have not got round to looking at this any further yet. I did indeed upload some complexity code to the pymedphys labs. The next step is to make it runnable from the pymedphys command line, which I have not done yet. So there are a few more stages, I reckon, before it's anything you could try out easily, but we're getting there.

It's also pretty experimental at the moment and I can only guarantee it works with Pinnacle plans but will be happy to modify it to work for others if it needs it.

cutright commented 4 years ago

@yograjyog The fastest way to resolve this is if you could provide some anonymized files that exhibit this issue. It's very peculiar that no CSV is created, it should at least create a CSV with only the column headers. But perhaps there's some OS permission issues?

There is only one try/except clause (shown below), so the code is not crashing somewhere and being ignored. https://github.com/cutright/DVHA-MLCA/blob/5d45edf8dd21781d8dfe721342ac7fa8721b0e22/mlca/main.py#L47-L52

In the meantime, I'm working on a way to more easily call the code from a Python console as opposed to calling it from a terminal, and adding a verbose mode so the results can be printed directly to the terminal.
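If it does turn out to be a permissions problem, a quick check (hypothetical snippet, not part of MLCA) would be:

```python
import os

def output_dir_writable(path="."):
    """Check whether the given directory is writable; a read-only working
    directory would explain a silently missing output CSV."""
    return os.access(path, os.W_OK)

print(output_dir_writable())
```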

yograjyog commented 4 years ago


I have uninstalled and reinstalled, and now it is creating the .csv file. Thanks for your support.