chh105 opened this issue 3 years ago
I had a question about the various VMAT branches (i.e. dev_VMAT_merge, dev_VMAT, dev_4DVMAT, etc.). I'm hoping to pick one to use for a research project and was wondering which of these branches would be the best option. It seems dev_4DVMAT is the most up-to-date (according to what I was told in a previous GitHub post), though its file structure is completely different from the master branch of matRad. Other branches like dev_VMAT_merge have file structures similar to the master branch but may not be as up-to-date (?).
Is there a working branch for VMAT that you guys would recommend? As always, thanks for your help and advice!
Indeed, dev_4DVMAT is the most up-to-date branch. As you point out, its file and code structure is quite different from the master, since I started working on it a few years back and unfortunately did not keep it in sync with the master.
I believe that the dev_VMAT_merge branch is still a complete implementation of VMAT, but there were some tweaks that I made to the optimization algorithm in dev_4DVMAT that aren't reflected in dev_VMAT_merge.
Hopefully you can follow what I said and make a decision based on that information. If you're doing a research project and need something now, I would suggest dev_VMAT_merge for a cleaner and more consistent implementation.
I would like to (finally!) merge all of these branches into one, and then into the master. Unfortunately I can't give you a timeframe on this, mostly because I'm currently doing a clinical residency. @wahln, maybe this is a good reminder (for me mostly) that it's time to take a look at this again?
Hi everybody, yes, it would be a fantastic idea to finalize the merge in #377, which attempts to bring both branches together. I think the underlying branch seems stable (i.e. it passes the tests) but shows some weird results. I will allocate some time in April to get back into it; maybe I can figure out what's going wrong, so that in the end you, @eric11210, have a better foundation for merging your new developments.
Thanks for the quick replies, and I hope Eric's residency is going well. As a follow-up question, do you have any recommendations on which settings for VMAT may be more stable? For example, would the settings in this file be a good starting point? example from dev_VMAT_merge
Thanks @chh105!
Yes, that would be a good place to start (that was the intention behind that script 😄).
@wahln, what weird results do you mean? Is it related to the optimization, or something else?
On the dev_VMAT_merge branch, there are some things going wrong in the aperture optimization (see my comments in #377). It does not produce the same results as dev_VMAT at the moment, and that is (with high certainty) my fault. I will check it out again soon; I am quite confident I can figure out the remaining issues.
Hi @wahln, I just wanted to get some help with running the dev_VMAT_merge branch. I've linked the modified prostate phantom (with various constraints/objectives) and the photon VMAT script. When I tried running the photon VMAT script, it would crash my MATLAB instance whenever it attempted to run DAO. I was wondering if you had experienced similar issues or had any insights as to why this is happening.
@wahln (sorry, I don't mean to spam): After doing more testing, I found that my MATLAB will crash whenever DAO is run with constraints (not just in the VMAT case). I've linked the DAO example script and the prostate phantom (with various constraints) to reproduce the crash. Is this something that you've experienced as well, and if so, is there some workaround to incorporate constraints while performing DAO? Thanks for your help.
Actually, that shouldn't happen; thanks for reporting this bug. I will try to look at it ASAP, but it might take a few days.
Sure, I just wanted to make sure it wasn't only happening on my end. Thanks for taking the time to look into it.
Just as a quick update, I tried switching the optimizer to fmincon's interior point version instead of IPOPT and got the following error trace:
Index exceeds the number of array elements (914).

Error in matRad_OptimizationProblemDAO/matRad_constraintJacobian (line 101)
    .* jacob_dos_bixel(:,apertureInfo.bixelIndices(apertureInfo.totalNumOfShapes+1:apertureInfo.totalNumOfShapes+apertureInfo.totalNumOfLeafPairs*2)) ./ ...

Error in matRad_OptimizerFmincon/fmincon_nonlconWrapper (line 105)
    cJacob = optiProb.matRad_constraintJacobian(x,dij,cst);

Error in matRad_OptimizerFmincon>@(x)obj.fmincon_nonlconWrapper(x,optiProb,dij,cst) (line 72)
    @(x) obj.fmincon_nonlconWrapper(x,optiProb,dij,cst),...

Error in fmincon (line 650)
    [ctmp,ceqtmp,initVals.gnc,initVals.gnceq] = feval(confcn{3},X,varargin{:});

Error in matRad_OptimizerFmincon/optimize (line 67)
    [obj.wResult,fVal,exitflag,info] = fmincon(@(x) obj.fmincon_objAndGradWrapper(x,optiProb,dij,cst),...

Error in matRad_directApertureOptimization (line 126)
    optimizer = optimizer.optimize(apertureInfo.apertureVector,optiProb,dij,cst);

Error in matRad_example3_photonsDAO (line 102)
    resultGUI = matRad_directApertureOptimization(dij,cst,resultGUI.apertureInfo,resultGUI,pln);

Caused by:
    Failure in initial nonlinear constraint function evaluation. FMINCON cannot continue.
This error is occurring in the dev_VMAT_merge branch. I don't think this error occurs in earlier branches (e.g. dev_VMAT, master, etc.), but I wasn't able to test them fully due to errors in the example scripts.
Thanks for the error message. From that and some other quick tests, I think the error is caused by a mistake in how the leaf constraints are joined with the dose constraints in the Jacobian computation. I will get to the bottom of this soon.
I have started to look into the issue on the master branch, where using additional constraints did not work. I think I fixed it in #506. I will check this out in connection with the VMAT issues next.
Great, thank you so much! As far as I can tell, those changes fixed the crash/error during DAO. I was also able to run through VMAT example script #8 without any crashes.
Great, thanks for trying it out so quickly!
As another quick update, I just realized that the fixes were applied to the master branch, which doesn't support VMAT (so running example 8 was just running DAO with many angles, not VMAT). Sorry if that causes any confusion.
I just tried running the dev_VMAT_merge branch with the updated matRad_constraintFunctions.m and matRad_getJacobianStructure.m files. It seems to still crash with the same error trace as before. I'm happy to continue testing if it helps!
Yeah, I guess I need to merge this carefully into dev_VMAT_merge and also make some adaptations for VMAT. I will start doing this and ask for a test run from you as soon as I have something that seems to work. Thanks again.
First of all, thanks to you @chh105. Your investigation of the VMAT behavior made me find some of the things that were wrong with the VMAT merge. I've updated the dev_VMAT_merge branch with my fixes for now; it should at least run through with dose constraints. Maybe you want to check whether this works as expected on your system, too. There's still something off in the VMAT direct aperture optimization: when I optimize without the preconditioner, the solution makes sense but is far from "optimal" with the standard optimization settings. This is what you would expect, since it just converges too slowly without preconditioning. Turning the preconditioner on gives weird results; it has to do with some of the reorderings I did in the code. I will continue my investigations...
By the way: the problem was the Jacobian computation, which had some old code in it and did not rely on the pre-computed beamlet/aperture Jacobian the way the gradient computation does. The same issue exists in the original dev_VMAT branch.
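As a toy illustration of that dependency chain (random stand-in data; none of these variable names are matRad internals):

```matlab
% Toy chain rule, purely illustrative: constraints c depend on dose d = D*w,
% and bixel weights w = B*x depend on the aperture vector x via the
% pre-computed beamlet/aperture Jacobian B.
nVox = 50; nBixel = 20; nApertureVars = 8; nConstr = 3;
D          = sprand(nVox, nBixel, 0.1);          % stand-in for dij.physicalDose{1}
B          = sprand(nBixel, nApertureVars, 0.3); % beamlet/aperture Jacobian
cJacobDose = rand(nConstr, nVox);                % dc/dd from the constraint definitions
cJacob     = cJacobDose * D * B;                 % dc/dx = (dc/dd) * D * B
```

Gradient and constraint Jacobian should reuse the same pre-computed B; the bug was old code in the Jacobian path that bypassed it.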
@wahln Thanks again for the updates. Your explanation about the jacobian computation makes sense and is consistent with the errors I was getting. I'm really excited to try out the fixes, and I'll let you know how it goes!
I'm happy to report that VMAT DAO with dose constraints now runs without crashing. Here are some of my initial impressions: after quickly running some tests on the prostate phantom, I've noticed that the objective function value, constraint violation (primal infeasibility), dual infeasibility, etc. are all much higher than normal (the objective function value starts at 1e7 instead of the usual ~1e2).
You mentioned that this might be due to something happening with the preconditioner. I just started reading through the relevant paper (noncoplanar VMAT). As an aside, I wish I had known about this paper earlier; it's really well done and would have saved me a lot of time trying to figure out noncoplanar BAO. Anyway, it seems that the preconditioning factors are used to scale the leaf and aperture weight gradients. Do you think the error is occurring in the computation of these preconditioning factors? Based on the large values for the objective function, constraint violation, etc., I was wondering if it might be due to the starting solution for DAO.
If it helps, here is the DVH after running VMAT DAO. The prescription dose was 2.267 Gy, but the value for D95% was 0.0065 Gy.
Looks to me like the Jacobi preconditioning factors are not being used correctly in the code. They represent a factor between the actual aperture weights and the elements of the optimization vector corresponding to those weights, and they can easily get mixed up (probably due to the way I coded it...).
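As a toy illustration of that mapping (made-up values, not matRad code):

```matlab
% With Jacobi preconditioning the optimizer works on scaled variables x,
% and the actual aperture weights are recovered as w = p .* x. Using x
% where w is expected (or vice versa) silently corrupts the optimization.
p = [2; 0.5; 1.5];   % made-up preconditioning factors, one per aperture shape
x = [0.5; 4; 2];     % variables as seen by the optimizer
w = p .* x;          % actual aperture weights used in the dose computation
```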
Yes, that is most likely the reason. I reorganized some parts of the code to make some things possible within the new optimization structure, but I may have jumped to conclusions in some parts. Two questions:
@wahln The DVH result was from optimization with DVH constraints, min/max dose constraints, and EUD constraints (prostate phantom with constraints)
@wahln I intended the preconditioner to be used for DAO generally, i.e. not restricted to VMAT only. I did do some functionality testing on the original code without VMAT, so it should work.
I think I found the bug. On the one hand, it was a stupid merging mistake by me: I accidentally omitted the dij.scaleFactor coming from the preconditioner in the backprojection. On the other hand, I am not sure I want the scaling factor to be that intrusive, i.e., written into the dij, because it is actually part of the optimization and not of the dij itself. But maybe that's the smoothest way; I have to think about it... For now, the changes are on the dev_VMAT_merge branch, and I think we now have a merged version that at least runs correctly in the default setup.
@wahln That's probably it. Agreed, it makes more sense to separate it from the dij and put it in an optimization structure. I think I did it this way since this factor was always multiplying elements of the dij, but it can lead to confusion because it's not clear from the name and location exactly what it's doing.
I ran some tests with the updated dev_VMAT_merge on the same prostate phantom with constraints:
VMAT ran without crashing, but the objective function value, constraint violation, etc. during DAO were still high. I got the following DVH, which shows improvement over the previous one but still violates the various constraints.
Regular DAO (example script 3) crashed with the same error trace as before:
Index exceeds the number of array elements (1196).

Error in matRad_OptimizationProblemDAO/matRad_constraintJacobian (line 103)
    .* jacob_dos_bixel(:,apertureInfo.bixelIndices(apertureInfo.totalNumOfShapes+1:apertureInfo.totalNumOfShapes+apertureInfo.totalNumOfLeafPairs*2)) ./ ...

Error in matRad_OptimizerFmincon/fmincon_nonlconWrapper (line 105)
    cJacob = optiProb.matRad_constraintJacobian(x,dij,cst);

Error in matRad_OptimizerFmincon>@(x)obj.fmincon_nonlconWrapper(x,optiProb,dij,cst) (line 72)
    @(x) obj.fmincon_nonlconWrapper(x,optiProb,dij,cst),...

Error in fmincon (line 650)
    [ctmp,ceqtmp,initVals.gnc,initVals.gnceq] = feval(confcn{3},X,varargin{:});

Error in matRad_OptimizerFmincon/optimize (line 67)
    [obj.wResult,fVal,exitflag,info] = fmincon(@(x) obj.fmincon_objAndGradWrapper(x,optiProb,dij,cst),...

Error in matRad_directApertureOptimization (line 130)
    optimizer = optimizer.optimize(apertureInfo.apertureVector,optiProb,dij,cst);

Error in matRad_example3_photonsDAO (line 103)
    resultGUI = matRad_directApertureOptimization(dij,cst,resultGUI.apertureInfo,resultGUI,pln);

Caused by:
    Failure in initial nonlinear constraint function evaluation. FMINCON cannot continue.
Thanks again @chh105 for your enduring testing efforts!
So I could confirm the DAO problem, which was due to a change in the dimensionality of the bixelWeight variable. I pushed an update and now it should work. However, so far it only seems to work reliably with fmincon. With IPOPT, there is some mismatch between the pre-computed Jacobian structure and the evaluation, i.e., elements that were defined as zero when setting up the structure used by IPOPT are filled with non-zero values during the Jacobian computation. I still need to figure out why.
I could not confirm the continuing VMAT problem. What dose constraints are you using? It might just be that the problem is barely feasible.
Of course, I'm happy to help!
For the DAO problem, I'll try running the updated version today with fmincon to see if it works.
For VMAT, I've linked the exact constraints I used here. The constraint setup seems to be feasible when doing FMO. Would it be incorrect to assume that a problem that is feasible for FMO is also feasible for VMAT DAO? Also, when testing this issue with IPOPT, don't forget to set the constraint violation tolerance to a small value like 1e-3 (by default it's set to 1e10).
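A minimal sketch of that setting, assuming the optimizer object exposes the IPOPT options directly (the wrapper class and options field names are assumptions here; constr_viol_tol and acceptable_constr_viol_tol are the underlying IPOPT option names):

```matlab
% Hedged sketch: tighten IPOPT's constraint violation tolerances before DAO.
pln.propOpt.optimizer = 'IPOPT';                      % select the IPOPT wrapper
optimizer = matRad_OptimizerIPOPT;                    % assumed wrapper class
optimizer.options.constr_viol_tol            = 1e-3;  % hard tolerance
optimizer.options.acceptable_constr_viol_tol = 1e-3;  % instead of the 1e10 default
```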
As a quick update, I just verified that both VMAT DAO and regular DAO for the dev_VMAT_merge branch run without crashing. It seems that there are constraint violation issues when performing VMAT DAO and regular DAO. I've saved two configurations of the prostate phantom as ".mat" files in order to reproduce these issues.
I wouldn't say that feasibility for DAO/VMAT can be assumed just because FMO is feasible. The segmentation into shapes places quite a few restrictions on how doses can be achieved. But it should work for simple constraints (at least that's how I tested it); for example, a single lower dose bound in the PTV could be satisfied by scaling up a shape weight. I will need to find some time to do some more in-depth tests...
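As a toy version of that scaling argument (stand-in data; ptvIdx and dMin are made up):

```matlab
% Dose is linear in the aperture weights, so a single lower dose bound in
% the PTV can always be met by uniformly scaling the weights up.
D      = 0.1*rand(100, 10);        % stand-in dose-influence matrix
w      = rand(10, 1);              % current aperture/shape weights
ptvIdx = 1:20;                     % assumed PTV voxel indices
dMin   = 2.0;                      % assumed lower dose bound [Gy]
dPTV   = D(ptvIdx, :) * w;         % current PTV dose
s      = max(1, dMin / min(dPTV)); % smallest uniform up-scaling factor
w      = s * w;                    % now min(D(ptvIdx,:) * w) >= dMin
```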
That makes sense. Thanks for the explanation and for looking into this issue
I had some time today to run some more tests on the dev_VMAT_merge branch. From what I can tell, DAO with only objectives seems to work fine. However, DAO with dose objectives & constraints does not work as intended, even with simple constraints.
The first DVH below is from DAO with quadratic objectives for the ptv and body (seems to work fine):
The second DVH below is from DAO with quadratic objectives for the ptv and body, as well as a DVH constraint for the ptv (not working):
I'll try repeating these tests on the master branch to see if it's just an issue with the dev_VMAT_merge branch
Hi @wahln and @eric11210 , hope everything is going well! Just wanted to check in with you guys since I haven't heard back in a couple weeks.
I tested FMO, FMO+DAO, and VMAT (FMO+DAO) with very simple DVH constraints (D95 > prescription dose). All of these tests on the dev_VMAT_merge branch had very large constraint violations. It seems to me that there is an issue with the dose constraints implementation specific to the dev_VMAT_merge branch: these issues did not occur on the master branch, so I'm thinking that some changes made in this particular branch are causing dose constraints not to work properly. When you have the time, would it be possible to get some more help with this issue?
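For reference, the test constraint would be added to the cst roughly like this — a hedged sketch, since the matRad_MinMaxDVH constructor arguments are assumed from memory; check the class under optimization/+DoseConstraints before relying on it:

```matlab
% Assumed: require >= 95% of the PTV volume to receive the prescription dose.
presDose = 2.267; % prescription per fraction [Gy], as in the earlier test
ptvRow   = 6;     % assumed row of the PTV in the cst cell array
cst{ptvRow,6}{end+1} = DoseConstraints.matRad_MinMaxDVH(presDose, 95, 100);
```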
Hi again! I realize you both are very busy, so I'll try to keep it brief. I just had a couple conceptual questions that I'm hoping to get answered.
It seems to me that the master branch of matRad supports FMO, leaf sequencing, and DAO, while the dev_VMAT_merge branch has weird issues when performing optimization with dose constraints. Would it be a bad idea to perform FMO -> sequencing -> DAO manually, with angles spaced every 2 degrees, instead of using the dev_VMAT_merge branch? Conceptually, what would be the main differences between manually running these components (FMO, sequencing, DAO) and running the dev_VMAT_merge branch?
Hi! Sorry for keeping quiet for quite some time, but I will get back to looking into the VMAT branch within the next few days.
I guess there is still some issue in the constraint Jacobian computation. I think I will try to copy the implementation from the master branch to validate this and hopefully figure out what is going wrong. For the VMAT setup, I recently tested min/max constraints, which seemed to work on a simple phantom setup after many iterations (the convergence was slow and somewhat weird). For DAO without VMAT, there still seem to be issues with IPOPT (related to the structure of the Jacobian, which points to the mentioned issues in the constraint Jacobian computation), but the solution seemed sound with fmincon. This whole variation in behavior that I induced with my merge efforts makes it really difficult to pin down the issues, sorry...
While I think that @eric11210 would be able to give you a more detailed answer, in principle that is also the workflow implemented in the VMAT branch. However, doing this without adaptation on the master branch will probably result in non-deliverable VMAT plans, since you need to factor in the maximum leaf speed and also make sure you have connected, single shapes for DAO. But if you find a workaround, you might get something.
@wahln What you've said is correct. @chh105, if you perform FMO, sequencing, and DAO without the VMAT option turned on, but with e.g. a 2-degree angle spacing, essentially what you'll get is a very dense IMRT plan with many apertures at each angle. Part of what the VMAT option does is take only a single aperture for each angle and then optimize that one (and then also enforce max leaf speed constraints etc.).
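For concreteness, the manual dense-IMRT workflow on the master branch would look roughly like this (the function calls follow the example scripts discussed above; the five sequencing levels are just a placeholder):

```matlab
% Dense-IMRT stand-in for VMAT on the master branch: no leaf speed limits and
% no single connected aperture per angle, hence not directly deliverable.
pln.propStf.gantryAngles = 0:2:358;                  % 2-degree spacing
pln.propStf.couchAngles  = zeros(size(pln.propStf.gantryAngles));
pln.propStf.numOfBeams   = numel(pln.propStf.gantryAngles);
stf       = matRad_generateStf(ct,cst,pln);          % steering information
dij       = matRad_calcPhotonDose(ct,stf,pln,cst);   % dose influence matrix
resultGUI = matRad_fluenceOptimization(dij,cst,pln); % FMO
resultGUI = matRad_siochiLeafSequencing(resultGUI,stf,dij,5); % sequencing (5 levels)
resultGUI = matRad_directApertureOptimization(dij,cst,resultGUI.apertureInfo,resultGUI,pln); % DAO
```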
Thank you both for the explanations. That helps to resolve my previous confusion with the different approaches.
For the VMAT issues, I completely understand that it's very tricky to debug, and I'm grateful that you haven't given up on it. If you need me to run some tests or help out in some way, I would be happy to do so.
Hi again, I just wanted to post an update about a potentially related issue that I've found. I tried running FMO with a single quadratic objective for the PTV on the prostate phantom with 5-9 beams on the latest matRad branch. These results worked as expected (i.e. the PTV dose-volume histogram falls off at the reference dose).
If I then run the same FMO problem (single quadratic objective for the PTV) with a larger number of beams (~30, to match the number used in VMAT), I get the issue shown in the second picture, where there seems to be a large amount of PTV inhomogeneity. Strangely, the final objective function value for the ~30-beam scenario was much lower than for the ~5-9-beam scenarios, yet the ~30-beam dose distribution shows much larger deviations from the reference dose.
Could this be related to the issues with the VMAT branch, since that branch also uses a large number of beams for FMO?
Thanks for continuing the investigation here. I didn't quite stick to my promise of looking more into the Jacobian, but I hope I can do so in August. A few questions from my side:
No worries, I was just trying to keep a log of the things I found while testing various branches. To answer your questions,
As another comment, I did some other tests with the dev_VMAT (not merge) branch, using only quadratic objectives without any constraints. I found that IPOPT converged very quickly (within 100 iterations) and the objective value plot decreased continuously. I think this is expected, since using only quadratic objectives would make this a convex problem, but maybe I'm wrong about that. However, when testing quadratic objectives on the dev_VMAT_merge and hotfixes/daoWithConstraints branches, I did not get the same convergence (the plot of the objective value had long intervals where it would increase).
Thanks for the info! My first guess is that the issue here is the resampling between the CT grid and the 5mm dose grid. The dose grid (5mm in this case) is used for dose calculation and optimization. At the end, the dose is interpolated/extrapolated to match the CT grid again, and the DVH is evaluated on this grid. The problem often lies in the downsampling of the cst masks to a coarser resolution, which is done with nearest neighbors. So the optimization very homogeneously covers the PTV with that many beams (hence the very low objective function value), but may create very large doses in the surrounding tissue, which after resampling to the CT grid shows up as overdose inhomogeneity in the PTV at the original resolution. The best way to test this is to set the dose calculation resolution to the CT resolution.
I don't know if there is a good solution to this resampling issue; I guess it's something to live with when using too-coarse grids. For dosimetric accuracy I would go with <= 3mm for the dose grid.
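For that test, matching the grids would look like this (a small sketch; pln.propDoseCalc.doseGrid.resolution is the field in recent matRad versions, adjust if your branch differs):

```matlab
% Rule out resampling effects by running dose calc on the CT grid itself.
pln.propDoseCalc.doseGrid.resolution.x = ct.resolution.x; % [mm]
pln.propDoseCalc.doseGrid.resolution.y = ct.resolution.y; % [mm]
pln.propDoseCalc.doseGrid.resolution.z = ct.resolution.z; % [mm]
```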
Hope everything is going well! As another update, I've been testing the dev_VMAT branch, this time without any dose constraints. It seems that VMAT works fine (as far as I can tell) on that branch. I'm currently trying to test VMAT planning with multiple arcs, so I'm trying to modify the gantry angle file. What would be a good way to modify this file to incorporate additional arcs? (I'm assuming just adding additional angles to the pln structure would be a bad idea.)
I had another quick question about using the VMAT package (dev_VMAT). I'm currently trying to create a dual-arc VMAT plan, where I created an initial VMAT arc from -180 to 180 (4-degree spacing) and a second arc by copying the initial arc and offsetting it by 362. I'm noticing that the memory usage is exceedingly large (on the order of 100 GB of RAM). Is this typical for VMAT planning, and would it be possible to reduce this memory usage somehow?
Sorry for not replying to you about the double arc thing before. This is something that I have not implemented, so no guarantees that it works.
A couple of things: 1) To answer your memory question, the dij matrix is by far the element that consumes the most memory (a rough estimate follows after this list). By creating a dual-arc VMAT plan in this way, you are essentially doubling the memory requirements. This is probably unnecessary, since 4 degrees is usually enough resolution from the dose calculation point of view. You could probably "trick" the dose calculation functions into using the same dij elements for gantry 0 and gantry 360, for example.
2) If there was no additional modification to the code other than the offset, then technically there will be leaf speed constraints between the end of the first arc and the beginning of the second. This is not really required, but it is probably simplest to leave it as-is.
3) Did you do this offset for all of the angle sets, e.g. gantryAngles, DAOGantryAngles, FMOGantryAngles? BTW, that is a clever trick :)
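To put a rough number on point 1) (a back-of-the-envelope sketch assuming MATLAB's sparse double storage, about 16 bytes per stored element):

```matlab
% Rough dij memory estimate: ~8 bytes per value plus ~8 bytes per row index
% for a sparse double matrix (column pointer overhead is negligible here).
nnzDij = nnz(dij.physicalDose{1});  % stored elements of the single-arc dij
memGB  = nnzDij * 16 / 1e9;         % approximate size in GB
% A second arc computed from scratch roughly doubles this figure.
```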
Hi Eric, thanks for the quick reply.
Also, I recently read through your paper on the continuous VMAT apertures. I noticed that you used a DAOGantryAngle spacing of 4 and a gantryAngle spacing of 2 for head-and-neck cases. Would it be better to use that type of scheme (where you are doubling the VMAT control points) instead of adding another arc?
After thinking about it some more, I think it would be easiest to attack it from the other direction: the fluence. Something like (fluence @ gantry 0) = (fluence @ gantry 0) + (fluence @ gantry 360). This calculation would have to be done in each iteration of the optimizer, and the gradient would probably also have to be modified somehow to account for it. I'm not sure how much digging into the code you've done; if you're stuck on how to do this, let me know and I can give you some pointers.
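As a minimal sketch of that folding idea (wDual, fold, and doseGrad are hypothetical names, not actual matRad variables):

```matlab
% Fold the dual-arc fluence onto the single-arc dij in every iteration, so
% matching angles (e.g. gantry 0 and gantry 360) share the same dij columns.
nB    = dij.totalNumOfBixels;               % bixels of the single-arc dij
wDual = rand(2*nB,1);                       % weights for both arc copies
fold  = @(w) w(1:nB) + w(nB+1:2*nB);        % (fluence@0) + (fluence@360), etc.
dose  = dij.physicalDose{1} * fold(wDual);  % forward projection via shared dij
% Chain rule: the single-arc gradient is simply copied to both halves.
doseGrad = ones(size(dose));                % placeholder d(objective)/d(dose)
gSingle  = dij.physicalDose{1}' * doseGrad; % gradient w.r.t. folded fluence
gDual    = [gSingle; gSingle];              % gradient w.r.t. wDual
```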
Probably a little bit of both, but if I had to guess the impact would be minor. Basically both arcs would be linked, whereas in principle they could be independent, so you are removing a degree of freedom.
Okay good, I think that's the right way of doing it. I guess it probably wouldn't have run any other way.
To your last question, I think what you are doing is correct. If you only double the VMAT control points, you aren't creating an independent arc; instead, you're essentially just interpolating dose calculation control points from the optimization control points. The two schemes are sketched below.
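In terms of the VMAToptions fields used by the dev_VMAT example script, the two schemes would look roughly like this (a sketch; field names quoted from memory, so double-check them against the script):

```matlab
% (a) Doubling the control points on a single arc: denser dose calculation
%     points interpolated between the optimized (DAO) control points.
pln.propOpt.VMAToptions.maxGantryAngleSpacing    = 2;  % dose calc points
pln.propOpt.VMAToptions.maxDAOGantryAngleSpacing = 4;  % optimized apertures
pln.propOpt.VMAToptions.maxFMOGantryAngleSpacing = 28; % FMO sectors
% (b) A genuinely independent second arc instead duplicates all three angle
%     sets (gantryAngles, DAOGantryAngles, FMOGantryAngles) with an offset.
```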
Thanks again for the thoughtful responses. I think for now I'll try messing around some more with the angle spacing, dose resolution, etc. to get things to fit into the capacity of my RAM.
One last thing: I also noticed that sometimes FMO wouldn't converge when there were a large number of beams (i.e. when the FMO spacing was set below 28 degrees, to something like 14). Is this also something that you noticed before?