Closed by MichaelCurrie 7 years ago
I did some additional profiling of `compute_skeleton_and_widths`, the function taking up 85% of the running time, and found this (run on my laptop, so the absolute speeds won't be comparable, but you can see the relative proportion of running time at least):
```
'h__updateEndsByWalking': 26.249999998137355,
'hstack': 0.4170000005979091,
'transpose': 6.172999997623265,
'h__roundToOdd': 28.43499999353662,
'h__computeNormalVectors': 1.4570000020321459,
'h__getBounds': 0.7490000000689179,
'final calculations': 0.6220000004395843,
'h__getMatches': 108.88700000755489
```
So it looks like `h__getMatches`, `h__roundToOdd`, and `h__updateEndsByWalking` are the ones we could work on to optimize.
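For context, per-function timings like the table above can be accumulated with a small decorator; a minimal sketch (the `round_to_odd` body here is a hypothetical stand-in, not the toolbox's actual `h__roundToOdd` implementation):

```python
import time
from collections import defaultdict

TIMINGS = defaultdict(float)  # label -> accumulated wall-clock seconds

def timed(label):
    """Accumulate wall-clock time per label, like the table above."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                TIMINGS[label] += time.perf_counter() - start
        return wrapper
    return decorator

@timed('h__roundToOdd')
def round_to_odd(x):
    # hypothetical stand-in: round to the nearest odd integer
    v = int(round(x))
    return v if v % 2 == 1 else v + 1

result = round_to_odd(48.3)  # time recorded under 'h__roundToOdd'
```

Printing `dict(TIMINGS)` at the end of a run reproduces the kind of breakdown shown above.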
Alternatively, I believe @ver228 has an alternative `compute_skeleton_and_widths` method; is this correct? The method specification is:

- input: `h_ventral_contour`, `h_dorsal_contour`
- output: `h_skeleton`, `h_widths`
All input and output variables are "heterocardinal", that is, they can have a different number of points per frame.
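To make the spec above concrete, here is a sketch of the call signature only; the list-of-`(2, n)`-arrays layout and the point-pairing placeholder inside are assumptions for illustration, not the actual skeletonization algorithm:

```python
import numpy as np

def compute_skeleton_and_widths(h_ventral_contour, h_dorsal_contour):
    """Interface sketch only: each argument is a list holding one (2, n_i)
    array per frame, where n_i may differ between frames
    ("heterocardinal")."""
    h_skeleton, h_widths = [], []
    for ventral, dorsal in zip(h_ventral_contour, h_dorsal_contour):
        # Placeholder logic, assuming the two sides are already paired
        # point-for-point: midpoints form the skeleton, and the distance
        # between paired points gives the width at each point.
        skeleton = (ventral + dorsal) / 2.0
        widths = np.linalg.norm(ventral - dorsal, axis=0)
        h_skeleton.append(skeleton)
        h_widths.append(widths)
    return h_skeleton, h_widths
```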
Hello,
It took me longer than expected, but I linked the Shafer basic contours with my Python code. I compared with the pre-features `compute_skeleton_and_widths`, either taking all the contour points or first normalising the contour (taking only 49 equally spaced points). The run times on my laptop are:
- Avelino: 3.97 s
- OpenWorm: 31.13 s
- OpenWorm normalised: 12.08 s
The code I used is in: https://github.com/ver228/Multiworm_Tracking/blob/master/work_on_progress/Openworm_tests/segworm_skeleton.py
The package that does the segmentation is in: https://github.com/ver228/Multiworm_Tracking/tree/master/trackWorms/segWormPython
You might have to compile the Cython files, since my builds are for OSX. The code to create the binaries is in https://github.com/ver228/Multiworm_Tracking/tree/master/trackWorms/segWormPython/cythonFiles; you have to run `python3 setup.py build_ext --inplace`.
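For reference, a `setup.py` driving that build command is typically only a few lines; this is a generic sketch (the glob pattern is hypothetical, and it requires Cython plus a C compiler to be installed):

```python
# Minimal build script for Cython extensions; run with:
#   python3 setup.py build_ext --inplace
from setuptools import setup
from Cython.Build import cythonize

setup(
    # hypothetical: compile every .pyx file in this directory
    ext_modules=cythonize(["*.pyx"]),
)
```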
I guess it would need a specific C/C++ compiler. On OSX it is quite trivial, but I do not know how to do it on Windows.
In conclusion, my current version based on segWorm is faster, but the OpenWorm one might be more accurate (just by looking at the widths plots). It might be possible to speed up the OpenWorm skeletonization by rewriting some of the code in Cython or C, and/or by downsampling the number of points in the contour.
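The downsampling idea mentioned above can be sketched as arc-length resampling of a contour to a fixed number of points (e.g. the 49 used in the normalised comparison); the function name and `(2, n)` layout here are my own assumptions:

```python
import numpy as np

def resample_contour(contour, n_points=49):
    """Resample a (2, n) polyline to n_points spaced equally along its
    arc length -- the kind of contour downsampling suggested above."""
    # cumulative arc length along the polyline
    seg_lengths = np.linalg.norm(np.diff(contour, axis=1), axis=0)
    s = np.concatenate(([0.0], np.cumsum(seg_lengths)))
    # equally spaced arc-length positions, then linear interpolation
    target = np.linspace(0.0, s[-1], n_points)
    x = np.interp(target, s, contour[0])
    y = np.interp(target, s, contour[1])
    return np.vstack([x, y])
```

Running the slow pure-Python skeletonization on 49 points instead of the full contour is where the "OpenWorm normalised" timing above comes from.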
Cheers, Avelino
Based on this I agree, @ver228: at least for the one example video, it seems @JimHokanson's method (labeled as "OpenWorm"), while slower, is more accurate.
Given that @KezhiLi's CV algorithm is currently running 50x slower than realtime, optimizing `compute_skeleton_and_widths` might not be something we need to worry about, as it will not be the bottleneck in the processing: 60 seconds to process a 193-second video means it is still operating about 3x faster than realtime.
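For the record, the realtime factor quoted above works out as:

```python
video_duration_s = 193   # length of the example video
processing_time_s = 60   # time spent computing skeleton and widths
speedup = video_duration_s / processing_time_s
print(round(speedup, 1))  # → 3.2, i.e. about 3x faster than realtime
```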
I'm not 100% sure that my algorithm works on problematic videos, whereas the original algorithm supposedly handles lots of problematic contours. I'd like to incorporate the work of @ver228, but I'm not sure of the best way forward, since I'm a bit hesitant to throw the Multiworm_Tracking repo into the mix. Perhaps my hesitation is that currently the Multiworm_Tracking repo seems a bit scattered and ephemeral. If it isn't, and the goal is indeed to continue to build up Multiworm_Tracking rather than using it for a bit of testing here and there, I'd like to discuss whether we want to start writing code that depends on it.
Link to repo: https://github.com/ver228/Multiworm_Tracking
P.S. Avelino, I know sentiment is sometimes really hard to read via typing, so I'd like to clarify that the above is meant in the nicest way possible; I just don't see the big picture of where the repo is going.
The plan is definitely to continue to build up Multiworm_Tracking to the point that you would be comfortable writing code that depends on it. @ver228 and I were just discussing this yesterday. Could we have a discussion about what would be required?
Re: speed, @KezhiLi's algorithm may only be run on relatively rare challenging segments (e.g. self-intersections, thick food, and worm-worm collisions) so it may not be the bottleneck. Also, while it's better than realtime on single worms (which is an excellent improvement) the multiworm case takes longer.
Hello,
I finally finished organising and documenting the Multiworm_Tracking repository. I guess there are still a few things to do, like uploading example files, but I will do them in the next few days. @JimHokanson, no offence taken; the repo was a bit scattered and not that well organised, which is why it looked more ephemeral than it really is. I grouped all the code that I consider finished or almost finished in the folder MWTracker. Any feedback would be useful.
Cheers, AEJ
@ver228 @aexbrown The changes look good. I think the way we'll move forward, then, is in issues and pull requests targeted at that repo. I'll try to provide suggestions as issues when I have a bit more bandwidth.
As far as this issue is concerned, I'll work on incorporating calls to Avelino's code into the codebase. There will probably also be some pull requests made to the SegwormPython code as a result.
Thanks, Jim
Hello, I uploaded a small sample video and some of the output files of the multiworm tracker into:
I generated the output using the script https://github.com/ver228/Multiworm_Tracking/blob/master/examples/process_single_file.py.
Cheers, AEJ
I suspect that this issue can be closed. @JimHokanson please re-open if not.
Running `test_normalized_worm_creation.py`, I've found that 85% (59.311 s) of the total running time of 69.91 s comes from computing the skeleton and widths from the contour. About 60% (~6 s) of the remaining ~10 s comes just from loading the `.mat` files, which probably can't be sped up, so speeding up the skeleton and widths code is likely our "final frontier" of low-hanging fruit in the pre-features -> features optimization.