mariuszhermansdorfer / SandWorm

Augmented Reality Sandbox for Grasshopper & Rhino
MIT License

Fix bugs in analysis class #27

Closed · mariuszhermansdorfer closed 4 years ago

mariuszhermansdorfer commented 4 years ago

Hey @philipbelesky, it seems I have merged your most recent PR a bit prematurely. Got to test it with the Kinect just now and it doesn't seem to be working properly. Please see the linked video. Tried both in meters and millimeters with the same result.

Also, the new method is much slower than the previous one (20-25 vs. 1-3 ms). Disabling the call to Analysis.AnalysisManager.GetPixelColor(depthPoint) makes things snappy again.


Could you please have a look at this?

philipbelesky commented 4 years ago

Sure, happy to have a look at it tomorrow. In the model/definition, is the analysis method 'elevation' and the model in mm? I might have a go at making a unit test for performance at the same time, so we can get a reliable benchmark for measuring performance in future cases.

mariuszhermansdorfer commented 4 years ago

Thanks. Yeah, I thought it might be a units issue and switched to millimeters. Unfortunately it gave me the same results.

Sergio0694 commented 4 years ago

@philipbelesky @mariuszhermansdorfer Unit tests are really not the best way to measure performance; use BenchmarkDotNet! It's super easy to use and extremely powerful. It's the same library I used to create that first benchmark I shared the other day.
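For illustration, a minimal BenchmarkDotNet sketch in that spirit, comparing an array lookup against a dictionary lookup (the two strategies debated later in this thread); the table size and contents are placeholders rather than SandWorm's actual data:

```csharp
using System.Collections.Generic;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class ColorLookupBenchmark
{
    private const int Size = 10_000;
    private int[] _arrayTable;
    private Dictionary<int, int> _dictTable;

    [GlobalSetup]
    public void Setup()
    {
        // Fill both tables with identical placeholder data.
        _arrayTable = new int[Size];
        _dictTable = new Dictionary<int, int>(Size);
        for (int i = 0; i < Size; i++)
        {
            _arrayTable[i] = i;
            _dictTable[i] = i;
        }
    }

    [Benchmark(Baseline = true)]
    public long ArrayLookup()
    {
        long sum = 0;
        for (int i = 0; i < Size; i++)
            sum += _arrayTable[i];
        return sum;
    }

    [Benchmark]
    public long DictionaryLookup()
    {
        long sum = 0;
        for (int i = 0; i < Size; i++)
            sum += _dictTable[i];
        return sum;
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<ColorLookupBenchmark>();
}
```

Run it in Release mode (e.g. `dotnet run -c Release`) to get a table of mean timings and the dictionary-vs-array ratio.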

philipbelesky commented 4 years ago

@Sergio0694 Calling what I was looking for a 'unit test' was a poor choice of words. I think we need a way to isolate and benchmark individual functions outside of running a 'live' setup, so that we can provide a consistent set of inputs and thus more easily A/B test changes. Will give that a go with BenchmarkDotNet, thanks for the link!

philipbelesky commented 4 years ago

Hey, I've pushed up a5422c500123726b0b9e9ff1f77c4191fb2f8aa1 and e12b20a28703d47c1c53919ede8708109587f6b7 which should address this now. Could you confirm?

I figured that, given the fault is in master, it was OK to commit directly rather than via a PR, but let me know if you'd prefer otherwise in future. A couple of notes:

mariuszhermansdorfer commented 4 years ago

Thanks @philipbelesky! Good call with committing directly. I’ll test it later today and let you know about the results.

Having glanced over the code, I suspect the use of a dictionary is responsible for the performance hit. Would going back to an array type speed things up? Using millimeter values as indices would also solve your second question about having to specify a maximum value for the color range: the maximum is zero and the minimum is the sensor elevation. (It sounds reversed, but you know how the Kinect sees the world.)

If we were to approach it this way, I'd then introduce a user-defined color range span parameter to allow finer control over the visual appearance of the gradient. (Working with a model that's only 5 cm tall requires a different gradient resolution than a 30 cm tall one.) Similar to what I've shown in the video demo linked in #20. It could look like this:

```csharp
Array.Clear(vertexColors, 0, sensorHeight - colorRampSpan);
for (int i = sensorHeight - colorRampSpan; i < sensorHeight; i++)
    vertexColors[i] = /* assign color logic */;
```
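A runnable version of that sketch might look like the following; the blue-to-red interpolation is only a stand-in for the elided "assign color logic", and the parameter names simply mirror the snippet above:

```csharp
using System;
using System.Drawing;

public static class ElevationRampSketch
{
    // sensorHeight and colorRampSpan are in millimeters, matching the
    // indexing scheme suggested above.
    public static Color[] Build(int sensorHeight, int colorRampSpan)
    {
        var vertexColors = new Color[sensorHeight];

        // Everything above the ramp span keeps the default (empty) color.
        Array.Clear(vertexColors, 0, sensorHeight - colorRampSpan);

        for (int i = sensorHeight - colorRampSpan; i < sensorHeight; i++)
        {
            // Placeholder gradient: linearly interpolate blue -> red across the span.
            double t = (double)(i - (sensorHeight - colorRampSpan)) / colorRampSpan;
            vertexColors[i] = Color.FromArgb((int)(255 * t), 0, (int)(255 * (1 - t)));
        }
        return vertexColors;
    }
}
```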

philipbelesky commented 4 years ago

Ok, so assuming a 10ms baseline, changing to an array lookup rather than a dictionary lookup seems to shave off about 3ms.

Skipping the call into the AnalysisManager and instead calling the subclasses directly seems to shave off another 3ms. I suspect the remaining gap to the previous ideal implementation is thus related to the overhead of calling into the objects and their inherited methods, rather than accessing the array directly within the rows/columns loop?

If so, maybe the role of the AnalysisManager should be rethought, and its job should be to create a local copy of a single lookup table before entering the rows/columns loop? Realistically, the only time the lookup tables of different analyses will overlap is when the water level option is combined with some other option, so maybe we should just hardcode for that scenario, or intelligently create a merged lookup table that won't have clashing values?

I'm not sure how that works outside of situations where the elevation index of the pixel maps nicely to the table, though. If calling into the objects is indeed a bottleneck, then it seems we either need to accept it in cases like slope/aspect, or shift the logic for those calculations out of Abstractions.cs. Maybe setting the relevant function as a delegate would be a way around having to call into the objects on each loop iteration?
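One possible reading of the delegate idea, as a minimal sketch (the names ColorMesh and getPixelColor are hypothetical, not SandWorm's actual API): resolve the color function once before the rows/columns loop, so the loop body never dispatches through the AnalysisManager:

```csharp
using System;
using System.Drawing;

public static class DelegateSketch
{
    // getPixelColor is resolved once by the caller (e.g. from the active
    // analysis object) rather than being looked up per pixel.
    public static void ColorMesh(int rows, int columns, int[] depthPoints,
                                 Func<int, Color> getPixelColor, Color[] vertexColors)
    {
        for (int r = 0; r < rows; r++)
        {
            for (int c = 0; c < columns; c++)
            {
                int i = r * columns + c;
                vertexColors[i] = getPixelColor(depthPoints[i]);
            }
        }
    }
}
```

Note that a delegate invocation still carries some per-pixel call overhead, so copying the lookup table into a local array (as discussed below) may well be faster still.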

mariuszhermansdorfer commented 4 years ago

I had a look at the latest changes and now it works as expected. Having merged the branch with the various speed improvements Sergio contributed to, it really is a bit disappointing to see such a slowdown just because of the way we call the AnalysisManager. Short video update here.

I like the progress you've made with this class so far, @philipbelesky. My tests seem to confirm your observations: after your recent fixes we have approx. 4 ms of overhead.

I agree with your suggestion to simplify the AnalysisManager. How about going with separate lookup tables for different types of analysis?

  1. elevationLookupTable[sensorElevation - topBound], where topBound is a user-defined value to tighten/loosen the gradient ramp
  2. slopeLookupTable[90]
  3. aspectLookupTable[360]

The above would be calculated once and only re-calculated on user input. They would work as proper lookup tables, i.e. the value calculated for a given type of analysis is used as an index into the corresponding array. Following this logic, elevation color display will always be fastest: there is no need to calculate anything, just take the depth pixel value and use it as the index into the elevationLookupTable. Slope, aspect, etc. will take more time to calculate, but once the corresponding function has been computed, looking up the color should be instantaneous.
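A minimal sketch of that scheme, reusing the table names from the list above; the gradient itself and the clamping are placeholders:

```csharp
using System;
using System.Drawing;

public static class LookupTablesSketch
{
    public static Color[] SlopeLookupTable = new Color[90];    // degrees 0..89
    public static Color[] AspectLookupTable = new Color[360];  // degrees 0..359
    public static Color[] ElevationLookupTable;                // sized sensorElevation - topBound

    // Rebuilt only on user input, never per frame.
    public static void RebuildElevation(int sensorElevation, int topBound)
    {
        int size = sensorElevation - topBound;
        ElevationLookupTable = new Color[size];
        for (int i = 0; i < size; i++)
        {
            double t = (double)i / size; // placeholder gradient position
            ElevationLookupTable[i] = Color.FromArgb((int)(255 * t), (int)(255 * t), 255);
        }
    }

    // Elevation: the depth value itself is the index (clamped for safety).
    public static Color ElevationColor(int depthMm) =>
        ElevationLookupTable[Math.Min(depthMm, ElevationLookupTable.Length - 1)];

    // Slope/aspect: compute the value first, then index.
    public static Color SlopeColor(double slopeDegrees) =>
        SlopeLookupTable[(int)slopeDegrees];
}
```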

philipbelesky commented 4 years ago

The lookup tables are already separate, in that each object inheriting from MeshVisualisation calculates and stores its own distinct lookup table. The spanner in the works is the water level visualisation, which means two different lookup tables need to be consulted, or a single 'merged' lookup table created.

I think we can, hopefully, continue having the construction of the lookup table itself handled by the different classes in Analysis.cs, as these should be able to be extended nicely with user-defined ramps and to support future analysis types. However, to avoid the apparent overhead of calling into objects as part of the per-pixel loop, the lookup table can be reported by the AnalysisManager and set as a local variable before entering that loop, so the objects are never called within the loop itself. I've set up #29 to track this.
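A sketch of what the #29 change could look like; the step that fetches the active table from the AnalysisManager is assumed, so here the table simply arrives as a parameter:

```csharp
using System.Drawing;

public static class LocalTableSketch
{
    // lookupTable would be requested from the AnalysisManager once per
    // frame and held as a local, so the per-pixel loop is a plain array
    // index with no object calls.
    public static void ColorPixels(int[] depthPoints, Color[] lookupTable, Color[] vertexColors)
    {
        for (int i = 0; i < depthPoints.Length; i++)
            vertexColors[i] = lookupTable[depthPoints[i]];
    }
}
```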

As noted, those lookup tables should be cached and only updated when a relevant parameter changes, rather than each time the definition recalculates. I intended to get this going, but the time to construct the table should be pretty minimal, and the prior lookup table was rebuilt per-calculation anyway. I've set up #30 to track this.

philipbelesky commented 4 years ago

Note: I'm going to go ahead and close this, as while the current performance is still a regression, the coloring function appears to work correctly now, and there are more specific issues (#29, #30) tracking the remaining performance work. Feel free to reopen if it's easier to continue the discussion here though!