mariuszhermansdorfer / SandWorm

Augmented Reality Sandbox for Grasshopper & Rhino
MIT License

SandWorm 2.0 UI Refresh #76

Closed mariuszhermansdorfer closed 2 years ago

mariuszhermansdorfer commented 3 years ago

@BarusXXX, @philipbelesky, as you know, I've recently completed another Grasshopper project - GreenScenario. It uses custom UI elements to flatten the learning curve for newbies. Now, I'd like to adopt the same UI for SandWorm. Here are the basic elements in collapsed (left) and expanded views:

image

My suggestion would be to go with the following structure:

Traditional Grasshopper Input Parameters:

  1. Calibrate (boolean, button input)
  2. Reset (boolean, button input)
  3. Color gradient (list of colors) (as requested in #71 )
  4. Existing terrain data (custom class input) <-> used for cut & fill analysis

Group: Sensor

  1. Sensor type (drop down -> Kinect for Windows | Kinect Azure Narrow | Kinect Azure Wide)
  2. Sensor elevation (slider -> automatically scaled to drawing units)
  3. Left columns (slider -> automatically scaled to 0 - selectedDepthWidth / 2)
  4. Right columns (slider -> automatically scaled to 0 - selectedDepthWidth / 2)
  5. Top rows (slider -> automatically scaled to 0 - selectedDepthHeight / 2)
  6. Bottom rows (slider -> automatically scaled to 0 - selectedDepthHeight / 2)

Group: Analysis

  1. Output type (drop down -> Mesh | Point Cloud)
  2. Analysis type (drop down -> None | RGB | Elevation | Slope | Aspect | Cut & Fill)
  3. Color gradient range (slider -> automatically scaled to drawing units) <-> Current hard-coded cut-off point isn't flexible
  4. Contour interval (slider -> automatically scaled to drawing units) <-> Setting to 0 turns contour display off
  5. Water level (slider -> automatically scaled to drawing units) <-> Setting to 0 turns water level display off
  6. Rain density (slider) <-> Future functionality defining the distance between raindrops falling on the model

Group: Post-Processing

  1. Averaged frames (slider -> 1 to 30 range)
  2. Blur radius (slider -> 1 to 15 range)

Traditional Grasshopper Output Parameters:

  1. Geometry (tree -> Mesh | Water plane | Contour lines | Flow lines)
  2. Options (custom class)

Doing this would allow us to merge three components (Options, Mesh, Point Cloud) into one and have a plug & play setup, which shouldn't present a barrier to beginner users. I've left out the TickRate and KeepFrames parameters. Would you like to keep them, and if so, why?

Am I missing any other functionality? Would this change break any of the existing workflows with Fologram or color markers?

mariuszhermansdorfer commented 3 years ago

Here is my first stab at it. A few screengrabs showing various groups in their collapsed/expanded state: image

The component doesn't do anything yet; it's just the UI. You can play with it in this branch.

philipbelesky commented 3 years ago

Thanks for the code and for the proposal - I've always been interested in these kinds of in-component custom UIs but haven't had a chance to see a working code sample. Some comments below:

Calibrate and Reset

In the case of Calibrate and Reset, would these be better placed as custom UI buttons? I always thought that these kinds of 'temporary' boolean parameters in Grasshopper were not very user-friendly - e.g. you needed to know to use the one-off button component rather than a component that would permanently provide a true value. Presumably, if they were custom buttons we could enforce the proper 'one touch' logic here.

Sensor Elevations

While I haven't used the custom UI sliders proposed, my concern would be that traditional sliders would be a better UX in most cases, given that:

  • Precise controls/adjustments can be made by the user.
  • You can create multiple setting objects and 'reuse' the values easily (perhaps this is only useful during development?)

Geometry Output Parameters

I'm not sure about the logic and the need to combine the Mesh with the analysis geometry - this makes it more difficult to separate out each piece, particularly for new users who are not fully on top of tree path logic.

Given the component 'knows' what analysis is currently engaged, I would propose that we just dynamically add the output parameters needed with specific labels and geometry types. This should provide a good UX as the relevant parameters appear on-demand.
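If we went down the dynamic route, the parameter bookkeeping might be sketched roughly as below. This assumes the standard GH_ComponentParamServer calls; the method name, the `waterPlaneEnabled` flag, and the single fixed terrain output are illustrative, not SandWorm's actual code:

```csharp
// Illustrative sketch: swap optional output parameters in and out
// when the analysis selection changes. Index 0 (terrain mesh) is fixed.
private void UpdateOutputParams(bool waterPlaneEnabled)
{
    // Drop every optional output beyond the fixed terrain mesh.
    while (Params.Output.Count > 1)
        Params.UnregisterOutputParameter(Params.Output[Params.Output.Count - 1]);

    if (waterPlaneEnabled)
    {
        var water = new Grasshopper.Kernel.Parameters.Param_Mesh
        {
            Name = "Water Surface",
            NickName = "W"
        };
        Params.RegisterOutputParam(water);
    }

    // Tell Grasshopper the parameter layout changed and recompute.
    Params.OnParametersChanged();
    ExpireSolution(true);
}
```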

Tick Rate

I find tick rate very valuable as it seems necessary to control when:

  1. Users are working on lower power machines
  2. Users are performing extensive downstream analysis on the mesh

Without this affordance I've found it very easy to get into a frustrating situation where the definition is essentially locked/unresponsive because it takes longer to recalculate than the mesh takes to generate.

One way around this would be to implement the entire logic as an asynchronous operation (per https://github.com/specklesystems/GrasshopperAsyncComponent; I've implemented this in https://github.com/philipbelesky/Caribou). Even in this case though, I think tick rate control may be necessary to run certain definitions.

Keep Frames

I had imagined the keepframes functionality would be very valuable when you want to keep a record of the mesh's past, e.g. if simulating erosion in the sandbox itself. This is probably pretty niche though, and I think it could be accommodated in the definition itself using the delay-propagate component and a baking component. So no problems removing that option.

mariuszhermansdorfer commented 3 years ago

Calibrate and Reset

Presumably, if they were custom buttons we could enforce the proper 'one touch' logic here.

I agree, these make more sense in the custom UI - it's just that I don't have a button UI element developed yet :) I could add this to the CustomComponent section, though. Alternatively, we could add logic that automatically connects GH buttons to the SandWorm component the moment it's placed on the canvas. I'm doing this in GreenScenario already and it's very straightforward.

    public override void AddedToDocument(GH_Document document)
    {
        // Create a native GH button, place it to the left of this
        // component and wire it to the first input parameter.
        var pivot = Attributes.Pivot;
        var generate = new Grasshopper.Kernel.Special.GH_ButtonObject();
        generate.CreateAttributes();
        generate.NickName = "Generate";
        generate.Attributes.Pivot = new PointF(pivot.X - 250, pivot.Y - 11);
        generate.Attributes.ExpireLayout();
        generate.Attributes.PerformLayout();
        document.AddObject(generate, false);
        Params.Input[0].AddSource(generate);
    }

Sensor Elevations

  • Precise controls/adjustments can be made by the user.

Double-clicking on a slider allows users to define a slider domain and precise input. There is a tooltip hint about this functionality, but I can imagine this being non-intuitive for beginners. But then, it would be the same for a native GH slider - one also needs to know that there are more options hiding under the right-click menu... image

  • You can create multiple setting objects and 'reuse' the values easily (perhaps this is only useful during development?)

Where else do we need sensor elevation as input? It would be coming out as one of the options from the SandWorm component anyway. Would this be enough?

Geometry Output Parameters

Given the component 'knows' what analysis is currently engaged, I would propose that we just dynamically add the output parameters needed with specific labels and geometry types.

I like the idea! It's much cleaner and simplifies the graph by omitting the tree branch selectors necessary for my initial idea to work.

Tick Rate

I find tick rate very valuable as it seems necessary to control when:

Makes sense. I have two ideas on how to address this:

  1. A drop down menu. 'Refresh rate' - 0 | 1 | 5 | 15 | 30 FPS. (from Kinect's official specs sheet)
  2. A radio button. 'Refresh rate' - interactive | delayed. Interactive would always default to the max available FPS for the chosen FOV; delayed would be set to 1 Hz or similar.

Would this work? Any preference?

Keep Frames

I had imagined the keepframes functionality would be very valuable when you want to keep a record of the mesh's past, e.g. if simulating erosion in the sandbox itself. This is probably pretty niche though, and I think it could be accommodated in the definition itself using the delay-propagate component and a baking component. So no problems removing that option.

Interesting use case. Let's remove it for the sake of simplicity since this is definitely an advanced workflow and we can assume a higher level of Grasshopper skills.

philipbelesky commented 3 years ago

Alternatively, we could add logic which automatically connects GH buttons to the SandWorm component the moment it's placed on the canvas. I'm doing this in GreenScenario already and it's very straight-forward.

Ah, I forgot that was possible. Auto-adding the buttons sounds like the better option.

Double-clicking on a slider allows users to define a slider domain and precise input. There is a tooltip hint about this functionality, but I can imagine this being non-intuitive for beginners. But then, it would be the same for a native GH slider - one also needs to know that there are more options hiding under the right-click menu...

If this level of precision is possible, I don't have a strong preference for the native sliders. They might be slightly easier to use, but also slightly more annoying to set up. Maybe the 'double click for more' affordance could be shown as a tooltip on hover?

Where else do we need sensor elevation as input? It would be coming out as one of the options from the SandWorm component anyway. Would this be enough?

This is probably not worth worrying about. From memory, I was mostly doing this when I wanted to 'save' other configurations and quickly swap between them, but that was just a very niche use case when debugging.

Makes sense. I have two ideas on how to address this:

  1. A drop down menu. 'Refresh rate' - 0 | 1 | 5 | 15 | 30 FPS. (from Kinect's official specs sheet)
  2. A radio button. 'Refresh rate' - interactive | delayed. Interactive would always default to the max available FPS for the chosen FOV; delayed would be set to 1 Hz or similar.

Maybe a dropdown along the lines of Interactive | 15 FPS | 5 FPS | 1 FPS | 0.2 FPS, with interactive (or 'auto' or 'max'?) as the default?
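Whatever values end up in the dropdown, the gating itself can stay very small. A minimal sketch, assuming a per-component gate object queried on each Kinect tick (names are illustrative, not SandWorm's actual code):

```csharp
using System.Diagnostics;

// Sketch of tick-rate gating: a new solve is allowed only once enough
// time has passed for the chosen refresh rate; fps <= 0 means
// "interactive" (never throttle).
public class TickRateGate
{
    private readonly Stopwatch _timer = Stopwatch.StartNew();
    private double _lastSolveMs = double.NegativeInfinity;

    public bool ShouldSolve(double fps)
    {
        if (fps <= 0)
            return true; // interactive: solve on every tick

        double nowMs = _timer.Elapsed.TotalMilliseconds;
        if (nowMs - _lastSolveMs < 1000.0 / fps)
            return false; // too soon, skip this tick

        _lastSolveMs = nowMs;
        return true;
    }
}
```

A 0.2 FPS entry then just means a 5000 ms minimum interval between solves.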

mariuszhermansdorfer commented 3 years ago

Cool, I'll add a drop down for Tick Rate as you suggested and work on automatic insertion/removal of output geometry parameters.

All sliders already have hover tooltips informing users about their purpose and additional controls hidden behind a double-click: image

Will post an update early next week with all functionality from #75 merged as well.

mariuszhermansdorfer commented 2 years ago

@philipbelesky, @BarusXXX new branch is live now: https://github.com/mariuszhermansdorfer/SandWorm/tree/feature/SandWorm2-0

image

What works:

  1. Adjusting sensor elevation manually
  2. Trimming of columns and rows through built-in sliders
  3. Analysis types (None, Elevation, Slope, Aspect)
  4. Contours (@BarusXXX sped this up somewhat, but it's still fairly slow for our needs)
  5. Frame averaging
  6. Gaussian blur
  7. Resetting

What doesn't:

  1. WFOV mode for Kinect Azure (confirmed bug)
  2. Kinect for Windows (untested)
  3. Calibration (haven't ported the code yet)
  4. Point Cloud output
  5. Analysis types (RGB, Cut & Fill)
  6. 'Color gradient range' slider doesn't do anything yet
  7. Water level surface is misaligned
  8. Rain density slider doesn't do anything yet
  9. Moving the 'Averaged frames' slider can result in random mesh movements along the Z axis (this is an old bug, still unresolved)
  10. Tested only with drawing units set to millimeters; needs further testing with other units
  11. Automatic adding of connected GH buttons when component is added to canvas is missing
  12. Tick Rate drop down is missing
  13. Automatic adding of output parameters based on what kind of geometry is being generated (mesh/water level/contour lines/flow directions etc.)

It would be great if you could test it and see if there are more bugs you can find. Any code changes should preferably come as PRs from separate branches since there is a lot to be worked on here.

mariuszhermansdorfer commented 2 years ago

TODO List, @philipbelesky, @BarusXXX feel free to add to this one and - if you have time - address some of these bugs.

mariuszhermansdorfer commented 2 years ago

@philipbelesky, @BarusXXX, this is how far I've come before my trip to Austria. The most recent commit adds variable output parameters depending on the geometry type created (currently only set up to work with the water plane). It is a lot of hassle and still ends up being quite buggy. I'm not sure it is worth it versus just having four fixed outputs:

  1. terrain
  2. contour lines
  3. water surface
  4. water flowlines

Not sure about the current way of handling point cloud output either. Currently we are only displaying it, but it's neither selectable downstream nor bakeable. We should probably leverage Tarsier's code to output proper geometry while still maintaining speed.

The biggest outstanding issue for now would be to test the current setup with Kinect for Windows (don't have access here) and figure out why WFOV is not working.

BarusXXX commented 2 years ago

I ran some tests at home to check the z heights at various points in the field of vision from a 1.5 m distance to the objective, and the z displacement is spot on. Relative distances are great, within 1 to 3 mm (mainly from jitter); the absolute values vary slightly more than this, and get worse the greater the distance from the camera (no surprise there). I will repeat these tests in the box next time I am at uni to confirm. @mariuszhermansdorfer great work!

BarusXXX commented 2 years ago

@philipbelesky, @BarusXXX, this is how far I've come before my trip to Austria. The most recent commit adds variable output parameters depending on the geometry type created (currently only set up to work with the water plane). It is a lot of hassle and still ends up being quite buggy. I'm not sure it is worth it versus just having four fixed outputs:

  1. terrain
  2. contour lines
  3. water surface
  4. water flowlines

Not sure about the current way of handling point cloud output either. Currently we are only displaying it, but it's neither selectable downstream nor bakeable. We should probably leverage Tarsier's code to output proper geometry while still maintaining speed.

The biggest outstanding issue for now would be to test the current setup with Kinect for Windows (don't have access here) and figure out why WFOV is not working.

WFOV fails here because the verticalTiltCorrectionLookupTable array is still sized for NFOV. The reset procedure in SolveInstance needs to be triggered automatically when the Kinect type is changed by the user; currently, changing the dropdown injects the Kinect settings without disposing the sensor. The main issue - the sensor being in a disposed state when switching to WFOV - is caused by calibration.TransformTo3D returning null for the translation vector, which triggers the catch clause that disposes the sensor here.

@mariuszhermansdorfer, in a nutshell regarding the WFOV issue:

Source of the issue: TransformTo3D returns null if a point is invalid, and in WFOV we have invalid points in the corners.

Solution 1: Handle the nulls by returning an (X, Y, 0) vector, where X and Y are the respective positions of the mapped 2D pixels, using the elevation to place the scaled pixels in the corners.

I have part of this solution in my unpublished changes: mapping of null corners

Solution 2: Handle nulls by collapsing mesh faces that contain them (cleans the mesh).
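The fallback described in Solution 1 could be sketched roughly like this (the names, tuple shape, and `unitsPerPixel` scale factor are assumptions for illustration, not SandWorm's actual API):

```csharp
// Sketch of Solution 1: when the 3D transform yields no valid point,
// fall back to a flat (x, y, 0) placement derived from the 2D pixel
// coordinates, so corner pixels keep their grid position.
public static class DepthFallback
{
    public static (double X, double Y, double Z) SafePoint(
        (double X, double Y, double Z)? transformed,
        int pixelX, int pixelY, double unitsPerPixel)
    {
        if (transformed.HasValue)
            return transformed.Value; // valid point: pass through

        // Invalid corner pixel: keep its grid position, zero elevation.
        return (pixelX * unitsPerPixel, pixelY * unitsPerPixel, 0.0);
    }
}
```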

mariuszhermansdorfer commented 2 years ago

@BarusXXX, thanks for pinning it down!

the reset proceedure in Solve Instance needs to be triggered automatically when the Kinect Type is changed by the user. Currently changing the dropdown injects the Kinect settings without disposing the sensor.

fixed here: https://github.com/mariuszhermansdorfer/SandWorm/blob/dfd08bf5b25db3c28b07fcb660255200a9f39555/SandWorm/ComponentsUI/SandWormComponentUI.cs#L183-L186 and here: https://github.com/mariuszhermansdorfer/SandWorm/blob/dfd08bf5b25db3c28b07fcb660255200a9f39555/SandWorm/Components/SandWormComponent.cs#L138-L144

Source of the issue: TransformTo3D returns null if a point is invalid, and in WFOV we have invalid points in the corners.

Solution 1: Handle the nulls by returning an (X, Y, 0) vector, where X and Y are the respective positions of the mapped 2D pixels, using the elevation to place the scaled pixels in the corners.

I have part of this solution in my unpublished changes:

Could you please share the code? It seems that the moment TransformTo3D encounters an invalid pixel, it gives up entirely rather than gently returning a null and moving on to the next one...

mariuszhermansdorfer commented 2 years ago

Fixed WFOV support. We can keep track of which vertices are read as null here:

https://github.com/mariuszhermansdorfer/SandWorm/blob/93aa83581c6c3e803775a029461012cc36b87cc8/SandWorm/Utilities/KinectAzureController.cs#L117-L123

and use this information while setting up the mesh, to only create faces which have all 4 valid pixels: https://github.com/mariuszhermansdorfer/SandWorm/blob/93aa83581c6c3e803775a029461012cc36b87cc8/SandWorm/Utilities/Core.cs#L33-L35
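The face-filtering step reduces to a small loop over the vertex grid. A self-contained sketch of the idea (illustrative names, not the linked implementation):

```csharp
using System.Collections.Generic;

// Sketch of the face-culling idea: given per-vertex validity flags
// for a width x height grid, emit a quad face only when all four
// corner vertices are valid.
public static class MeshFaces
{
    public static List<(int A, int B, int C, int D)> CreateQuads(
        bool[] valid, int width, int height)
    {
        var faces = new List<(int, int, int, int)>();
        for (int y = 0; y < height - 1; y++)
        for (int x = 0; x < width - 1; x++)
        {
            int i = y * width + x; // top-left vertex index
            int j = i + 1;         // top-right
            int k = i + width;     // bottom-left
            int l = k + 1;         // bottom-right
            if (valid[i] && valid[j] && valid[k] && valid[l])
                faces.Add((i, j, l, k)); // wind the quad consistently
        }
        return faces;
    }
}
```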

It's interesting to see what kind of design choices Microsoft's hardware department made - one can see that the FOV doesn't form a perfect circle and is cut off in the top and left corners. This is consistent with how the Azure Kinect Viewer displays information, hence I don't think it's a bug on our side.

image

Currently trimming off rows and columns requires hitting the reset button for changes to take effect, I'll fix this in a commit tomorrow.

mariuszhermansdorfer commented 2 years ago

Added support for RGB analysis:

Narrow FOV: image

Wide FOV: image

The smaller overlap of RGB values with the depth image in the WFOV is consistent with Kinect's documentation: image

There is a slight performance penalty associated with transforming the RGB values to the depth space, but overall it's fairly negligible. This should open up possibilities of using QR codes or colorful beacons to guide model creation, as already explored by @philipbelesky.
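For reference, the color-to-depth mapping can be done with the Microsoft.Azure.Kinect.Sensor C# wrapper roughly as below. This is a hedged sketch, not SandWorm's actual code path; device setup is omitted and `calibration`/`capture` are assumed to come from a running sensor:

```csharp
using Microsoft.Azure.Kinect.Sensor;

public static class ColorMapping
{
    // Re-projects the color image into the depth camera's geometry,
    // so each depth pixel gets a matching RGB value.
    public static Image MapColorToDepth(Calibration calibration, Capture capture)
    {
        using (Transformation transformation = calibration.CreateTransformation())
        {
            return transformation.ColorImageToDepthCamera(capture);
        }
    }
}
```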

mariuszhermansdorfer commented 2 years ago

@philipbelesky, @BarusXXX Kinect for Windows is now fully supported as well. I suspected that the depth data would also have some barrel distortion, which we had never corrected for. Now it's taken care of. We get a butterfly-shaped result consistent with Kinect Azure:

image

Also here, the RGB and the depth cameras don't overlap fully: image

Replacing our former logic with the new one made it not only more accurate but also slightly faster. On my laptop, I'm consistently getting sub-10 ms results at full resolution and sub-20 ms with a reasonable number of contour lines displayed.

mariuszhermansdorfer commented 2 years ago

Moved the calibrate function from an external button to a checkbox in the Sensor tab of the UI so that it is less prone to accidental triggering. Unintended use of the calibration function can have undesired consequences, and I never really liked having these two so close to each other.

Now, users first need to explicitly check the corresponding box and then hit the reset button to confirm that they really want the plugin to set the Sensor elevation value for them. This is explained in the corresponding tooltip.

image

The underlying functionality still hasn't been implemented, so for now this is only a UI tweak.