Closed — mariuszhermansdorfer closed this issue 3 years ago.
Here is my first stab at it. A few screengrabs showing various groups in their collapsed/expanded state:
The component doesn't do anything yet, it's just the UI. You can play with it in this branch
Thanks for the code and for the proposal - I've always been interested in these kinds of in-component custom UIs but haven't had a chance to see a working code sample. Some comments below:
In the cases of Calibrate and Reset, would these be better placed as custom UI buttons? I always thought that these kinds of 'temporary' boolean parameters in Grasshopper were not very user-friendly, e.g. you needed to know to use the one-off button component rather than a component that would permanently provide a true value. Presumably, if they were custom buttons we could enforce the proper 'one touch' logic here.
While I haven't used the custom UI sliders proposed, my concern would be that traditional sliders would be a better UX in most cases given that:
I'm not sure about the logic of combining the Mesh with the analysis geometry - this makes it more difficult to separate out each piece, particularly for new users who are not fully on top of tree path logic.
Given the component 'knows' what analysis is currently engaged, I would propose that we just dynamically add the output parameters needed with specific labels and geometry types. This should provide a good UX as the relevant parameters appear on-demand.
I find tick rate very valuable as it seems necessary to control when:
Without this affordance I've found it very easy to get into a frustrating situation where the definition is essentially locked/unresponsive as the definition takes longer to calculate than the mesh takes to generate.
One way around this would be to implement the entire logic as an asynchronous operation (per https://github.com/specklesystems/GrasshopperAsyncComponent; I've implemented this in https://github.com/philipbelesky/Caribou). Even in this case though, I think tick rate control may be necessary to run certain definitions.
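For what it's worth, the async pattern from that repo boils down to subclassing its WorkerInstance; a minimal sketch (method names as I recall them from the GrasshopperAsyncComponent README, so treat as approximate; the helper and field names are placeholders):

```csharp
// Sketch only: assumes the WorkerInstance base class from
// https://github.com/specklesystems/GrasshopperAsyncComponent
public class MeshWorker : WorkerInstance
{
    private Mesh _mesh; // hypothetical result field

    public override void DoWork(Action<string, double> ReportProgress, Action Done)
    {
        // Heavy lifting runs off the UI thread, so the canvas stays responsive
        if (CancellationToken.IsCancellationRequested) return;
        _mesh = BuildMeshFromDepthFrame(); // hypothetical helper
        Done();
    }

    public override void SetData(IGH_DataAccess DA)
    {
        // Results are marshalled back to the component on the UI thread
        DA.SetData(0, _mesh);
    }

    // GetData and Duplicate overrides omitted for brevity
}
```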
I had imagined the keepframes functionality would be very valuable for when you want to keep a record of the mesh's past, e.g. if simulating erosion in the sandbox itself. This is probably pretty niche though, and I think it could be accommodated in the definition itself using the delay-propagate component and a baking component. So no problem removing that option.
Calibrate and Reset
Presumably, if they were custom buttons we could enforce the proper 'one touch' logic here.
I agree, these make more sense in the custom UI - it's just that I don't have a button UI element developed yet :) I could add this to the CustomComponent section, though. Alternatively, we could add logic which automatically connects GH buttons to the SandWorm component the moment it's placed on the canvas. I'm doing this in GreenScenario already and it's very straightforward.
public override void AddedToDocument(GH_Document document)
{
    // Place a native GH button just left of this component and wire it to the first input
    PointF pivot = Attributes.Pivot;

    var generate = new Grasshopper.Kernel.Special.GH_ButtonObject();
    generate.CreateAttributes();
    generate.NickName = "Generate";
    generate.Attributes.Pivot = new PointF(pivot.X - 250, pivot.Y - 11);
    generate.Attributes.ExpireLayout();
    generate.Attributes.PerformLayout();

    document.AddObject(generate, false);
    Params.Input[0].AddSource(generate);
}
Sensor Elevations
- Precise controls/adjustments can be made by the user.
Double clicking on a slider allows users to define a slider domain and precise input. There is a tooltip hint about this functionality, but I can imagine this being non-intuitive for beginners. But then, it would be the same for a native GH slider - one also needs to know that there are more options hiding under the right-click menu...
- You can create multiple setting objects and 'reuse' the values easily (perhaps this is only useful during development?)
Where else do we need sensor elevation as input? It would be coming out as one of the options from the SandWorm component anyway. Would this be enough?
Geometry Output Parameters
Given the component 'knows' what analysis is currently engaged, I would propose that we just dynamically add the output parameters needed with specific labels and geometry types.
I like the idea! It's much cleaner and simplifies the graph by omitting the tree branch selectors necessary for my initial idea to work.
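A rough sketch of what this could look like (the parameter name and method below are placeholders; RegisterOutputParam, UnregisterOutputParameter and OnParametersChanged are the actual Grasshopper param-server API): check whether the analysis-specific output already exists, and add or remove it when the analysis mode changes.

```csharp
// Hypothetical sketch: swap output parameters when the analysis mode changes
private void UpdateOutputs(bool showWaterSurface)
{
    bool hasWater = Params.Output.Any(p => p.Name == "Water Surface");

    if (showWaterSurface && !hasWater)
    {
        // Add a typed, labelled output on demand
        var param = new Grasshopper.Kernel.Parameters.Param_Mesh
        {
            Name = "Water Surface",
            NickName = "WS",
            Access = GH_ParamAccess.item
        };
        Params.RegisterOutputParam(param);
    }
    else if (!showWaterSurface && hasWater)
    {
        var param = Params.Output.First(p => p.Name == "Water Surface");
        Params.UnregisterOutputParameter(param);
    }

    // Tell Grasshopper the layout changed and recompute
    Params.OnParametersChanged();
    ExpireSolution(true);
}
```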
Tick Rate
I find tick rate very valuable as it seems necessary to control when:
Makes sense. I have two ideas on how to address this:
- A drop down menu. 'Refresh rate' - 0 | 1 | 5 | 15 | 30 FPS (from Kinect's official spec sheet)
- A radio button. 'Refresh rate' - interactive | delayed. Interactive would always default to the max available FPS for a chosen FOV, delayed would be set to 1 Hz or similar.

Would this work? Any preference?
Keep Frames
I had imagined the keepframes functionality would be very valuable for when you want to keep a record of the mesh's past, e.g. if simulating erosion in the sandbox itself. This is probably pretty niche though, and I think it could be accommodated in the definition itself using the delay-propagate component and a baking component. So no problem removing that option.
Interesting use case. Let's remove it for the sake of simplicity since this is definitely an advanced workflow and we can assume a higher level of Grasshopper skills.
Alternatively, we could add logic which automatically connects GH buttons to the SandWorm component the moment it's placed on the canvas. I'm doing this in GreenScenario already and it's very straight-forward.
Ah, I forgot that was possible. Auto-adding the buttons sounds like the better option.
Double clicking on a slider allows users to define a slider domain and precise input. There is a tooltip hint about this functionality, but I can imagine this being non-intuitive for beginners. But then, it would be the same for a native GH slider - one also needs to know that there are more options hiding under the right-click menu...
If this level of precision is possible, I don't have a strong preference for the native sliders. They might be slightly easier to use, but also slightly more annoying to set up. Maybe the 'double click for more' affordance could be shown as a tooltip on hover?
Where else do we need sensor elevation as input? It would be coming out as one of the options from the SandWorm component anyway. Would this be enough?
This is probably not worth worrying about. From memory, I was mostly doing this when I wanted to 'save' other configurations and quickly swap between them, but that was just a very niche use case when debugging.
Makes sense. I have two ideas on how to address this:
- A drop down menu. 'Refresh rate' - 0 | 1 | 5 | 15 | 30 FPS (from Kinect's official spec sheet)
- A radio button. 'Refresh rate' - interactive | delayed. Interactive would always default to the max available FPS for a chosen FOV, delayed would be set to 1 Hz or similar.
Maybe a dropdown along the lines of Interactive | 15 FPS | 5 FPS | 1 FPS | 0.2 FPS, with interactive (or 'auto' or 'max'?) as the default?
Cool, I'll add a drop down for Tick Rate as you suggested and work on automatic insertion/removal of output geometry parameters.
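One possible shape for the throttle (field name hypothetical; GH_Document.ScheduleSolution is the actual Grasshopper API): after each solve, schedule the next one only once the chosen interval has elapsed, so slow definitions never pile up behind incoming frames.

```csharp
// Hypothetical throttle: the dropdown sets _tickRateMs; "interactive" maps to the
// sensor's native frame interval for the chosen FOV.
private int _tickRateMs = 33; // e.g. ~30 FPS

protected override void SolveInstance(IGH_DataAccess DA)
{
    // ... read the latest depth frame and output geometry ...

    // Re-expire this component after the chosen delay; Grasshopper will not
    // start a new solution while the current one is still running.
    OnPingDocument()?.ScheduleSolution(Math.Max(1, _tickRateMs),
        doc => ExpireSolution(false));
}
```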
All sliders already have hover tooltips informing users about their purpose and additional controls hidden behind a double-click:
Will post an update early next week with all functionality from #75 merged as well.
@philipbelesky, @BarusXXX new branch is live now: https://github.com/mariuszhermansdorfer/SandWorm/tree/feature/SandWorm2-0
What works:
What doesn't:
- Tick Rate drop down is missing

It would be great if you could test it and see if there are more bugs you can find. Any code changes should preferably come as PRs from separate branches, since there is a lot to be worked on here.
TODO List, @philipbelesky, @BarusXXX feel free to add to this one and - if you have time - address some of these bugs.
public override BoundingBox ClippingBox
@philipbelesky, @BarusXXX, this is how far I've come before my trip to Austria. The most recent commit is adding variable output parameters depending on the geometry type created (currently only setup to work with the water plane). It is a lot of hassle and still ends up being quite buggy. I'm not sure it is worth it versus just having 4 fixed outputs:
- terrain
- contour lines
- water surface
- water flowlines
Not sure about the current way of handling point cloud output either. Currently we are only displaying it, but it's not selectable downstream nor bakeable. Probably we should leverage Tarsier's code to spit out proper geometry and still maintain speed.
The biggest outstanding issue for now would be to test the current setup with Kinect for Windows (don't have access here) and figure out why WFOV is not working.
I ran some tests at home to check the z heights at various points in the field of vision from a 1.5 m distance to the objective, and the z displacement is spot on. Relative distances are great, 1 to 3 mm (mainly from jitter); the absolute values vary slightly more than this and get worse the greater the distance from the camera (no surprise there). I will repeat these tests in the box next time I am at UNI to confirm. @mariuszhermansdorfer great work!
@philipbelesky, @BarusXXX, this is how far I've come before my trip to Austria. The most recent commit is adding variable output parameters depending on the geometry type created (currently only setup to work with the water plane). It is a lot of hassle and still ends up being quite buggy. I'm not sure it is worth it versus just having 4 fixed outputs:
- terrain
- contour lines
- water surface
- water flowlines
Not sure about the current way of handling point cloud output either. Currently we are only displaying it, but it's not selectable downstream nor bakeable. Probably we should leverage Tarsier's code to spit out proper geometry and still maintain speed.
The biggest outstanding issue for now would be to test the current setup with Kinect for Windows (don't have access here) and figure out why WFOV is not working.
WFOV fails here due to the size of the verticalTiltCorrectionLookupTable array still being the NFOV size. The reset procedure in SolveInstance needs to be triggered automatically when the Kinect Type is changed by the user. Currently, changing the dropdown injects the Kinect settings without disposing the sensor. The source of the main issue, the sensor being in a disposed state when switching to WFOV, is that calibration.TransformTo3D returns a null for the translation vector, and that clause triggers the catch block to dispose of the sensor here.
@mariuszhermansdorfer in a nutshell regarding the WFOV issue:

Source of the issue: TransformTo3D returns a null if a point is invalid, and we have invalid points in the corners in WFOV.

Soln 1: Handle the nulls by returning an (X, Y, 0) vector, where X, Y are the respective positions of the mapped 2D pixels, using the elevation to place the scaled pixels in the corners. I have part of this solution in my unpublished changes.

Soln 2: Handle nulls by collapsing mesh faces that are nulls (cleans the mesh).
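A minimal sketch of Soln 1 against the Azure Kinect SDK's Calibration.TransformTo3D (which does return a nullable Vector3); the array and index names are placeholders, and the exact fallback math would need the elevation scaling described above:

```csharp
// Sketch only: lookupTable, i, x, y are hypothetical names.
// Calibration.TransformTo3D(Vector2, float, CalibrationDeviceType, CalibrationDeviceType)
// is the real Microsoft.Azure.Kinect.Sensor API and returns Vector3?.
Vector2 pixel = new Vector2(x, y);
Vector3? translation = calibration.TransformTo3D(
    pixel, 1f, CalibrationDeviceType.Depth, CalibrationDeviceType.Depth);

if (translation.HasValue)
    lookupTable[i] = translation.Value;
else
    // Invalid corner pixel in WFOV: keep the grid position rather than aborting,
    // so downstream code can decide what to do with the vertex.
    lookupTable[i] = new Vector3(pixel.X, pixel.Y, 0f);
```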
@BarusXXX, thanks for pinning it down!
the reset procedure in SolveInstance needs to be triggered automatically when the Kinect Type is changed by the user. Currently changing the dropdown injects the Kinect settings without disposing the sensor.
fixed here: https://github.com/mariuszhermansdorfer/SandWorm/blob/dfd08bf5b25db3c28b07fcb660255200a9f39555/SandWorm/ComponentsUI/SandWormComponentUI.cs#L183-L186 and here: https://github.com/mariuszhermansdorfer/SandWorm/blob/dfd08bf5b25db3c28b07fcb660255200a9f39555/SandWorm/Components/SandWormComponent.cs#L138-L144
Source of the issue: TransformTo3D returns a null if a point is invalid and we have invalid points in the corners in WFOV. Soln 1: Handle the nulls by returning an (X, Y, 0) vector where X, Y are the respective positions of the mapped 2D pixels, using the elevation to place the scaled pixels in the corners.
I have part of this solution in my unpublished changes:
Could you please share the code? It seems that the moment TransformTo3D encounters an invalid pixel it gives up entirely rather than gently returning a null and moving on to the next one...
Fixed WFOV support. We can keep track of which vertices are read as null here:
and use this information while setting up the mesh, to only create faces which have all 4 valid pixels: https://github.com/mariuszhermansdorfer/SandWorm/blob/93aa83581c6c3e803775a029461012cc36b87cc8/SandWorm/Utilities/Core.cs#L33-L35
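The face-validity check boils down to something like this (a sketch with hypothetical array names; Mesh.Faces.AddFace is the real RhinoCommon call):

```csharp
// Build quads over the depth grid, skipping any face with an invalid corner.
// valid[] is a hypothetical bool array filled while reading the depth frame.
for (int y = 0; y < rows - 1; y++)
{
    for (int x = 0; x < columns - 1; x++)
    {
        int a = y * columns + x;        // top-left
        int b = a + 1;                  // top-right
        int c = a + columns + 1;        // bottom-right
        int d = a + columns;            // bottom-left

        if (valid[a] && valid[b] && valid[c] && valid[d])
            mesh.Faces.AddFace(a, b, c, d);
    }
}
```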
It's interesting to see what design choices Microsoft's hardware department made - one can see that the FOV doesn't form a perfect circle and is cut off in the top and left corners respectively. This is consistent with how the Azure Kinect Viewer displays information, hence I don't think it's a bug on our side.
Currently, trimming off rows and columns requires hitting the reset button for changes to take effect; I'll fix this in a commit tomorrow.
Added support for RGB analysis:
Narrow FOV:
Wide FOV:
The smaller overlap of RGB values with the depth image in the WFOV is consistent with Kinect's documentation:
There is a slight performance penalty associated with transforming the RGB values to the depth space but overall it's fairly negligible. This should open up possibilities of using QR codes, or colorful beacons to guide model creation as already explored by @philipbelesky.
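For reference, the color-to-depth mapping uses the SDK's Transformation class along these lines (usage approximate, not necessarily how SandWorm wires it up):

```csharp
// Calibration.CreateTransformation() and Transformation.ColorImageToDepthCamera(Capture)
// are real Microsoft.Azure.Kinect.Sensor APIs; both results are disposable.
using (Transformation transformation = calibration.CreateTransformation())
using (Image colorInDepthSpace = transformation.ColorImageToDepthCamera(capture))
{
    // colorInDepthSpace has the depth image's resolution, so each RGB value
    // lines up with the corresponding depth pixel / mesh vertex.
}
```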
@philipbelesky, @BarusXXX Kinect for Windows is now fully supported as well. I suspected that the depth data would also have some barrel distortion, which we never corrected for. Now it's taken care of. We get a butterfly-like shaped result consistent with Kinect Azure:
Also here, the RGB and the depth cameras don't overlap fully:
Replacing our former logic with the new one made it not only more accurate, but also slightly faster. On my laptop, I'm consistently getting sub 10ms results with full resolution and sub 20ms with a reasonable amount of contour lines displayed.
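I don't know the exact code used here, but with the official Kinect for Windows SDK the distortion correction can come for free from the factory lens intrinsics; a sketch of that route (not necessarily what SandWorm does):

```csharp
// CoordinateMapper.MapDepthFrameToCameraSpace is the real Kinect v2 SDK API;
// depthData is a hypothetical ushort[] holding one depth frame.
CoordinateMapper mapper = kinectSensor.CoordinateMapper;
CameraSpacePoint[] cameraPoints = new CameraSpacePoint[depthData.Length];
mapper.MapDepthFrameToCameraSpace(depthData, cameraPoints);

// cameraPoints[i] is the undistorted metric (X, Y, Z) for depth pixel i,
// with the barrel distortion already accounted for by the mapper.
```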
Moved the calibrate function from an external button to a check box in the Sensor tab of the UI, so that it is less prone to accidental triggering. Unintended use of the calibration function can potentially have undesired consequences, and I never really liked having these two so close to each other.
Now, users need to first explicitly check the corresponding box and then hit the reset button to confirm that they really want the plugin to set the Sensor elevation value for them. This is explained in the corresponding tooltip.
The underlying functionality still hasn't been implemented, so for now this is only a UI tweak.
@BarusXXX, @philipbelesky, as you know, I've recently completed another Grasshopper project - GreenScenario. It uses custom UI elements to flatten the learning curve for newbies. Now, I'd like to adopt the same UI for SandWorm. Here are the basic elements in collapsed (left) and expanded views:
My suggestion would be to go with the following structure:
Traditional Grasshopper Input Parameters:
Group: Sensor
Group: Analysis
Group: Post-Processing
Traditional Grasshopper Output Parameters:
Doing this would allow us to merge three components into 1 (Options, Mesh, Point Cloud) and have a plug & play setup, which shouldn't present a barrier to beginner users. I've left out the TickRate and KeepFrames parameters. Would you like to keep them, and if so, why? Am I missing any other functionality? Would this change break any of the existing workflows with Fologram or color markers?