Closed ghost closed 7 years ago
Much of Stargate Atlantis was on zero point finding :)
Stargate Atlantis ! ZPM !
http://www.stargate-sg1-solutions.com/wiki/Zero_Point_Module_(ZPM) - well, we just found the module name for that item then
Damn it, now I want to rewatch SG :(
Hmm, we could start using https://github.com/peterbraden/node-opencv with the tweaks needed to make it work on OSX (UVC patching... I've dabbled in that on the https://github.com/bqlabs/ciclop project). A serious programmer could be required; @Jesus89, are you there?
And by the way, I already bought this thingie 🍭
This could be of use in UVC streaming / OMR:
Support for a basic USB camera, as is in LW3, is what we are looking for. We have been exploring the glfx library to perform the image manipulations needed to de-skew our camera image and so far it looks promising.
It already uses canvas and glfx, with perspective distort enabled.
Nice! I didn't even know it was that far along! So all that's missing is overlaying (underlaying) it behind the 3D grid? (Emblaser's application of the camera is to help position jobs using an overhead camera in the machine's lid when opened upright.)
Yes, I'm sure @tbfleming could overlay into the workspace :)
Underlay will be easy. Overlay would require a reliable background color to key transparency. Will I get a rectangular bitmap which exactly covers the machine bounds? Or a rectangular bitmap and coordinates for the 2 corners?
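Keying an overlay against a reliable background color, as described above, boils down to a per-pixel alpha test. A minimal CPU-side sketch (function name and the squared-distance tolerance scheme are assumptions, not LW code):

```javascript
// Make pixels near a reference background color transparent by zeroing
// their alpha. `pixels` is a flat RGBA Uint8ClampedArray, e.g. the
// `.data` of CanvasRenderingContext2D.getImageData().
function chromaKey(pixels, keyR, keyG, keyB, tolerance) {
  for (let i = 0; i < pixels.length; i += 4) {
    const dr = pixels[i] - keyR;
    const dg = pixels[i + 1] - keyG;
    const db = pixels[i + 2] - keyB;
    // Squared color distance against squared tolerance avoids a sqrt per pixel.
    if (dr * dr + dg * dg + db * db <= tolerance * tolerance) {
      pixels[i + 3] = 0; // fully transparent
    }
  }
  return pixels;
}
```

In practice a GPU shader would do the same test per fragment; the logic is identical.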
@tbfleming We only need underlay. It will be used for material placement and/or design alignment
@jorgerobles How can we access the glfx parameters to create our own specific required de-skew / pinch setup?
@DarklyLabs it's currently "hardcoded" (as it is an inner component property), but draggable points like the glfx demo could be done.
Can the params end up in a machine profile?
@openhardwarecoza For sure! Any setting could be saved as a profile. Another thing (to think about) is to unlock some capabilities based on the machine profile, but that would tighten things up and cap the generic case (unless developed as a vendor plugin, but that's another issue).
With the latest commit I've improved the video overlay and added warp dots (trekkies, behold!) to develop the @DarklyLabs requirements, but it would be better if we had some webcam video or images of what we want to achieve. That could be saved like any other setting.
@jorgerobles Thank you for your development on this. We will need to apply a number of distortions to our image to achieve what we need. The camera is at 45 degrees to the workspace and also has a very wide-angle lens.
What is the best way we can work with you on this? Supplying you an image of the workspace?
There is also the situation where users have more than one camera attached to their computer, i.e. a built-in camera and the machine camera. Chrome does let you select the camera, but it would be simpler and neater for customers to be able to select which camera is used as an option in LW.
@DarklyLabs Yes, a capture or two of the workspace with the webcam could help, along with maybe the final effect you want to get. That would give me a better idea of the camera position and transformations.
As for choosing the camera from inside LW, I don't know if it's possible (yet! https://simpl.info/getusermedia/sources/). Maybe the Electron build could be capable of it.
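For reference, camera selection in the browser goes through the standard `MediaDevices.enumerateDevices()` API that the simpl.info demo uses. The filtering step can be sketched as a pure function (helper name is hypothetical):

```javascript
// Keep only camera inputs from an enumerateDevices() result, so a UI
// can offer them in a dropdown keyed by deviceId.
function videoInputs(devices) {
  return devices.filter((d) => d.kind === 'videoinput');
}

// In the browser (not runnable in Node):
// navigator.mediaDevices.enumerateDevices()
//   .then((devices) => console.log(videoInputs(devices)));
// A chosen deviceId is then passed to getUserMedia via
// { video: { deviceId: { exact: chosenId } } }.
```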
Well, it now has the capability to select which device to use, plus a second set of warp dots to select Before (rects) and then After (dots) for the perspective transformation.
Refactored `toolUseVideo: boolean` to `toolVideoDevice: null|deviceId`. Added `toolVideoPerspective: null|{before:[8], after:[8]}` to settings, so it is saveable with profiles.
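The refactored settings shape described above could be sketched like this (field names come from the comment; the defaults and the helper function are assumptions, not the actual LW code):

```javascript
// Camera-related settings as described: a device id (or null when no
// camera is in use) and an optional perspective mapping of four corner
// points before/after (8 numbers each: x0,y0 ... x3,y3).
const defaultVideoSettings = {
  toolVideoDevice: null,      // null | MediaDeviceInfo deviceId string
  toolVideoPerspective: null, // null | { before: number[8], after: number[8] }
};

// Hypothetical helper: return a new settings object with the perspective
// quad set, leaving the original untouched (profile-save friendly).
function withPerspective(settings, before, after) {
  return { ...settings, toolVideoPerspective: { before, after } };
}
```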
@DarklyLabs I dabbled with the controls, and by moving the transformation points you can also handle rotation :)
@jorgerobles Great work. I have attached a pic of our typical workspace. The paper grid is the work area we need to de-skew and present under the viewport grid.
You can see that there is some lens distortion that needs compensating for, along with the perspective.
You guys might want to take a look at OpenPnP; they do this already and it works pretty well. There's probably some code to reuse there.
@arthurwolf Wow, that looks very impressive. Especially the automatic calibration feature as seen here: https://www.youtube.com/watch?v=LNa2LNSpa68
Thank you @arthurwolf, It's worth a look (or a dozen more :smile:)
BTW, that's OpenCV (I mentioned it a while ago). There's a compilation of OpenCV for Node, so it must be done "server-side". There's a collateral problem with OpenCV on OSX and UVC devices that I don't know if it has been resolved (I found a patch for Python; I need to learn C++ module compiling for Node this year). @DarklyLabs thank you for the image. We can work on it in the meantime.
@tbfleming I found this https://github.com/dminor/webgl-lens/blob/master/index.html that seems to match your GL wizardry. Could the fishEye be reversed?
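Reversing a fisheye amounts to inverting the radial distortion. A minimal sketch of the standard single-coefficient barrel model (not necessarily the exact formula webgl-lens uses), with the inverse solved by fixed-point iteration:

```javascript
// Single-coefficient radial distortion. Coordinates are normalized so the
// image center is (0, 0). k > 0 pushes points outward (barrel).
function distort(x, y, k) {
  const r2 = x * x + y * y;
  const s = 1 + k * r2;
  return [x * s, y * s];
}

// Invert the model: solve x * (1 + k * (x^2 + y^2)) = xd for (x, y)
// by fixed-point iteration; a handful of iterations converges for small k.
function undistort(xd, yd, k, iterations = 10) {
  let x = xd, y = yd;
  for (let i = 0; i < iterations; i++) {
    const s = 1 + k * (x * x + y * y);
    x = xd / s;
    y = yd / s;
  }
  return [x, y];
}
```

A shader version would apply the same per-pixel coordinate remapping when sampling the video texture.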
@jorgerobles Looks promising. Can this code be included so we basically supply the parameters as shown in the demo to correct for the lens?
Please note, we still need the current perspective adjustments you have in there.
@DarklyLabs yes, I'm trying to adapt it, but I'm not WebGL savvy. The effects are stackable and run on the GPU, so it should be fluid enough.
@tbfleming, can you take a look at the GL thingies? I'm stuck at using glfx with this library. I don't know how to make glfx play nice with other WebGL effects (something related to useProgram). Maybe it's better to ditch glfx, as it's not very active. I found this as an alternative (http://jsfiddle.net/rjw57/A6Pgy/). Broken implementation at https://github.com/jorgerobles/laserweb4/tree/WIP_fisheye
@jorgerobles My queue is full right now. Any library that takes images in and produces images out should be ok, as long as it doesn't try to take over the UI. If they use WebGL internally, then their WebGL code should create its own WebGL off-screen context; it won't need to hook into our WebGL code.
One thing to look out for: lots of WebGL code (even ReGL) doesn't dispose of the main WebGL context properly, causing Chrome to have issues. The WebGL spec was a little short-sighted. They hid the proper way to dispose inside a poorly-documented extension meant for another purpose: WEBGL_lose_context.
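The disposal path described above can be sketched with the `WEBGL_lose_context` extension (the wrapper function name is an assumption; the extension and its `loseContext()` method are the standard API):

```javascript
// Explicitly release a WebGL context's GPU resources. Returns true if the
// WEBGL_lose_context extension was available and used.
function releaseContext(gl) {
  const ext = gl.getExtension('WEBGL_lose_context');
  if (ext) {
    ext.loseContext(); // tell the browser to tear down this context now
    return true;
  }
  return false; // extension unavailable; nothing more we can do
}
```

Taking `gl` as a parameter also makes the helper testable outside a browser with a stub object.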
@tbfleming ok then. I will try to look for alternatives, but as I have no idea about WebGL, any help will be welcome when you get free.
@jorgerobles I was trying to fix the WebGL crash issue. I think creating a new texture context each time forces it to lose the context. Solution: define the texture above the capture method that is polled each time, and that should fix the issue.
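The fix described above (allocate the texture once, only upload new frames into it) could be sketched like this (names are assumed, not the actual LW code):

```javascript
// Reuse one texture across frames: create and configure it on first use,
// then only re-upload the video frame each poll instead of allocating a
// new texture per frame.
let videoTexture = null;

function uploadFrame(gl, video) {
  if (!videoTexture) {
    videoTexture = gl.createTexture(); // created once, reused every frame
    gl.bindTexture(gl.TEXTURE_2D, videoTexture);
    // NPOT-safe parameters for video textures in WebGL 1.
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  }
  gl.bindTexture(gl.TEXTURE_2D, videoTexture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
  return videoTexture;
}
```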
@tbfleming I've set up a test repository at https://github.com/jorgerobles/ReglTests with the Barrel and Perspective migrated to REGL. This is as far as I can get.
Help wanted:
After a lot of pain we have a functional Barrel Distort / Perspective correction. It uses REGL :sob: but performs well. Todd, take a look whenever you can; I'm sure it could be heavily optimized.
@jorgerobles Just did a quick test. Looks great! Thank you for the hard work. We will test it and let you know how it goes.
@tbfleming Understand you are extremely busy. Any eta on when we can see the camera underlaid in the main viewport?
Thank you @DarklyLabs. I have to do further tests; it seems the perspective distort needs tweaking.
Definitely, something is wrong with the perspective distort. I adapted it from glfx and something slipped through. :|
I just tried it. The video device dropdown is empty. My cam works on https://simpl.info/getusermedia/sources/
@tbfleming moving DEV chat to https://github.com/LaserWeb/ReglTests/issues/1
This feature is working, but maybe it's not enough. The image output is poor due to the lack of full mipmapping on WebGL.
@jorgerobles How can we test this?
@DarklyLabs by using the controls in the LW4 UI (they will be improved, for sure). I've put a test bench repo at https://github.com/LaserWeb/ReglTests. Please download it and run the installation procedure. Tell me if it works properly or if more instructions are needed.
@jorgerobles pretty cool!
For my camera, I think F may need its current far-right value to be the center point of the scale, with the opposite direction added too, if that's possible.
I.e. at full right on the F slider, I wish I could go just a little further. Hope that makes sense.
a & b might need smaller steps, if that's possible.
@openhardwarecoza the barrel is based on https://github.com/jywarren/fisheyegl, and it seems to have those limits. (I wish I could fully understand that lens math, but no 😿.) I've tweaked the CSS to make the sliders wider, so they can be finely positioned. I've also tweaked F a bit to get more values, but it only does weird stuff. I was hoping @tbfleming could take a look when he gets free (WebGL optimization and so on). He seems very busy these days.
OpenCV is known to have some serious algorithms for lens correction, and http://lensfun.sourceforge.net/calibration/ too, but none of them are JS.
@DarklyLabs Did you test with the test app? How is it working? Any workable result?
@jorgerobles Back at the lab today. Will test and advise. Thanks!
@jorgerobles Our initial tests look very promising! Very well done. I have two questions: 1: Is it possible to expose numerical values for the perspective settings? The drag controls are perfect for getting into the ballpark but are difficult to fine-tune precisely. 2: Is there any control over the image resolution that is captured?
@jorgerobles Also, FovX and FovY appear to have no effect on the image. They can be removed I suspect.
Action: Port / Rewrite functionality from LW3 + Enhancement
Present Source: https://github.com/LaserWeb/LaserWeb3/blob/master/public/js/viewer.js#L40-L81 and https://github.com/LaserWeb/LaserWeb3/blob/master/public/js/viewer.js#L434-L490
Desired result: Top-down bed view to line up operations, but also investigate functionality with USB microscope webcams for edge detection / stock sizing / zero point finding