jorgerobles opened this issue 7 years ago
@openhardwarecoza Another way to implement a placement tracker could be with a reflectance sensor matrix https://es.aliexpress.com/store/product/10PCS-lot-QRE1113-new-in-stock/125218_32707304995.html --- maybe an i2c board with a PCF7485?
Personally, I'd go for a camera approach, since it doesn't involve licking anyone to get firmware support for the IR (;. More under our control
FYI, as part of the Smoothie project, we have a sub-project we are working on that is related to that feature (not named yet). It's essentially a program that would run on a Raspberry Pi (or Pi Zero) and offer a web server (AJAX API) that allows other programs (like LaserWeb, Visicut or Fabrica) to take pictures (via one or more webcams plugged into the Pi) and have OpenCV operations done on them (like stitching of mosaics, correction of fisheye pictures, corner/edge finding). The whole assembly would be mounted on the head of the fabrication machine. We are actually paying a consultant to work on this project and plan to release it both as a fully open-source project and as a ready-to-use product.
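To make the architecture above concrete, here is a minimal sketch of how a host program might build requests against such a camera server. The endpoint names and parameters are invented for illustration; the real API had not been published at the time of this thread.

```javascript
// Sketch only: the '/api/<operation>' scheme and parameter names are
// hypothetical, since the Smoothie camera sub-project had no published API yet.
function cameraRequestUrl(host, operation, params = {}) {
  // operation could be something like 'capture', 'stitch', 'undistort',
  // or 'find-corners', matching the OpenCV operations described above.
  const query = Object.entries(params)
    .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`)
    .join('&');
  return `http://${host}/api/${operation}` + (query ? `?${query}` : '');
}

// A host like LaserWeb would then fetch() such a URL, e.g.:
// fetch(cameraRequestUrl('192.168.1.50', 'capture', { cam: 0 }))
```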
The final product would include:
Users would then be able to just install this on the head, tell their favorite host program what its IP is, and that host program would then be able to use it to:
Just thought I'd mention this, as you guys seem to be thinking about similar problems, and maybe knowing this is in the works might have some importance.
That's really good news. And it seems a full-blown contraption :)
http://recordit.co/JmShd46wkk ... not bad for being JS
@jorgerobles recordit doesn't do it justice! The low FPS is because of recordit. Trying it live with my own camera, I was stunned. That multimarker demo recognizes over 40 targets in real time!
Yes! The Earth demo (estimating position) is slower, but I think it could work decently. I need to run a test with a shape and check the deviation from a manually aligned work. If that works well enough, it could be gold, at least with a diode cutter. A perfect setup could rotate all the artwork in LW to match a paper marker. :D
https://trackingjs.com (e.g. https://trackingjs.com/examples/color_camera.html) is also nice and fast. You gave me a new bug: computer vision was always out of reach, but now that it's JS it's within my skill level at last! Don't know what I want to do with it, but I know I want to use it for something
😈
I've done some alignment tests with js-aruco and they are promising. My setup:
Actually the tests are very basic:
- Positioned the camera over the marker.
- Moved the material until an acceptable position was reached (x0, y0, pitch, roll, etc.)
- Ran several tests to calculate the camera offsets (a couple of them found -20.3 mm in my setup)
- Run!
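The camera-offset step above can be sketched as a two-position calibration: jog until the tool (laser dot) sits on the marker and note the machine position, then jog until the camera image is centered on the same marker and note that position; the difference is the camera-to-tool offset. A minimal sketch (function and variable names are mine, not LaserWeb's):

```javascript
// Camera-to-tool offset calibration, assuming two manually jogged positions:
//   toolPos: machine position with the TOOL exactly on the marker
//   camPos:  machine position with the CAMERA centered on the same marker
// The offset to add to camera-derived coordinates is toolPos - camPos.
function computeCameraOffset(toolPos, camPos) {
  return { x: toolPos.x - camPos.x, y: toolPos.y - camPos.y };
}

// Example: tool on the marker at X100, camera centered on it at X120.3,
// giving an X offset of -20.3 mm like the value found above.
const offset = computeCameraOffset({ x: 100, y: 50 }, { x: 120.3, y: 50 });
```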
Caveats:
But hey, I've got no Unicorn, but a stubborn Donkey!
Closer!
So, trying not to derail the UI even more, and to make this useful, I need some advice.
I plan to add these settings to the camera settings folder:
Intended usage:
What do you think?
> offset [x,y]: will affect the gcode generation offset.
I'm uncomfortable with this. I thought mark recognition was to aid setting zero.
@tbfleming Would it be better to run a G92 X{-xoffset} Y{-yoffset}? I mean, that would be better of course :smile:. Is that the way to proceed?
I suspect G92 may cause confusion. @cprezzi ?
Some sci-fi could include applying the detected transform to the document :smiley:
Hmmm. Do you plan on doing rotate? Unfortunately grbl can't do that using offsets. HAAS can, but that's a bit out of reach of our users...
Well, my approach would be to rotate the document in LW. Simplest, maybe. But it is not a must. First things first, it will suffice to do the zero offset.
Since you're going to do rotate, might as well handle it in cam. Maybe a transform2d argument passed into preflight.
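One possible shape for the transform mentioned above is a 2D rigid rotation about a pivot (e.g. the detected marker position), applied to document coordinates before G-code generation. A sketch, with illustrative names only (this is not the actual LaserWeb cam/preflight API):

```javascript
// Build a function that rotates a point by angleDeg about a pivot.
// This is the kind of 2D transform a cam pipeline could apply to the
// whole document to match a detected marker orientation.
function makeRotation(angleDeg, pivot) {
  const a = (angleDeg * Math.PI) / 180;
  const cos = Math.cos(a), sin = Math.sin(a);
  return (p) => ({
    x: pivot.x + (p.x - pivot.x) * cos - (p.y - pivot.y) * sin,
    y: pivot.y + (p.x - pivot.x) * sin + (p.y - pivot.y) * cos,
  });
}

const rot90 = makeRotation(90, { x: 0, y: 0 });
const p = rot90({ x: 10, y: 0 }); // rotates (10, 0) onto roughly (0, 10)
```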
1 marker might not be enough. e.g.:
- Assume only a 1 degree error detecting rotation (I suspect it will be worse)
- You're cutting a 100mm by 100mm rectangle
- Lower-left corner of rectangle is perfectly aligned
- Lower-right corner's y value will be 1.7mm off
Yes, I foresaw that issue. That's why in the first instance rotation will be skipped. A 2/3-point registration could be done, at least registered against the current position, like RepRap manual registration.
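The ~1.7 mm figure in the 1-degree example checks out: a point a given span away from the aligned corner moves by span × sin(angle). A quick check:

```javascript
// Far-corner error caused by an undetected rotation: a point spanMm away
// from the perfectly aligned corner is displaced by spanMm * sin(angle).
function rotationErrorMm(angleDeg, spanMm) {
  return spanMm * Math.sin((angleDeg * Math.PI) / 180);
}

rotationErrorMm(1, 100); // about 1.745 mm, i.e. the ~1.7 mm quoted above
```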
If the goal is stock rotation, why not use the built-in XYZ probing already in all firmwares, and use that to calculate the rotation?
Well, my primary objective is/was to have a decent registration point to make cardboard models :) I haven't made up my mind about other applications so far.
It seems G10 L2 should be used, http://linuxcnc.org/docs/2.6/html/gcode/gcode.html#sec:G10-L10 shouldn't it?
Yes. Refer to the existing set zero implementation.
Anyway, that means it has to be set/reset on job start/end. So...
> offset [x,y]: will affect the gcode generation offset.
There is a big difference between G10 L2 and G92 (and not every firmware handles it the same way):
- G10 L2 P1 sets the offsets only for the G54 coordinate system to the given values.
- G92 sets the actual position to the given values and shifts the offsets of all coordinate systems accordingly.
- The frontend should not send these commands directly. Instead, the setZero command should be sent to the backend, which executes firmware-specific commands.
@jorgerobles How about a button "find mark" on the jog tab, that searches for the mark, moves to the calculated zero position and sends setZero to the backend?
Yes. In order to affect the offset, since the camera is placed away from the tool, could you add some command to set it? :)
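The point about keeping firmware-specific commands in the backend can be illustrated with a small dispatch sketch. The mapping below is illustrative, not the actual lw.comm-server implementation; it only shows why the same "zero here" intent produces different G-code per firmware (Grbl supports G10 L20 for "set current position as the given value in one coordinate system", while a G92-style firmware shifts all systems at once):

```javascript
// Illustrative only: maps a firmware-agnostic setZero intent to
// firmware-specific G-code. Not the real lw.comm-server code.
function setZeroCommand(firmware) {
  switch (firmware) {
    case 'grbl':
      // G10 L20 P1: make the current position read X0 Y0 in G54 only
      return 'G10 L20 P1 X0 Y0';
    case 'smoothie':
      // G92: shift ALL coordinate systems so the current position reads X0 Y0
      return 'G92 X0 Y0';
    default:
      throw new Error(`no setZero mapping for ${firmware}`);
  }
}
```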
Overlapping answers! I think searching for the code is overly complicated. I'd bet on jogging to the marker and, once adjusted, clicking a "set marker" button. Also, as said before, the marker is not really zero. Well, so I imagined; maybe I'm wrong.
I could add a setOffset (or setPosition) command, if that helps.
This way you could just call setPosition with the desired position, calculated from the marker position and the camera offset.
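The calculation described above is a simple vector sum; a sketch (the helper name is mine, only setPosition comes from the discussion):

```javascript
// Combine the marker position seen by the camera with the calibrated
// camera-to-tool offset, producing the value to hand to the backend's
// setPosition command (rather than emitting G-code from the frontend).
function markerToToolPosition(markerPos, cameraOffset) {
  return {
    x: markerPos.x + cameraOffset.x,
    y: markerPos.y + cameraOffset.y,
  };
}

// e.g. marker detected at X120.3 / Y50 with a -20.3 mm X camera offset:
const pos = markerToToolPosition({ x: 120.3, y: 50 }, { x: -20.3, y: 0 });
// then: setPosition(pos);
```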
Cool!
But should the offset be restored when the job finishes? If so, some memory mechanism would be needed, either on the frontend or directly in hardware with G10 Pn?
I wouldn't restore the previous value. The process is equivalent to jogging to the stock edge and pressing set zero. That can't be reversed either.
Ok then :)
Implemented in https://github.com/LaserWeb/LaserWeb4/commit/867e5230384f19970810dbc3fb1781b17477ebfd and https://github.com/LaserWeb/lw.comm-server/commit/c88c052f0574acf2f0d3f745ac903910e5fa10a7
The param has to be an object with x, y, z, a (each optional). For example setPosition({x:20, y:10});
OMR branch alive https://github.com/LaserWeb/LaserWeb4/tree/OMR
There's also an emscripten ARToolKit port. Very interesting. I've installed OpenCV for Python to compare performance and precision. Not very different for me right now.
Nice!
JS ARToolKit precision seems far superior 🤤 Now I have to figure out how to use it :D
If you get that building, then you'll also be able to build web-cam-cpp-src :)
Well, I was planning on using the prebuilt JS... Do I need to build it?
Only if you need to modify it.
Before moving to another engine or getting lost in other issues, does anyone have the itch and want to test?
I ordered a camera. It will arrive in about months
LOL!
@tbfleming Looking into ARToolKit as an upgrade to Aruco, I see it returns a float[16] matrix for a detected marker (http://augmentmy.world/artoolkit-distance-between-camera-and-marker). I understand that matrix is relative to the marker size, in order to get a real measure.
Could you help me with the maths to get the yaw, pitch, roll and dimensions from that? I think it will be key to perfect stock alignment...
Matrix is more general than yaw,pitch,roll. I don't think there's a reliable way to go back.
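For reference, a standard (if imperfect) ZYX Euler decomposition does exist for the rotation part of such a matrix; it just breaks down near gimbal lock (pitch near ±90°), which is part of why going back is unreliable. The sketch below assumes the OpenGL-style column-major float[16] layout, where the translation lives in elements 12 to 14 and scales with the physical marker size:

```javascript
// Decompose a column-major 4x4 transform (OpenGL-style float[16] layout,
// as in the ARToolKit article linked above) into ZYX Euler angles plus a
// translation scaled by the physical marker size. Only valid away from
// gimbal lock; the matrix carries more information than three angles.
function decompose(m, markerSizeMm = 1) {
  const pitch = Math.asin(-m[2]);        // -R[2][0]
  const yaw = Math.atan2(m[1], m[0]);    //  R[1][0] / R[0][0]
  const roll = Math.atan2(m[6], m[10]);  //  R[2][1] / R[2][2]
  return {
    yawDeg: (yaw * 180) / Math.PI,
    pitchDeg: (pitch * 180) / Math.PI,
    rollDeg: (roll * 180) / Math.PI,
    // translation is in marker units; scale by the physical marker size
    x: m[12] * markerSizeMm,
    y: m[13] * markerSizeMm,
    z: m[14] * markerSizeMm,
  };
}
```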
😥
Yeah, we have the unicorns back! What about this kind of thing: https://github.com/jcmellado/js-aruco to jog/detect work start? I mean, to precisely warn of material placement in order to cut, for example, previously printed cardboard or so?