Abantech / Efficio


Find angles between rays formed by points in 3-dimensional space #1

Open · GMelencio opened this issue 7 years ago

GMelencio commented 7 years ago

My problem is rather challenging as it deals with points that are in 3-dimensional space.

I am providing an image and the problem setup described in the text below. Please note that, in the figure, points A, B, C and D are drawn as cubes for illustrative purposes only. They are actually points represented as coordinates on the x, y, and z axes.

Problem Setup:

[image: problem setup showing points A, B, C and D inside the cuboid]

  1. Let A, B, C & D represent points in 3D space
  2. Let there be a cuboid of length L and width W such that L and W are just long enough and just wide enough to contain points A, B, C, and D in 3D space
  3. The distance between any two points will always be greater along L than along W
  4. Let there be a ray R that traverses the length of and bisects the width of the cuboid
  5. Let V1 be a ray from A to B
  6. Let V2 be a ray from B to C
  7. Let V3 be a ray from C to D
  8. Let T be an arc that can be drawn at an arbitrary radius from B such that arc T extends from V1 to V2 along a plane perpendicular to the direction of ray R
  9. Let S be an arc that can be drawn at an arbitrary radius from C such that arc S extends from V2 to V3 along a plane perpendicular to the direction of ray R

Problem: Find a formula to determine the angle that can be used to draw arc T or arc S.
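
In linear-algebra terms, the angle for arc T (and likewise arc S) is the angle between the projections of V1's and V2's directions onto a plane perpendicular to R. Here is a minimal NumPy sketch of that computation; the function name and the sample coordinates are illustrative only, not taken from the figure:

```python
import numpy as np

def arc_angle_deg(p_prev, p_mid, p_next, r_dir):
    """Angle of the arc drawn at p_mid (e.g. B) between the incoming
    ray (p_prev -> p_mid) and the outgoing ray (p_mid -> p_next),
    measured in the plane perpendicular to ray R's direction."""
    r_hat = np.asarray(r_dir, float)
    r_hat = r_hat / np.linalg.norm(r_hat)
    u = np.asarray(p_mid, float) - np.asarray(p_prev, float)  # V1 direction
    v = np.asarray(p_next, float) - np.asarray(p_mid, float)  # V2 direction
    # Project both directions onto the plane perpendicular to R
    u_p = u - np.dot(u, r_hat) * r_hat
    v_p = v - np.dot(v, r_hat) * r_hat
    # atan2 of (signed sine, cosine) is numerically safer than arccos
    sin_term = np.dot(np.cross(u_p, v_p), r_hat)
    cos_term = np.dot(u_p, v_p)
    return np.degrees(np.arctan2(sin_term, cos_term))

# Illustrative coordinates only - not taken from the figure
A = np.array([0.0, 0.0, 0.0])
B = np.array([4.0, 1.0, 0.0])
C = np.array([8.0, 0.0, 1.0])
R = np.array([1.0, 0.0, 0.0])  # ray along the cuboid's length
print(arc_angle_deg(A, B, C, R))  # angle for arc T at B; ~135 degrees
```

The atan2 form also gives the angle a sign (clockwise vs counter-clockwise about R), which matters when deciding which way to sweep the arc.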

theo-armour commented 7 years ago

@GMelencio

Welcome to the world of linear algebra. Please do open this link and scroll all the way down to the bottom of the page. Think about what you just looked at. Then come back here and tell me what footnote #19 links to.

If you really want a gentle introduction have a look at the Khan Academy site.

I say this because operations in linear algebra seem to morph, distort and come apart when you move something just a teensy bit.

If you really want to get into linear algebra then set aside some years of time with good moments of peace and quiet. Or morph into a math genius. Your choice.

The only way I have been able to solve such problems is by focusing on a tiny, highly specific task and spending four or five days wading through dozens of Stack Exchange responses, blog posts and whatever. And then crossing my fingers that the answer comes before the pain becomes unbearable.

I sort of went through that with this one:

https://jaanga.github.io/terrain3/sandbox/elevations-view-oakland-gran-fondo/

For days the map started spinning wildly at every sharp bend. Zero fun.

So I ask you: are you asking the right question?

Is not Efficio a middleware app that interfaces between devices and apps?

If so, then shouldn't the apps be doing the linear algebra?

You've proved you can hack interface devices and apps. That's a skill few linear algebraists have.

So I suggest that you stick to what you are good at and let Unity, Leap, Holo or whatever do the math.

If you see an action you like in an app, then identify the incoming signals that created that action. Then save those signals and use them again.

It reminds me of the team at Google Translate that built the English <> Chinese translation system - which is a really good system. Of course, the team members can't say that, because none of them speak Chinese. What the team was good at was identifying and cataloging different signals (texts).

In any case, I think what is even more important is coming to terms with what Efficio can and cannot do. What devices can it listen to today, in the short term and in the long term? What apps can it send signals to now, in the short term and the long term?

This should probably be carried out in a Text or Markdown document(s) that multiple peeps can work on, hosted on GitHub.

It might also be a good thing to do some research on product specification templates and processes.

Fingers crossed the spec we arrive at says we don't have to learn linear algebra.

GMelencio commented 7 years ago

Thanks for your thoughts on this, Theo. First off, footnote #19 is Matrix Algebra for Beginners - Part 1.

Now, I hear what you are saying and I appreciate that you are cautioning us against burdening ourselves with undue difficulty.

The question I am posing is this: should we be relying on the devices (or the manufacturers thereof) to tell us what constitutes an action we ought to recognize?

> Is not Efficio a middleware app that interfaces between devices and apps? If so, then shouldn't the apps be doing the linear algebra?

> I suggest that you stick to what you are good at and let Unity, Leap, Holo or whatever do the math.

The devices and software already do the lion's share of the math for us, but the problem is that they don't go far enough. We would be relying on "bonus features" of the devices (i.e. their built-in gesture detection routines) to trigger actions in the downstream application(s).

Were we to go this route, our job would be much easier, but we would add little to no value, as we would be limiting our capability to make use of the data we get from the devices.

It doesn't say much for our device independence if we rely on their "baked-in gesture recognition".

Here's the scenario I'm guarding against: if Leap Motion decides to recognize a "pinch" as two fingertips coming together such that the fingertips only need to touch, but the Intel RealSense requires that the fingertips touch AND the DIP joints are bent at an angle of at least 20 degrees - then what happens to our ability to recognize what a "pinch" is?

Please note that the scenario above is simplified for the sake of brevity - perhaps even oversimplified, as it doesn't require linear algebra to figure out. The point is that we would have to rely on the device telling us "Hey, the user is doing ____", but the set of things it can tell us is limited to what the device itself recognizes.
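
For illustration, a device-independent check built on raw joint data might look like the hypothetical sketch below; the function name, thresholds and inputs are all invented for this example:

```python
import numpy as np

def is_pinch(thumb_tip, index_tip, index_dip_angle_deg=None,
             touch_mm=15.0, min_dip_bend_deg=20.0):
    """Hypothetical device-independent pinch test built on raw joint
    positions instead of a vendor's built-in gesture event.
    Positions are xyz coordinates in millimetres."""
    gap = np.linalg.norm(np.asarray(thumb_tip, float) -
                         np.asarray(index_tip, float))
    touching = gap <= touch_mm
    if index_dip_angle_deg is None:
        return touching  # looser, Leap-style definition
    # Stricter, RealSense-style definition: also require the DIP bend
    return touching and index_dip_angle_deg >= min_dip_bend_deg
```

Once we define "pinch" ourselves on top of joint positions, it stops mattering whose recognizer ships with the device.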

What if Tom Logan (or someone else) asks us to print letters on the screen based on the sign language alphabet? A reasonable request - yet no device out there can help us recognize it. What approach could we implement to distinguish between the letters "A", "E", "S" and "T" - even with just the Leap Motion?
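
I'm not claiming this is solved, but the kind of approach I have in mind would reduce each hand pose to a numeric feature vector (e.g. fingertip-to-palm distances, which differ between the closed-fist letters A, E, S and T) and match it against recorded templates. All names and data here are hypothetical:

```python
import numpy as np

def hand_features(fingertips, palm):
    """Reduce a hand pose to fingertip-to-palm distances (one number
    per finger). Inputs are xyz positions from any device."""
    palm = np.asarray(palm, float)
    return np.array([np.linalg.norm(np.asarray(t, float) - palm)
                     for t in fingertips])

def classify_letter(features, templates):
    """Nearest-neighbour match against previously recorded template
    feature vectors, one per letter (e.g. {"A": ..., "E": ...})."""
    letters = list(templates)
    dists = [np.linalg.norm(features - templates[k]) for k in letters]
    return letters[int(np.argmin(dists))]
```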

I'd have to explain to the customer that our device independence can only go so far - is that "loving the customer" (as The Lean Startup preaches)? "You can use any device. Oh, but with Device A you can do X and Y. With Device B, you can only do X."

> If you see an action you like in an app, then identify the incoming signals that created that action. Then save those signals and use them again.

If I'm hearing you correctly, I think that between devices that's a very tall order - possibly one that would require a whole 'nother startup - so I must be misunderstanding you.

Though based on what I currently understand from your statement: To implement this we'd need an extremely rapid and scalable data warehouse to capture user action data to start. Then we'd need to use fuzzy logic in data mining to "guess" what the next action is, all while putting that in a feedback loop to make it smarter. We're talking machine learning. I'd happily work with Google on that if they'd like to take it on.

> It reminds me of the team at Google Translate that built the English <> Chinese translation system - which is a really good system. Of course, the team members can't say that, because none of them speak Chinese. What the team was good at was identifying and cataloging different signals (texts).

Our case is not the same as translating between languages, and this actually supports my counterpoint: with language there's a finite set of words, and all of them are processed as text - that is, the medium used to "feed" the corpora is the same. The least common denominator among devices is that they give us joint positions (and in some cases other things like bone orientation, facial expression, etc.).

I hope you don't take my comments to be dismissive of your response. For what it's worth I would have had the same initial objection: I do NOT want to re-invent the wheel.

However, to have something truly useful and device independent, we need something that can be applied universally. Until an API or piece of software exists that we can use to query hand movement, I'm not sure how else we can do it.

> Fingers crossed the spec we arrive at says we don't have to learn linear algebra.

Ah, but the beauty of my question is this: I feel there's a very small set of problems we need to solve. In fact, I can't (currently) think of any situation that we'd need linear algebra to figure out after this one.

I sincerely believe that once we've solved THIS problem, everything else is just coding functions that re-use the same underlying method to identify any hand gesture a device can "see".

Once I have a way to find the answer to the problem posed, I can use the same approach to recognize all kinds of hand gestures. That part will be tedious work, but with the solution requested it will be doable.

theo-armour commented 7 years ago

@GMelencio

If Leap Motion and Microsoft and others are having trouble analyzing gestures then Abantech is also going to have trouble recognizing gestures.

If there were one small set of problems to solve then we would not need linear algebra. The fact that linear algebra, robot simulators and inverse kinematics exist is proof enough that gesture recognition is not just a small set of problems.

Even if the math is solved, and the recognition is solved, you still have the issues of all the devices and all the apps. Even if you had two million dollars, you would not even begin to know how to hire peeps who would not just be there to draw salaries.

If Abantech is going to find a solution, then it is going to be by some clever trick that builds upon the current skills of hacking demos and processing messages.

The hack could be something to do with image processing on the raw data coming out of the devices.

Or perhaps the rawest skeletal data produced by the devices could be looked at.

In either case, the data probably should be looked at as streams of statistical data - streams of numbers where you look for observable patterns.
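
As a toy example of treating the output as a plain numeric stream (the windowing scheme and threshold here are arbitrary choices of mine, not a recommendation):

```python
import numpy as np

def moving_stats(stream, window=30):
    """Rolling mean and variance of one joint coordinate, treated as a
    plain numeric stream - a crude way to spot bursts of movement
    without any linear algebra."""
    s = np.asarray(stream, float)
    kernel = np.ones(window) / window
    mean = np.convolve(s, kernel, mode="valid")
    var = np.clip(np.convolve(s**2, kernel, mode="valid") - mean**2,
                  0, None)
    return mean, var

# e.g. flag frames where a fingertip coordinate suddenly gets "busy":
# mean, var = moving_stats(xs)               # xs: hypothetical recording
# event_frames = np.where(var > threshold)[0]
```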

To put it another way: there are a lot of people - with lots of PhDs - out there today using linear algebra to try to figure out gestures. Without a lot of success.

If anybody can come up with a different way of tackling the problem, they are likely to gather a lot of attention. Even a different way of visualizing the raw data would be useful.

My stepfather was a very good sailor and he won a lot of races. One of his tricks: when he found himself in the middle of the pack and all the boats were going in the same direction upwind, he tacked and went in the other direction - very often finding a good wind the other boats did not have.

We, too, need to find that good wind the others don't have...

BTW, the answer to your post is here: BVH Reader