Abantech / abantech.github.io

Abantech public repository
http://abantech.net

Device Exploration #7

Open jamesAbantech opened 7 years ago

jamesAbantech commented 7 years ago

All,

I think that we should start to investigate other devices to understand what they can offer us. This thread is to discuss different devices that we've discovered along the way.

theo-armour commented 7 years ago

@Abantech/core

Hot damn! That device between my legs is the most explosive thing ever. ;-)

Thinking out loud:

There's going to be a zillion devices for AR/VR. Developer editions will cost $5,000. And all of them will turn turtle. Whatever.

Let's invent our own device - that only exists in the Matrix. And then clever James makes it talk to meatworld devices.

GMelencio commented 7 years ago

@Abantech/core https://github.com/orgs/Abantech/teams/core

We don't have to buy all the devices, as you've pointed out. There are emulators out there that let us develop for them without owning the physical hardware ourselves.

I agree with James that it's a worthwhile idea to investigate what's out there.

As an after-effect of their turning turtle, we can spot trends among the ones that succeed and determine which devices are most worth our time to focus on.

Greg Melencio CEO and Founder Abantech LLC 571-402-4688


theo-armour commented 7 years ago

@GMelencio

If you just want to indicate that you agree with a post then it's best to use the reaction icons that are available via GitHub. There has been much discussion about peeps writing long messages when just a ++1 would do.

The purpose of a post is always to add new information to the discussion.

@jamesAbantech

There are many many devices out there.

HCI NUI devices

The big issue: You pick the best devices - and then some also-ran walks away as the market winner. ;-(

Perhaps the interesting thing is thinking about being the middleware.

Then you want many data samples from many types of HCI devices.

And you want access to a bunch of API calls.

And your job is to mix and match comms between the two - quite seamlessly - even when they are on different continents.

GMelencio commented 7 years ago

@theo-armour:

The fact that we don't have a skeletal model/standard to follow will become an impediment very soon; we need your help in determining what skeletal model we should subscribe to. BVH seems a bit too verbose, and Mr.doob's implementation is not a widely accepted standard. I'm asking a favor of you: help us determine what we should build our skeletal model against. We'd appreciate any options you can provide based on your knowledge/research. It will be a huge help to us since we're focused on other implementation efforts right now. Could you help us out on this?

Thanks in advance.


theo-armour commented 7 years ago

@GMelencio

This topic is about the other side of the equation - the devices.

Kindly remember that issues - ultimately should be closable - and so staying on topic helps.

So it might be a good thing to open a new issue regarding what happens at the app end.

But regarding the question:

what skeletal model we should subscribe to [??]

Could the answer be: support whatever the device or app wants?

If Unity wants things their way and RealSense wants things another way, should not the Abantech added value be that Efficio passes motions seamlessly?

GMelencio commented 7 years ago

Kindly remember that issues - ultimately should be closable - and so staying on topic helps. So it might be a good thing to open a new issue regarding what happens at the app end.

Understood. We're getting into the groove of things and are still learning, so forgive this final transgression, as this happens to be a rather urgent matter given that James is about to leave and become unavailable for some time. We need help on this badly, and we need at least a start on it within the next 36 hours - a decision if nothing else.

If Unity wants things there way and RealSense wants things another way, should not the Abantech added value be that Efficio passes motions seamlessly?

Not sure if you got what I was asking, but we cannot possibly take all the various ways the devices represent data and somehow magically harmonize them into something that makes sense to the application and end user. This goes against the very definition of what ought to be a 'standard'.

If you're asking whether we should be able to interpret actions (not the baked-in gestures) - ABSOLUTELY. Have a look at the new 'Core' code that James wrote https://github.com/Abantech/Efficio/blob/master/Core/CPP/PinchDetector.cpp and you will see that WE - based purely on the math of the relative positions of one's digits (and NOT the baked-in Leap plugin gestures) - are the ones who figure out what action is being performed by the user. In this case a pinch is defined as two fingers less than 25mm apart. This will work for any device that can detect fingers.
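The device-agnostic pinch idea above can be sketched in a few lines. To be clear, this is my own minimal illustration, not the actual Efficio Core API - the struct and function names are hypothetical; only the 25 mm threshold comes from the thread:

```cpp
#include <cmath>

// Illustrative types, not Efficio code. Assumes the device reports
// fingertip positions in millimetres.
struct Vec3 { double x, y, z; };

double Distance(const Vec3& a, const Vec3& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// A pinch is two fingertips less than 25 mm apart - pure geometry,
// so it works for any device that can report fingertip positions.
bool IsPinching(const Vec3& thumbTip, const Vec3& indexTip) {
    return Distance(thumbTip, indexTip) < 25.0;
}
```

Because the rule is only a distance check, swapping Leap for RealSense or Kinect input should not change the detector itself, just the source of the fingertip positions.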

But it gets much more complex than that as we start to determine where the hand is relative to the face/camera. We need a representation of the human skeleton. Why? The device can be anywhere in the room: the Leap can be velcro'ed onto the back of the head-mounted display, while the RealSense and Kinect can be stationary. So we NEED a skeletal model in order to draw the hands at positions relative to the camera (or 'viewport') correctly - that is, to place the hands in the right place relative to the head and eyes. Can we write our own skeletal model? Of course we can; in fact, for lack of time, we already kind of are doing this. I dread going down this path, as I do not want to re-invent the wheel. If there's something out there that's easy to comply with and that is growing in popularity with the community, I would opt for that - should we not?
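The positioning problem described above is at heart a frame transform: each device reports joints in its own coordinate frame, so placing a hand correctly in the viewer's scene means mapping device-space positions into a shared world/skeleton space using the device's known pose. A minimal sketch, with hypothetical names (this is not Efficio code):

```cpp
#include <array>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;  // rotation matrix

// Hypothetical: where a given device sits in the shared world frame,
// e.g. a Leap strapped to the HMD vs. a Kinect across the room.
struct DevicePose {
    Mat3 rotation;   // device orientation in world space
    Vec3 position;   // device origin in world space
};

// world = R * local + t : rotate a device-local joint position into
// the world frame, then translate by the device's origin.
Vec3 DeviceToWorld(const DevicePose& pose, const Vec3& local) {
    Vec3 w{};
    for (int i = 0; i < 3; ++i) {
        w[i] = pose.position[i];
        for (int j = 0; j < 3; ++j)
            w[i] += pose.rotation[i][j] * local[j];
    }
    return w;
}
```

With something like this, a skeletal model only needs each device's pose relative to the skeleton's root to express every joint in one consistent space.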

To my detriment, I confess I don't do much reading of 3D/NUI/VR articles - and you're right that I may not be able to develop that habit - but I'm asking, based on what YOU know, what you've seen out there, and what your peeps might suggest: is there a skeletal model we ought to use to populate the joint positions returned by the devices?

You've been a huge asset to us in terms of what goes on in 3D/NUI, so I trust your judgement implicitly. Moreover, we are essentially asking you to answer the question answered by the whitepaper you wrote for LeapMotion.

  1. The only difference is we're looking for prescriptive guidance, and at this critical juncture we seek your advice - not unlike the Greeks who sought the Oracle's advice before plunging into battle. What do you advise we should use to accomplish this?


GMelencio commented 7 years ago

@theo-armour: any ideas on the above? I forgot to tag you for the question.

@Abantech/Core: Here's the best I could find among scholarly articles about representing the skeletal structure of a human. It seems BVH, while verbose, is the one that is most widely discussed (and there's sample C++ code for it). I'd opt for SKL, but I'm not sure how much community support there is around it.
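For reference, here is roughly what a minimal BVH file looks like - a hand-written toy example, not taken from any device or library - which shows the verbosity in question: every joint carries an OFFSET and CHANNELS declaration in the HIERARCHY section, and the MOTION section lists one line of channel values per frame (here 6 root channels + 3 spine channels = 9 values):

```
HIERARCHY
ROOT Hips
{
    OFFSET 0.0 0.0 0.0
    CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
    JOINT Spine
    {
        OFFSET 0.0 10.0 0.0
        CHANNELS 3 Zrotation Xrotation Yrotation
        End Site
        {
            OFFSET 0.0 10.0 0.0
        }
    }
}
MOTION
Frames: 1
Frame Time: 0.0333333
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
```

A full-body rig multiplies this out to dozens of joints and a long row of floats per frame, which is where the verbosity complaint comes from.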


theo-armour commented 7 years ago

@GMelencio

If Efficio is to be true middleware, then should it not be format agnostic?

Things happening. Bye for now.

The task is to listen to the data coming in from devices, process the data, and then pass on the commands to the app's API.

Images from RealSense become animations in Maya; hand gestures from Leap Motion become game commands in Unity; Kinect movements become walkthroughs in Autodesk 360.

You start with one connection type and keep adding more.
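The listen-process-dispatch loop described above could be sketched as a tiny event bus. All of the names here are hypothetical - this is just to show the shape of a design where connection types can keep being added without touching the apps:

```cpp
#include <functional>
#include <string>
#include <vector>

// Hypothetical normalised event: which device produced it and what
// action was recognised (e.g. "leap" / "pinch").
struct InputEvent {
    std::string device;
    std::string action;
};

// Minimal middleware: apps subscribe handlers, device adapters push
// events, and the bus fans each event out to every subscriber.
class Middleware {
public:
    void Subscribe(std::function<void(const InputEvent&)> handler) {
        handlers_.push_back(std::move(handler));
    }
    void OnDeviceEvent(const InputEvent& e) {
        for (auto& h : handlers_) h(e);  // dispatch to all app handlers
    }
private:
    std::vector<std::function<void(const InputEvent&)>> handlers_;
};
```

Adding a new connection type (a Kinect adapter, a Unity bridge) then means writing one more producer or one more subscriber, with the event format as the only shared contract.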

So regarding character animation, let's take the example of Leap and Unity: look at what the devs are doing and build on that. Then try and see if you can do the same with a Kinect.

Look at Collada, the 3D file format: it tried to cover all bases, and it has become way too complicated.

What I could see doing is assembling a bunch of raw data from different devices and working out good ways of listening to it really effectively.

And on the skeletal/API side, I would look at what most peeps are doing with Unity and go with the flow.

Also, I would investigate Japanese animators - sqoosha comes to mind - and see what they are using for animated porn or whatever.