arianaa30 opened this issue 8 years ago
Are you interested in obtaining the list of currently visible objects, everything the user can see? Or do you want to further constrain it to a smaller gaze box, a subset of what is visible?
I want to experiment with both! I understand we have a 96-degree field of view on GearVR, so the box could be around 70 degrees or so.
Any thoughts on this? It is a bit urgent for me to achieve this.
I can add something to GVRPicker to pick against the camera's view frustum. It will return the list of currently visible objects. Will that work?
Sure. It would be best to make it a configurable box (given a height and width in degrees?) centered on the gaze, and pick whatever is within or intersects it. If that is too much work, just visibility works.
Pull request #765 adds GVRPicker.pickVisible which will return the list of visible objects. Please test this pull request and see if it does what you want. If so we can do further refinement.
Thanks. It's just that I'm not very familiar with pull requests. I went to your repository and downloaded the whole 'pickvisible' branch. Then I replaced the common files in my current Framework with the downloaded ones (but didn't remove the existing ones), but when I try to compile it, I get errors! What should I do? Are there any plans to update the master branch of GearVRf so I can download the whole thing from scratch?
OK, I replaced only the changed GVRPicker files and fixed an error due to my API being a bit older, and things compiled OK.
I was using pickVisible() in a similar way as pickObjects(), but it is not working: in my example, the non-visible objects get selected! The selection also lags a lot, and blinks. Is it fully tested to make sure it works, or maybe my test is problematic?
I do this in onStep(); my objects turn into objectsRed if they are NOT visible.
```java
for (GVRPickedObject pickedObject : GVRPicker.pickVisible(mGVRContext.getMainScene())) {
    for (int i = 0; i < objectsRed.length; i++) {
        if (pickedObject.getHitObject().equals(objectsRed[i])) {
            if (!objects[i].isEnabled()) {
                objectsRed[i].setEnable(false);
                objects[i].setEnable(true);
                break;
            }
        } else if (!objectsRed[i].isEnabled()) {
            objects[i].setEnable(false);
            objectsRed[i].setEnable(true);
            break;
        }
    }
}
```
Here is how to get the pull request into your own local repository:

```
git checkout -b pr768 master
git pull https://github.com/NolaDonato/GearVRf.git x3dverts
```
The first line makes a new branch based on the Framework master. The second line pulls over my changes into that branch. I am not sure if just copying the files will work.
How many objects do you have? I will try and make a test like yours and see if mine lags too.
I don't have many objects, just 4 objects and 4 objectsRed. They are video objects in my real test. Can you please test with my onStep() code to see if it works?
I have a simple solution for your problem.
Since you already know all the objects in your scene, all you need to do is to create a cube scene object and attach it to the camera rig. You then use the isColliding() call to detect a collision. I have modified the simple sample, to show you how it's done:
https://github.com/rahul27/GearVRf-Demos/commit/6aa4e1bdc26f3386c3c89a8c1c91c13b70a7c0a5
isColliding() is relatively inexpensive since it is a simple box-box test.
You will need to use the setPosition call appropriately to make sure the cube is placed at a depth that ensures collision. For example, the quad in the sample is at a depth of 3.0f, so we use:
```java
cubeSceneObject.getTransform().setPosition(0.0f, 0.0f, -3.0f);
```
This will not clip to a frustum. It will report false positives where the back corners of the cube are out of view.
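As context for why isColliding() is cheap: an axis-aligned box-box test is just three interval comparisons, one per axis. A minimal self-contained sketch (illustrative names only, not GearVRf code):

```java
// Minimal sketch of an axis-aligned box-box overlap test, the kind of cheap
// check an isColliding()-style call can rely on. All names here are
// illustrative; this is not GearVRf code.
public class AabbOverlap {
    // Each box is given as {minX, minY, minZ} and {maxX, maxY, maxZ}.
    static boolean overlaps(float[] minA, float[] maxA,
                            float[] minB, float[] maxB) {
        for (int axis = 0; axis < 3; axis++) {
            // Boxes are disjoint as soon as one axis interval fails to overlap.
            if (maxA[axis] < minB[axis] || maxB[axis] < minA[axis]) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // A unit cube at the origin vs. one centered at z = -3: no overlap.
        System.out.println(overlaps(
                new float[]{-0.5f, -0.5f, -0.5f}, new float[]{0.5f, 0.5f, 0.5f},
                new float[]{-0.5f, -0.5f, -3.5f}, new float[]{0.5f, 0.5f, -2.5f}));
        // Stretch the first cube out to depth -3 (like the camera-rig cube): overlap.
        System.out.println(overlaps(
                new float[]{-0.5f, -0.5f, -3.0f}, new float[]{0.5f, 0.5f, 0.5f},
                new float[]{-0.5f, -0.5f, -3.5f}, new float[]{0.5f, 0.5f, -2.5f}));
    }
}
```

Note the test exits on the first separating axis, which is why it stays inexpensive even when run every frame.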
So does Rahul's approach work?! I didn't understand why using cubeSceneObject helps. Anyway, if it fully works without false positives, let's test it. Let me know.
For an approximation, you can test only bounding boxes for visibility (that is, whether a box's projection in the camera frame falls within the specified gaze box), similar to culling. If you care about pixel-level accuracy, you may need to test all the triangles in the mesh (which takes time if not optimized).
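To make the bounding-volume approximation concrete, here is a minimal sketch of an angular gaze-box test in camera space. It assumes a camera at the origin looking down the -Z axis; the names and the per-axis angle comparison are illustrative, not GearVRf API:

```java
// Hedged sketch: approximate "gaze box" test in camera space, assuming the
// camera sits at the origin looking down -Z. A point passes when the angle
// between its direction and the view axis stays within the box's half-angles,
// checked independently per axis (like a frustum). Illustrative only.
public class GazeBoxTest {
    // fovXDeg/fovYDeg are the full horizontal/vertical extents in degrees.
    static boolean inGazeBox(float x, float y, float z,
                             float fovXDeg, float fovYDeg) {
        if (z >= 0) return false;                  // behind the camera
        double halfX = Math.toRadians(fovXDeg / 2.0);
        double halfY = Math.toRadians(fovYDeg / 2.0);
        return Math.abs(Math.atan2(x, -z)) <= halfX
            && Math.abs(Math.atan2(y, -z)) <= halfY;
    }

    public static void main(String[] args) {
        // Object straight ahead at depth 3 is inside a 70-degree box.
        System.out.println(inGazeBox(0f, 0f, -3f, 70f, 70f));
        // Object far off to the side is outside.
        System.out.println(inGazeBox(10f, 0f, -1f, 70f, 70f));
    }
}
```

In practice you would run this against each corner (or the center plus radius) of an object's bounding box rather than a single point.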
@arianaa30 from the sounds of it your requirements seem simple. It works as long as you don't have a pixel level requirement like @danke-sra mentioned.
I updated the pull request to generate pick events for frustum picking. I added a new class, GVRFrustumPicker, which will return the list of colliders that fall within the camera view frustum. It works just like GVRPicker, generating onEnter, onExit and onInside events. If you look at the gvr-eyepicking sample and replace GVRPicker with GVRFrustumPicker, you should get events for objects as they move in and out of the view frustum. I am working on making this more general to allow picking from any viewpoint, but that is not working yet, so don't call GVRFrustumPicker.setFrustum; it is documented but not functional.
OK. So basically, instead of GVRPicker.pickVisible(mGVRContext.getMainScene()), I just have to use another class called GVRFrustumPicker, right? Does it have pickObjects() inside?
One thing: currently I have 4 objects, but I might have 6, 10, and so on, so I just wanted to make sure cubeSceneObject works independently of the number of objects.
Fortunately I was able to compile the Framework again by replacing the modified files, which is good. Shall I still use the pickObjects function as before, or pickVisible?
If you are using the GVRFrustumPicker then pickVisible should work.
No, Rahul suggested I change it to work with events and I did. You will need to reapply the pull request because I have updated it with new code. Then you can use the picking events like the gvr-eyepicking sample but use GVRFrustumPicker instead of GVRPicker.
From: arianaa30 [mailto:notifications@github.com] Sent: Thursday, August 4, 2016 11:47 AM To: Samsung/GearVRf GearVRf@noreply.github.com Cc: Nola Donato nola.donato@samsung.com; Comment comment@noreply.github.com Subject: Re: [Samsung/GearVRf] View tracking not a as single ray (#762)
It doesn't work for me. I'm guessing probably something is wrong with the logic of my code. I have video objects here.
```java
onStep() {
    for (GVRPickedObject pickedObject : GVRFrustumPicker.pickVisible(scene)) {
        for (int i = 0; i < videolow.length; i++) {
            if (pickedObject.getHitObject().equals(videolow[i])) {
                if (!video[i].isEnabled()) {
                    videolow[i].setEnable(false);
                    video[i].setEnable(true);
                    break;
                }
            } else if (!videolow[i].isEnabled()) {
                video[i].setEnable(false);
                videolow[i].setEnable(true);
                break;
            }
        }
    }
}
```
Yes, my code has problems. While I fix that, are there any solutions for picking whatever intersects a front box of X degrees (say 60 degrees), as opposed to the whole visible screen?
@arianaa30 are you still having problems? I just tested this pull request https://github.com/Samsung/GearVRf/pull/765 and it worked fine for me.
You could try changing the following line from GVRPicker to GVRFrustumPicker to test:
Regarding your code, it would be difficult to understand what could be going wrong by just looking at your snippet. I would add logs to onStep() to check if the correct objects are picked:
```java
Log.d(TAG, "Picked Object " + pickedObject.getName());
```
Also, I am assuming that you have the break statements in your code for a reason.
I thought maybe my snippet was slow, so I changed it to use HashMaps instead, but nothing changed.
So looking at the link, is it a new way of doing things?! Previously I was attaching eye picking to objects and had a method (taken from your eyePicking example):
attachDefaultEyePointee(object);

and

```java
private void attachDefaultEyePointee(GVRSceneObject sceneObject) {
    sceneObject.attachEyePointeeHolder();
}
```
Now for this one I should just attach it to the scene using these two lines?!
```java
mainScene.getEventReceiver().addListener(mPickHandler);
mainScene.getMainCameraRig().getOwnerObject().attachComponent(new GVRPicker(gvrContext, mainScene));
```
Yup. That is the new event-based picking model in the framework. You should look at the sample to see how it works.
Also @NolaDonato has been kind enough to make a video that describes it:
You could also look at the developer guide:
The attachEyePointeeHolder function is a special case of attachCollider, which attaches a component to your scene object that makes it pickable. You have to attach a collider to every scene object you are going to pick. Then you should attach the listener for the pick events and a GVRFrustumPicker to generate the events.
Do this for all the scene objects you want to be pickable:
```java
private void attachDefaultEyePointee(GVRSceneObject sceneObject) {
    sceneObject.attachEyePointeeHolder();
}
```
Do this in your onInit to make the picker generate events and to handle them:
```java
mainScene.getEventReceiver().addListener(mPickHandler);
mainScene.getMainCameraRig().getOwnerObject().attachComponent(new GVRFrustumPicker(gvrContext, mainScene));
```
Did you get the frustum picker to work for you? Do you need me to send you a full sample?
I didn't have a chance yet, so busy yesterday. I will try to test it out today. Thanks a lot for your follow-ups!
I updated pull request #765 to let you specify the view frustum to use for picking in GVRFrustumPicker. You can call GVRFrustumPicker.setFrustum(float fovy, float aspect, float near, float far) to set the viewing area to be different from the camera. Let me know if that works for you. I tested it by changing the following lines in the gvr-eyepicking sample:

```java
// remove this line in onInit
// mainScene.getMainCameraRig().getOwnerObject().attachComponent(new GVRPicker(gvrContext, mainScene));

// add these lines in onInit
float znear = mainCamera.getNearClippingDistance();
float zfar = mainCamera.getFarClippingDistance();
GVRSceneObject headTransform = mainCamera.getHeadTransform().getOwnerObject();
GVRFrustumPicker picker = new GVRFrustumPicker(gvrContext, mainScene);
picker.setFrustum(40.0f, 1.0f, znear, zfar);
headTransform.attachComponent(picker);
```
Thanks. I will test it soon. I didn't have a chance to make pickVisible() work the other time. I'm sure the API works, but I'm not able to get my desired output.
Right now I'm most confused by the many approaches to picking. So far I was using the older eyePicking example, as my snippet above shows, doing things in onStep(). But your examples on the website are based on a new class implementing IPickEvents, using onEnter(), onExit() and others. This looks very promising, but I couldn't get it to work.
-What are the differences between onPick and onEnter?
-And now that we have GVRFrustumPicker, shall we replace all the GVRPickers with the new frustum class, or leave the GVRPickers as they are? If we change the arguments, it asks us to implement unimplemented methods.
So I'm a bit confused by these inconsistencies.
```java
public class PickHandler implements IPickEvents
{
    public void onEnter(GVRSceneObject sceneObj, GVRPicker.GVRPickedObject pickInfo)
    {
        if (sceneObj.equals(videolow[2])) {
            video[2].setEnable(true);
            videolow[2].setEnable(false);
        }
    }
    public void onExit(GVRSceneObject sceneObj)
    {
        if (sceneObj.equals(videolow[2])) {
            video[2].setEnable(false);
            videolow[2].setEnable(true);
        }
    }
    public void onNoPick(GVRPicker picker) { }
    public void onPick(GVRPicker picker) { }
    public void onInside(GVRSceneObject sceneObj, GVRPicker.GVRPickedObject pickInfo) { }
}
```
onPick is called once every frame and gives you a list of ALL the objects picked sorted by distance from the camera. onEnter is called multiple times per frame, once for each object that is entering the frustum. onExit is called once for every object leaving the frustum. onInside will be continuously called for all objects that are within the frustum.
GVRPicker picks using a ray, GVRFrustumPicker picks using a view frustum.
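The enter/exit/inside semantics described above can be reproduced with two set differences per frame: new objects in the picked set fire enter, objects that dropped out fire exit, and the rest are inside. A hedged sketch in plain Java (illustrative only, not the framework's implementation):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

// Illustrative sketch of the event model described above, not GearVRf code:
// given the set of objects picked this frame, enter/exit/inside events fall
// out of comparing it against the previous frame's set.
public class PickEvents {
    private Set<String> inside = new HashSet<>();

    void onFrame(Set<String> pickedNow) {
        for (String obj : pickedNow) {
            if (inside.contains(obj)) System.out.println("onInside " + obj);
            else System.out.println("onEnter " + obj);
        }
        for (String obj : inside) {
            if (!pickedNow.contains(obj)) System.out.println("onExit " + obj);
        }
        inside = new HashSet<>(pickedNow);
    }

    public static void main(String[] args) {
        PickEvents events = new PickEvents();
        Set<String> frame = new TreeSet<>();
        frame.add("cube");
        events.onFrame(frame);            // object appears   -> onEnter cube
        events.onFrame(frame);            // still picked     -> onInside cube
        events.onFrame(new TreeSet<>());  // no longer picked -> onExit cube
    }
}
```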
@arianaa30 You should try the latest framework. Please let us know the status of this issue from your point of view.
I have made a fix in GVRFrustumCuller in pull request #824. You will need to bring over this pull request to use GVRFrustumCuller because I accidentally swapped the field of view and aspect ratio arguments internally, and it won't work right without the fix.
Did you try GVRFrustumCuller? Does it work for you?
I didn't make any more progress since then, so I didn't have a chance to make it work. You can close the case.
~~Hi,
I kind of have the same issue as @arianaa30 had 6 months ago. I'd like to get the view frustum representing what the user actually views at each frame (the look-at vector is not enough; I want the actual view frustum).
I tried different things, like getting the headTransform matrix or the transform object from the camera rig, but those are not updated as the user moves (perhaps they shouldn't be, because they are expressed in camera coordinates, I don't know).
In the end, I didn't manage to get the actual view frustum.
I tried the GVRFrustumPicker you've implemented inside the eyepicking example, hoping that if it works I could dive into the code to find leads on how to get the frustum.
However, adding logs to the pick and onExit events, I observe that all the objects in the scene are always picked at the same time (no matter how many I actually have in sight), and no onExit events seem to be fired (none of the logs I put in the onExit method are printed), although when looking in the opposite direction of any scene object, I get no pick event.
The FrustumPicker seems to work pretty well (after I adjusted my testing code...), but I don't understand exactly how it can cull the objects correctly. The frustum matrix, which is obtained from the camera rig, might be a unit matrix (that's what I get when getting the headTransform matrix from the mainCameraRig of the main scene in my app, at least).
I'm using version 3.1.1 of the framework.
Do you have any clue to help me move forward?
Thanks!~~
Hi,
Sorry, the experiments with the FrustumPicker were partial, and investigating a bit more I just realized that I had missed some crucial points. I'll continue my search from there and hopefully find the view frustum.
Sorry for the inconvenience.
The view frustum is expressed in camera coordinates. The camera is located at 0,0,0 and looks down the -Z axis. The frustum picker tries to pick ALL the objects that are within the view frustum - not surprising it picked everything :-) The view frustum is expressed as six planes. You can get access to the field of view, near and far clipping planes and aspect ratio from the GVRPerspectiveCamera:
```java
GVRCameraRig rig = scene.getMainCameraRig();
GVRPerspectiveCamera camera = (GVRPerspectiveCamera) rig.getCenterCamera();
float fovy = camera.getFovY();
float aspect = camera.getAspectRatio();
float near = camera.getNearClippingDistance();
float far = camera.getFarClippingDistance();
```
From there you can use JOML to get access to the clipping planes of the perspective view frustum.
```java
Matrix4f perspMtx = new Matrix4f();
perspMtx.perspective(fovy, aspect, near, far, perspMtx);

Vector4f negXPlane = new Vector4f(); // plane X = -1 when mtx is identity
Vector4f posXPlane = new Vector4f(); // plane X = 1 when mtx is identity
Vector4f negYPlane = new Vector4f(); // plane Y = -1 when mtx is identity
Vector4f posYPlane = new Vector4f(); // plane Y = 1 when mtx is identity
Vector4f negZPlane = new Vector4f(); // plane Z = -1 when mtx is identity
Vector4f posZPlane = new Vector4f(); // plane Z = 1 when mtx is identity

perspMtx.frustumPlane(Matrix4f.PLANE_NX, negXPlane);
perspMtx.frustumPlane(Matrix4f.PLANE_PX, posXPlane);
perspMtx.frustumPlane(Matrix4f.PLANE_NY, negYPlane);
perspMtx.frustumPlane(Matrix4f.PLANE_PY, posYPlane);
perspMtx.frustumPlane(Matrix4f.PLANE_NZ, negZPlane);
perspMtx.frustumPlane(Matrix4f.PLANE_PZ, posZPlane);
```
Is this what you wanted?
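For anyone wanting to see what a frustumPlane-style extraction computes without pulling in JOML: the six clip planes of a projection matrix fall out of sums and differences of its rows (the Gribb/Hartmann method). A self-contained sketch in plain Java, illustrative only; the matrix layout assumes the standard OpenGL perspective projection acting on column vectors:

```java
// Illustrative, dependency-free sketch of frustum-plane extraction from a
// perspective projection matrix (Gribb/Hartmann method). Not GearVRf or JOML code.
public class FrustumPlanes {
    // 4x4 perspective matrix, row-major, acting on column vectors (OpenGL style).
    static double[][] perspective(double fovyDeg, double aspect,
                                  double near, double far) {
        double f = 1.0 / Math.tan(Math.toRadians(fovyDeg) / 2.0);
        return new double[][] {
            { f / aspect, 0, 0, 0 },
            { 0, f, 0, 0 },
            { 0, 0, (far + near) / (near - far), 2 * far * near / (near - far) },
            { 0, 0, -1, 0 }
        };
    }

    // Plane k as (a, b, c, d); a point is inside when a*x + b*y + c*z + d >= 0.
    // Order: left, right, bottom, top, near, far.
    // Planes 0,1 use matrix row 0; planes 2,3 use row 1; planes 4,5 use row 2,
    // each added to (even k) or subtracted from (odd k) row 3.
    static double[] plane(double[][] m, int k) {
        int row = k / 2;
        int sign = (k % 2 == 0) ? 1 : -1;
        double[] p = new double[4];
        for (int i = 0; i < 4; i++) p[i] = m[3][i] + sign * m[row][i];
        return p;
    }

    static boolean inside(double[][] m, double x, double y, double z) {
        for (int k = 0; k < 6; k++) {
            double[] p = plane(m, k);
            if (p[0] * x + p[1] * y + p[2] * z + p[3] < 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        double[][] m = perspective(90, 1, 0.1, 100);
        System.out.println(inside(m, 0, 0, -1));  // straight ahead: inside
        System.out.println(inside(m, 0, 0, 1));   // behind the camera: outside
    }
}
```

The planes come out in camera coordinates, which matches the explanation above: to test world-space objects you would first transform them into camera space (or transform the planes the other way).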
This is perfect! I couldn't have asked for a more detailed answer.
Thank you very much for the explanation and code samples!
Putting it simply, is there any way to define the gaze as a box, say a rectangle that might intersect with multiple objects? Basically I want to detect whatever objects are currently within the user's view and that the user can see, not necessarily a single point at the end of a ray. With a single point I can't really achieve that.