jon-dez closed this pull request 3 years ago
Thanks for the PR!
I wonder if there is not a simpler and maybe more efficient way of doing that. Instead of iterating over all views when a glfw touch callback is executed, I thought about just storing the xpos and ypos of the latest touch event, and handling it in each view independently in the main loop (in the frame() method). If you think about it, touch events only concern the touched view, others don't care about what's happening there.
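Roughly something like this (just a sketch; only glfwSetCursorPosCallback is actual GLFW API, everything else here is illustrative):

#include <GLFW/glfw3.h>

// The callback only records the latest touch position, nothing else
static double lastTouchX  = 0.0;
static double lastTouchY  = 0.0;
static bool   touchActive = false;

static void touchPositionCallback(GLFWwindow* window, double xpos, double ypos)
{
    lastTouchX  = xpos;
    lastTouchY  = ypos;
    touchActive = true;
}

// during init:
//     glfwSetCursorPosCallback(window, touchPositionCallback);
//
// and each view would then check on its own, inside its frame():
//     if (touchActive && this->isTouchInside(lastTouchX, lastTouchY))
//         this->onTouch(lastTouchX, lastTouchY);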
Hmm... I see what you are saying. We could store the touch event and process it in Application::mainLoop() instead of doing the processing whenever the callback is issued. A revised process would look something like this in the mainLoop:
bool Application::mainLoop()
{
    // stuff before
    ...

    switch (touch_event.type)
    {
        case TOUCH:
            Application::giveFocus(touch_event.x, touch_event.y);
            break;
        case DRAG:
            touch_event.draggable->dragView(touch_event.dx, touch_event.dy);
            break;
        ... // Others perhaps?
    }

    ...
    Application::frame();
    ...
}
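For reference, the touch_event consumed above could just be a small struct that the glfw callback fills in (a sketch; the struct and field names are assumptions, not existing code):

class View; // whatever view type borealis uses here

struct TouchEvent
{
    enum Type { NONE, TOUCH, DRAG };
    Type   type = NONE;
    double x = 0.0, y = 0.0;    // touch position
    double dx = 0.0, dy = 0.0;  // drag delta since the previous event
    View*  draggable = nullptr; // the view being dragged, if any
};

// filled in by the glfw touch callback, consumed once per mainLoop() iteration
static TouchEvent touch_event;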
I'm STRONGLY against doing the touch processing in the frame() method of the views, since that just adds another layer of complexity to frame() and will be a pain to deal with later when we want to extend the capabilities of the touch system. The mechanism that is in place right now is already pretty simple and efficient because it is essentially a querying mechanism: we start at the parent view and ask its children one by one whether the touch lies within their boundaries. If a child replies yes, we then check that child's children. Rinse and repeat. The same mechanism also works for both touch and drag events. It's pretty minimal and efficient already if you ask me :smile:
Also, the whole "touch checking" (or boundary checking) algorithm isn't recursive, which is a big plus. It stays iterative as long as getChildViewAtTouch(...) overrides do not call getChildViewAtTouch(...) themselves, which they shouldn't, because they don't need to. We really do not want to ask every view whether the touch lies within its boundaries, because only one view should get "touched" at a single point in time, which is also why I'd rather keep the touch processing outside of the frame() method.
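In other words, something like this (sketch only; findTouchedView is an illustrative name, and I'm assuming an (x, y) signature for getChildViewAtTouch here):

View* findTouchedView(View* root, float x, float y)
{
    View* current = root;

    // Descend one level at a time: ask the current view which child, if any,
    // contains the touch point. This stays iterative as long as
    // getChildViewAtTouch() overrides don't call getChildViewAtTouch() themselves.
    while (View* child = current->getChildViewAtTouch(x, y))
        current = child;

    return current; // deepest view whose boundaries contain the touch
}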
I made an edit to the pseudocode above in case you had the previous version in mind. It occurred to me that we should query for the draggable when a drag event is FIRST captured by glfw; otherwise the previous code would query for a draggable at the new position every frame, which may resolve to a different view than the one where the drag started.
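Building on the sketches above, the drag handling in the callback would look roughly like this (names other than getChildViewAtTouch are illustrative):

// The draggable is resolved once, when the drag starts, and reused afterwards
static void touchDragCallback(double xpos, double ypos)
{
    if (touch_event.type != TouchEvent::DRAG)
    {
        // first drag event: remember which view is being dragged
        touch_event.type      = TouchEvent::DRAG;
        touch_event.draggable = findTouchedView(rootView, (float)xpos, (float)ypos);
    }

    // later drag events only update the delta; the draggable stays the same
    touch_event.dx = xpos - touch_event.x;
    touch_event.dy = ypos - touch_event.y;
    touch_event.x  = xpos;
    touch_event.y  = ypos;
}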
Hi, sorry for the delay!
I don't accept contributions for master anymore, so I will be closing this PR.
I am rewriting the library starting (almost) from scratch, and #77 already aims to provide touch support for that version of the library. Feel free to chime in and contribute to the discussion and/or code there if you want!
Supporting a mouse could in turn translate to touch support. I don't have a development environment set up to test anything on the Switch right now, but reading touch screen input from the Switch and interpreting it as a mouse should, as far as this pull request is concerned, allow for some touch support in theory. There isn't full touch/mouse support yet, but I made some progress and that is what matters, right?! :smile: