This is something @incanus and I talked briefly about before I started, and it's really exciting to me.
Speaking exclusively for Android here. For me the doubt is about using WebGL without webviews. As far as I know, the only way of displaying WebGL on an Android device at the moment is through Chrome for Android beta, and only on bleeding-edge devices. The WebView class definitely doesn't support WebGL right now (WebView doesn't even allow JS by default). There might be more, but the only project I know of that converts WebGL to Android apps is CocoonJS.
Maybe let's have a chat at some point next week about options and how it would fit in the mobile pipeline?
Hm, @fdansv, isn't @ansis's direction in this ticket to use OpenGL-ES, not WebGL? So, no webview, no webgl, just opengl, and to run the JS exclusively for logic, which seems possible with direct V8 bindings?
@tmcw yes, you're right. I misunderstood one line above (this is what happens when I read posts on a weekend at 2am)
> which seems possible with direct V8 bindings?
It does seem possible, but I don't know much about bindings from Java to V8. The few uses of V8 I've seen on Android go through the Native Development Kit (NDK) to run C/C++ modules in an app; from there, V8 could be embedded to run llmr.
Literature: http://www.ensufire.com/2013/03/vatedroid-embedded-v8-in-android.html
Hey @ansis, thanks for reaching out on this! I have been following the work in this repo at a very cursory level and awaiting the time that you guys felt it was ready to start talking mobile integration. Glad to see that we're getting there. The work here has been most impressive.
To orient you to the other side: I have been working on learning OpenGL ES as well as replacing our current tile renderer (in the tile layout sense of the word) on iOS with one that is 3D-capable and more performant. This is important both for moving seamlessly over to a hardware-accelerated backend and for continuing layout of satellite tiles down the road. I have also been exploring buildings (as basic 3D objects) with the eventual goal of terrain meshes, as well as hoping to better understand WhirlyGlobe, upon which we've built MapBox Earth.
I am doing this by way of Apple's GLKit, which is a sweet API that takes away a lot of the grunt work with OpenGL. For example, async loading of textures with an arbitrary dispatch queue callback is one line of code. There is a solid setup for animation timing, and it's easy to apply base lighting effects and integrate with first-class Cocoa objects (the framework is itself Cocoa). GLKit was introduced in iOS 5, so it's everywhere we care about on iOS (our SDK supports back to iOS 5 as well). It has been the recommended way to do OpenGL ES on iOS since 2011 and pushes a lot of the busywork off to Apple, while still allowing direct OpenGL ES calls as you feel the need.
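Roughly, from memory, the async texture load looks like this (treat it as a sketch; `tilePath` and `eaglContext` are placeholders for our own objects):

```objc
#import <GLKit/GLKit.h>

// Loader shares GL resources with our existing EAGLContext.
GLKTextureLoader *loader =
    [[GLKTextureLoader alloc] initWithSharegroup:eaglContext.sharegroup];

[loader textureWithContentsOfFile:tilePath   // placeholder path to a raster tile image
                          options:nil
                            queue:nil        // nil = completion handler runs on the main queue
                completionHandler:^(GLKTextureInfo *textureInfo, NSError *error) {
    // textureInfo.name is the GL texture id, ready to bind on the GL thread.
    if (error) NSLog(@"texture load failed: %@", error);
}];
```

The `queue:` argument can also be pointed at an arbitrary dispatch queue, which is the bit I find most convenient.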
My hope has been to integrate well with the work here and leapfrog worrying about Mapnik on iOS for raster rendering. What I had not yet considered was the use of JavaScript -- I was imagining a port to straight OpenGL ES and/or GLKit in order to write the least code on iOS.
The JS idea is intriguing. The true bridge has only come about in iOS 7, which isn't a blocker (90%+ of devices will likely be there in the next year, if iOS 6 adoption is any indication), but is worth considering both in terms of reach and maturity. My gut instinct says this may be messy and/or complicated, but I'm happy to be proven wrong because the shared maintainability could be incredible. So I'm definitely interested in exploring this.
I will start to look seriously into the JS side of things. Expect a general OpenGL devlog from me soon, too, as my work there matures.
From an Android point of view, @fdansv, this is important to look into too, but regardless we still need a raster renderer (for satellite if nothing else), so I don't think it changes the near-term outlook of what you're working on. However, even in the worst case, if JS could be used on iOS and not Android, that's still an efficiency win -- compare to the topcube library that we use for web views in TileMill Windows/Linux. It doesn't work on OS X, but that's ok!
Rock! Thanks again @ansis.
The whole javascript idea is sort of crazy, thanks for looking into it.
> I have also been exploring buildings (as basic 3D objects) with the eventual goal of terrain meshes
Crazy!
> My gut instinct says this may be messy and/or complicated
Yeah, definitely could be.
Another option could be to port to C++ and share that between Android and iOS, right? You wouldn't get to use the awesome stuff in GLKit though.
> but regardless we still need a raster renderer
The WebGL work includes a raster renderer for satellite stuff as well, so eventually this could be considered together with the vector stuff.
A couple of quick, basic iOS links: http://blog.bignerdranch.com/3784-javascriptcore-and-ios-7/ and http://asciiwwdc.com/2013/sessions/615 (video at https://developer.apple.com/wwdc/videos/?id=615).
Calling JavaScript functions from Objective-C seems simple and relatively nice, and vice versa.
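For example, the round trip looks roughly like this with the iOS 7 API (untested sketch, names made up):

```objc
#import <JavaScriptCore/JavaScriptCore.h>

JSContext *context = [[JSContext alloc] init];

// Expose a native block to JavaScript.
context[@"nativeLog"] = ^(NSString *message) {
    NSLog(@"js says: %@", message);
};

// Run some JavaScript, then call one of its functions from Objective-C.
[context evaluateScript:@"function add(a, b) { nativeLog('adding'); return a + b; }"];
JSValue *sum = [context[@"add"] callWithArguments:@[@2, @3]];
NSLog(@"%d", [sum toInt32]); // 5
```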
The big question I have is whether it's possible to transfer binary data to a JavaScript ArrayBuffer without copying.
> The true bridge has only come about in iOS 7, which isn't a blocker (90%+ of devices will likely be there in the next year, if iOS 6 adoption is any indication), but is worth considering both in terms of reach and maturity.
Pretty impressive adoption rate. It could be possible to compile and bundle your own version of JavaScriptCore for pre-iOS 7, though this is starting to get more hackish. At least one person has done it and published example code, which I haven't looked at yet.
It's definitely not clear what the best approach is here. We could port the entire renderer to C/C++, use the same code to power the Android and iOS backends, and compile it to JavaScript with emscripten for the web. On Android and iOS we could also include the "fontserver" logic to generate the signed distance field glyph images on the fly, to avoid downloading them with every tile. As for compiling with emscripten: I'm a little worried about the whole setup because it'll be pretty hard to debug the JavaScript code if something goes wrong. Additionally, the web worker setup is very peculiar, and it remains to be seen whether we can replicate something like it when compiling to JS and still maintain the performance.
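To make the shared-core idea a bit more concrete, a very rough Objective-C++ sketch of how the iOS frontend could wrap a C++ renderer core (every class and method name here is made up for illustration; the same core would be compiled natively for Android via the NDK and to JS via emscripten):

```objc
// Hypothetical shared C++ core, compiled for every platform.
class RendererCore {
public:
    void setViewport(int width, int height) { width_ = width; height_ = height; }
    void renderFrame() { /* issue plain OpenGL ES calls using width_/height_ */ }
private:
    int width_ = 0;
    int height_ = 0;
};

// Thin Objective-C++ wrapper (.mm file) used by the iOS frontend.
#import <Foundation/Foundation.h>

@interface MBRenderer : NSObject
- (void)drawFrameWithWidth:(int)width height:(int)height;
@end

@implementation MBRenderer {
    RendererCore _core; // the shared C++ object lives inside the Objective-C object
}
- (void)drawFrameWithWidth:(int)width height:(int)height {
    _core.setViewport(width, height);
    _core.renderFrame();
}
@end
```

The Android frontend would presumably do the same thing through JNI.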
In any case, we'd have to write the interaction code in each native platform's environment (i.e. touch and other user events and how they influence the rendering).
> The big question I have is whether it's possible to transfer binary data to a JavaScript ArrayBuffer without copying.
Assuming you're referencing http://updates.html5rocks.com/2011/12/Transferable-Objects-Lightning-Fast @ansis?
> Assuming you're referencing http://updates.html5rocks.com/2011/12/Transferable-Objects-Lightning-Fast
Yeah, but between the Objective-C and JavaScript worlds.
Hey @incanus, @fdansv
The recent mobile work is very exciting! I think we should start talking about how WebGL, iOS and Android will all fit together.
I see the dream as having gl renderers on all three platforms, supporting the same features with the same api. From the outside this should look like one renderer available on multiple platforms. Luckily for us, graphics on all these platforms is just OpenGL-ES. This doesn't mean it's going to be easy, but at least we don't have to reinvent everything.
Android and iOS are now both letting their javascript interpreters be used directly (without webviews). It should be possible to render gl graphics from javascript, right? Could we use this to avoid porting the entire renderer, and just provide native bindings? In terms of performance, the bottleneck in the current webgl renderer is the actual rendering, which wouldn't be sped up by rewriting things natively. If this is viable, it would save a lot of porting and maintenance work. Or it could create more problems than it solves. I'm very interested to hear your thoughts on this.
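To be clear about what I mean by native bindings: something like this on iOS, where GL entry points are exposed to the JS context one by one (a totally untested sketch; assumes a current EAGLContext on this thread and the iOS 7 JavaScriptCore API):

```objc
#import <JavaScriptCore/JavaScriptCore.h>
#import <OpenGLES/ES2/gl.h>

JSContext *context = [[JSContext alloc] init];

// Each binding is just a block that forwards to the real GL call.
context[@"glClearColor"] = ^(float r, float g, float b, float a) {
    glClearColor(r, g, b, a);
};
context[@"glClear"] = ^(unsigned int mask) {
    glClear(mask);
};

// The JS renderer then calls these as if they were (roughly) WebGL globals.
[context evaluateScript:@"glClearColor(0.0, 0.0, 0.0, 1.0); glClear(0x4000); // GL_COLOR_BUFFER_BIT"];
```

The real API surface is obviously much bigger than this (buffers, shaders, textures, etc.), and getting vertex data across efficiently is the part I'm least sure about.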
I think gl rendering is close enough that we should focus on getting it done and moving beyond image tiles. So, I think the first step is figuring out whether we could re-use the javascript renderer, or whether it needs to be ported. Once we know that, we should have a better understanding of what it would take to put out vector rendering on mobile devices.
cc @kkaefer