saniul opened 9 years ago
Ugh, on-device performance is horrendous :( Inverting every pixel in a 400x400 image using

```js
function(pixel) {
    pixel.red = 1.0 - pixel.red
    pixel.green = 1.0 - pixel.green
    pixel.blue = 1.0 - pixel.blue
    return pixel
}
```

takes ~14 seconds on an iPad Air.
Even just mapping over all pixels without modifying them takes 6-7 seconds :| Most time is spent in `-[JSValue callWithArguments:]`.
Added a sample scene to Examples/
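For context, the slow path boils down to something like this on the Swift side (a rough sketch with made-up names, not the actual Prototope code):

```swift
import JavaScriptCore

// Hypothetical pixel type, just for illustration.
struct Pixel { var red: Double; var green: Double; var blue: Double }

// One -[JSValue callWithArguments:] round trip per pixel — 160000 times
// for a 400x400 image — which is where most of the time goes per the profile.
func transform(_ pixels: [Pixel], with jsFunction: JSValue) -> [Pixel] {
    return pixels.map { p in
        // Box the pixel into a JS object, call into JS, unbox the result.
        let arg: [String: Double] = ["red": p.red, "green": p.green, "blue": p.blue]
        let result = jsFunction.call(withArguments: [arg])!
        return Pixel(red: result.forProperty("red").toDouble(),
                     green: result.forProperty("green").toDouble(),
                     blue: result.forProperty("blue").toDouble())
    }
}
```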
I'm not super surprised, but that's a bummer. Two ideas, depending on exactly what's slow here:
1. If it's creating the `Pixel` objects that's slow, you could try making these functions consume `function(x, y, r, g, b)` instead.
2. In the also-likely event that millions of context switches is also intractable, you could instead expose an API to access a JavaScript array of rgb data (a bit like Processing's `pixels[]`).

Yup. I think 2. is going to be the right decision in the end, but I'll try 1. first.
(Btw, if you're going with 2, on React Native we saw that serializing to JSON and transferring only one string over the bridge was faster than building up the arrays/objects manually using the JSC API.)
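Something in this shape, perhaps (a sketch; the flat rgb array and names are made up):

```swift
import Foundation
import JavaScriptCore

// Serialize the whole bitmap to one JSON string and cross the bridge once,
// instead of building a JS array element-by-element through the JSC API.
func pushPixels(_ rgb: [Int], into context: JSContext) {
    let data = try! JSONSerialization.data(withJSONObject: rgb)
    let json = String(data: data, encoding: .utf8)!
    // One bridge crossing; JSON.parse runs entirely inside JS.
    context.evaluateScript("var pixels = JSON.parse('\(json)');")
}
```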
@spicyj thanks for the tip!
Interestingly, there's actually a standard `ImageData` API. That's what `<canvas>` elements can give you; it's also what Paper.js exposes.
CIFilter might come in handy. It covers quite a few image processing tasks. Here's a link to the list of filters: https://developer.apple.com/library/ios/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html#//apple_ref/doc/uid/TP40004346
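For example, the invert above maps directly onto a built-in filter, roughly (a sketch):

```swift
import CoreImage
import UIKit

// Invert an image on the GPU with Core Image instead of per-pixel JS.
func inverted(_ image: UIImage) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIColorInvert") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    guard let output = filter.outputImage,
          let cgImage = CIContext().createCGImage(output, from: output.extent)
    else { return nil }
    return UIImage(cgImage: cgImage)
}
```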
Yep, good point. Can only filter things that are already raster of course, and CI isn't suitable for realtime applications, but there are ways to mitigate both those issues if we find we need it.
@andymatuschak I didn't realize that. I did a quick search and Apple has some suggestions on how to make it work for realtime applications. They suggest using an EAGL context so all the bits stay on the GPU. The details are outlined in "Creating a Core Image Context on iOS When You Need Real-Time Performance" on https://developer.apple.com/library/ios/documentation/GraphicsImaging/Conceptual/CoreImaging/ci_tasks/ci_tasks.html.
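Roughly this, going by that doc (a sketch):

```swift
import CoreImage
import OpenGLES

// Back the Core Image context with an EAGL (OpenGL ES) context so pixel
// data stays on the GPU between filters; disabling color management is
// also recommended there for real-time performance.
let eaglContext = EAGLContext(api: .openGLES2)!
let ciContext = CIContext(eaglContext: eaglContext,
                          options: [.workingColorSpace: NSNull()])
```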
There are probably also some situations where you don't necessarily need to apply real-time effects in which case plain old CI would work just fine.
Right, thanks! I was responsible for UIKit's rendering concerns at Apple, so I'm familiar, just not looking forward to digging that ditch again. Ah well.
Seems like there are three use cases here: `filters` vs. `compositingFilters`. We could implement the former using CI (though it'd mean spinning up a GL context for each layer requiring realtime filtering) and the latter using CA SPI (for now anyway). If this becomes an important use case, we can switch over to a more robust rendering backend (e.g. SpriteKit or Cocos2D) that would make it easier to do this kind of filtering regularly. Maybe let's tackle that in a different PR?
:+1:
Sorry for the wait everyone, life/work just keeps getting in the way :disappointed:
Went with this suggestion:

> In the also-likely event that millions of context switches is also intractable, you could instead expose an API to access a JavaScript array of rgb data (a bit like Processing's `pixels[]`)
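Concretely, something in this shape (a sketch with made-up names, not the exact PR code):

```swift
import JavaScriptCore

// Expose the bitmap to JS as one big array of [r, g, b] triples.
// JSValue(object:in:) converts the nested Swift arrays recursively —
// for a 400x400 image that's 160000 small JS arrays plus copies.
func makePixelArray(_ triples: [[Double]], in context: JSContext) -> JSValue {
    return JSValue(object: triples, in: context)
}
```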
Still very slow on the device :( Inverting a 400x400 image of Pusheen takes 11+ seconds on an iPad Air:
- 7s to load the pixel array (most time spent creating the JSValue array from the Cocoa/Swift array)
- 0.3s to run over the pixels and modify them in JS
- most of the remaining 4s is spent bridging the resulting value back to Cocoa/Swift-land
@spicyj I tried constructing a JSON string representation of that but it just kept crashing. I’m not surprised, though, since it’s pretty ridiculous (an array of 160000 arrays of 3 ints).
I will still try 1. but I don’t expect any significant improvements. At this point I’m not sure if there’s any point in keeping this API if it runs so slow on-device. Maybe if it only lived in the Swift layer...? :-1:
@saniul When JSONifying, you could try sending each pixel down as a 32-bit (24-bit?) int and then converting it to the nested arrays using pure JS? Maybe that would be lower overhead. Too bad there's no JSC API for typed arrays.
(0.3s to run over the pixels is small relative to 11s but still pretty large relative to 16ms…)
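Something like this, maybe (a hypothetical sketch):

```swift
import Foundation
import JavaScriptCore

// Pack each pixel into one 24-bit int so the JSON payload is a flat array
// of numbers, then unpack into nested arrays with bit ops in pure JS.
func pushPackedPixels(_ pixels: [(r: Int, g: Int, b: Int)], into context: JSContext) {
    let packed = pixels.map { ($0.r << 16) | ($0.g << 8) | $0.b }
    let json = String(data: try! JSONSerialization.data(withJSONObject: packed),
                      encoding: .utf8)!
    context.evaluateScript(
        "var pixels = \(json).map(function (p) {" +
        "  return [(p >> 16) & 0xff, (p >> 8) & 0xff, p & 0xff];" +
        "});")
}
```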
Rough. Thanks a lot for trying this out, @saniul. No need to apologize for delays! :)
The performance characteristics of #1 should be pretty different: the only cost should be context switching, as opposed to all the copies here. It'll obviously be much worse than the 0.3s time here, but maybe it'll be OK?
I can imagine more aggressive ways for us to attack this (e.g. implementing a new JS "data" type which knows how to be backed by some buffer), but if approach #1 doesn't work we should probably surrender for now. :/
@spicyj sending ints over the “wire” and then breaking them down in the JS later didn’t really help.
@andymatuschak I implemented `colorAt(pos)` and `setColorAt(pos, r, g, b)`. Inverting 400x400 pixels takes ~6.3s on the same device.
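They're exposed to the context roughly like this (a sketch with made-up names, not the exact PR code):

```swift
import JavaScriptCore

// Flat rgb storage; each JS call still crosses the bridge, but no big
// JSValue array has to be built up front.
var data = [Double](repeating: 0, count: 400 * 400 * 3)

let colorAt: @convention(block) (Int) -> [Double] = { pos in
    Array(data[pos * 3 ..< pos * 3 + 3])
}
let setColorAt: @convention(block) (Int, Double, Double, Double) -> Void = { pos, r, g, b in
    data[pos * 3] = r
    data[pos * 3 + 1] = g
    data[pos * 3 + 2] = b
}

let context = JSContext()!
context.setObject(colorAt, forKeyedSubscript: "colorAt" as NSString)
context.setObject(setColorAt, forKeyedSubscript: "setColorAt" as NSString)

// Inverting then costs two bridge crossings per pixel:
context.evaluateScript(
    "for (var i = 0; i < 160000; i++) {" +
    "  var c = colorAt(i);" +
    "  setColorAt(i, 1.0 - c[0], 1.0 - c[1], 1.0 - c[2]);" +
    "}")
```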
Hm. One more idea: you'd have half as many context switches and would move the work of iteration to Swift if you made `transform` take a `function(x, y, r, g, b)` (deliberately `x, y` here instead of a `Point` to avoid tons of allocations).
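i.e., something along these lines (a rough sketch, hypothetical names):

```swift
import JavaScriptCore

// Iterate in Swift, calling the JS function once per pixel with plain
// scalars: no Pixel/Point allocations, and one context switch per pixel
// instead of two (colorAt + setColorAt).
func transform(width: Int, height: Int, rgb: inout [Double], using fn: JSValue) {
    for y in 0..<height {
        for x in 0..<width {
            let i = (y * width + x) * 3
            let args: [Any] = [x, y, rgb[i], rgb[i + 1], rgb[i + 2]]
            let result = fn.call(withArguments: args)!
            rgb[i] = result.atIndex(0).toDouble()
            rgb[i + 1] = result.atIndex(1).toDouble()
            rgb[i + 2] = result.atIndex(2).toDouble()
        }
    }
}
```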
(also: thanks for trying all this stuff @saniul!!!)
oh @andymatuschak `pos` is actually an Int (treating the data as an array). There are overloads with additional row/column parameters for getting/setting.
Ah, gotcha.
Also, no problem! I’m very happy to help.
It’s a bit awkward that `Pixel`s store color info in Int8 [0, 255] instead of Double [0.0, 1.0], but I provided a `.color` getter and setter, which could be enough... Thoughts?

`PixelBitmap` has a `transform` method, which constructs a new `PixelBitmap` by applying the transformation to the original. The additional convenience overloads of `transform` provide information about the index (0..<width*height) or the row/column of the pixel. What do you think? Should this be simpler?
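For reference, the shape of the API as described (a rough sketch, not the PR's actual code):

```swift
struct Color { var red, green, blue: Double }

// Hypothetical sketch of the types described above.
struct Pixel {
    var red, green, blue: UInt8                 // stored in [0, 255]
    var color: Color {                          // Double-based [0.0, 1.0] view
        get {
            return Color(red: Double(red) / 255,
                         green: Double(green) / 255,
                         blue: Double(blue) / 255)
        }
        set {
            red = UInt8(newValue.red * 255)
            green = UInt8(newValue.green * 255)
            blue = UInt8(newValue.blue * 255)
        }
    }
}

struct PixelBitmap {
    let width, height: Int
    var pixels: [Pixel]

    // Builds a new bitmap by applying the transformation to each pixel.
    func transform(_ fn: (Pixel) -> Pixel) -> PixelBitmap {
        return PixelBitmap(width: width, height: height, pixels: pixels.map(fn))
    }

    // Convenience overload that also passes the flat index (0..<width*height).
    func transform(_ fn: (Int, Pixel) -> Pixel) -> PixelBitmap {
        return PixelBitmap(width: width, height: height,
                           pixels: pixels.enumerated().map { fn($0.offset, $0.element) })
    }

    // Convenience overload that passes the row and column instead.
    func transform(_ fn: (Int, Int, Pixel) -> Pixel) -> PixelBitmap {
        return transform { (i: Int, p: Pixel) in fn(i / width, i % width, p) }
    }
}
```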