Closed — jordanhart closed this issue 7 years ago
There is no such function, but you can modify printChannelsForPixel() to return an array instead.
However, if you want to do this for all the pixels in the image, it's better to load the MPSImage into an array (with toFloatArray()) and then use Accelerate functions to transpose everything. (Or wait until iOS 11, where MPSImage has a readBytes() function that puts all the channels together.)
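A minimal sketch of the first suggestion, assuming the layout that toFloatArray() uses (slice-by-slice, each slice an H x W x 4 block of texels, with 4 channels packed per slice). The function name and parameters here are hypothetical; adapt them to however printChannelsForPixel() reads the data in your copy of the code.

```swift
// Hypothetical variant of printChannelsForPixel() that returns the
// channel values at pixel (x, y) instead of printing them.
// Assumes `a` came from toFloatArray() with layout [slice][y][x][0...3].
func channelsForPixel(x: Int, y: Int,
                      width: Int, height: Int,
                      channels: Int,
                      from a: [Float]) -> [Float] {
    var result = [Float]()
    result.reserveCapacity(channels)
    for c in 0..<channels {
        let slice = c / 4                       // channels are packed 4 per slice
        let index = slice * height * width * 4  // start of this slice's block
                  + y * width * 4               // row within the slice
                  + x * 4                       // texel within the row
                  + (c % 4)                     // channel within the texel
        result.append(a[index])
    }
    return result
}
```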
Thank you so much for all your help! Really appreciate it!
Would you mind clarifying what the format is after loading the MPSImage with toFloatArray() and transposing it? For now I've ended up using the offset function from the YOLO example.
For context, if it helps: I'm trying to port the convdet part of SqueezeDet to this iOS example, defined as "_add_interpretation_graph" here: https://github.com/BichenWuUCB/squeezeDet/blob/master/src/nn_skeleton.py#L142
toFloatArray() loads the data from the MPSImage slice-by-slice. The source code has a more detailed explanation. On iOS 11 you can use the MPSImage.readBytes() function to get the data in a more sensible format, such as height x width x channels.
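To make the two layouts concrete, here is a sketch (under the same slice-layout assumption as above, and with a hypothetical function name) that rearranges toFloatArray()'s slice-by-slice data into a contiguous height x width x channels array, i.e. the kind of shape readBytes() can hand you directly on iOS 11:

```swift
// Rearranges data from the slice-by-slice layout [slice][y][x][0...3]
// (assumed for toFloatArray()) into height x width x channels order.
func toHWC(_ a: [Float], width: Int, height: Int, channels: Int) -> [Float] {
    var out = [Float](repeating: 0, count: height * width * channels)
    for y in 0..<height {
        for x in 0..<width {
            for c in 0..<channels {
                // Source index in the packed slice layout.
                let src = (c / 4) * height * width * 4
                        + y * width * 4
                        + x * 4
                        + (c % 4)
                // Destination index in plain HWC order.
                out[(y * width + x) * channels + c] = a[src]
            }
        }
    }
    return out
}
```

For large images you would replace the inner loops with Accelerate calls, but the plain loops make the index arithmetic explicit.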
Hi!
Quick question. For the post-processing, is there a version of printChannelsForPixel() that just returns all the channels at a given pixel of an MPSImage instead of printing them? Or, to use the output MPSImage, do I need to write wrapper functions for slicing and indexing?
Thank you so much!