Closed — franva closed this issue 2 years ago
@franva Thank you for your feedback, and sorry for the late reply! I didn't originally intend Matft for image processing, but connecting Matft to UIImage may be helpful (also complex number support, #24).
I found OpenCV's conversion code and referred to it. In the end I was able to convert a UIImage into an MfArray (and back again).
The code is below:
```swift
@IBOutlet weak var originalImageView: UIImageView!
@IBOutlet weak var processedImageView: UIImageView!

func reverse(){
    let size = CFDataGetLength(self.processedImageView.image!.cgImage!.dataProvider!.data)
    let width = Int(self.processedImageView.image!.size.width)
    let height = Int(self.processedImageView.image!.size.height)

    var arr = Matft.nums(Float.zero, shape: [height, width, 4])
    var dst = Array<UInt8>(repeating: UInt8.zero, count: arr.size)

    // UIImage to MfArray
    arr.withDataUnsafeMBPtrT(datatype: Float.self){
        let srcptr = CFDataGetBytePtr(self.processedImageView.image?.cgImage?.dataProvider?.data)!
        // UInt8 to Float
        vDSP_vfltu8(srcptr, vDSP_Stride(1), $0.baseAddress!, vDSP_Stride(1), vDSP_Length(size))
    }

    // reverse
    arr = arr[~<<-1]
    arr = arr.conv_order(mforder: .Row)

    // MfArray to UIImage
    arr.withDataUnsafeMBPtrT(datatype: Float.self){
        srcptr in
        dst.withUnsafeMutableBufferPointer{
            // Float to UInt8
            vDSP_vfixu8(srcptr.baseAddress!, vDSP_Stride(1), $0.baseAddress!, vDSP_Stride(1), vDSP_Length(arr.size))

            let colorSpace = CGColorSpaceCreateDeviceRGB()
            let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue | CGImageByteOrderInfo.orderDefault.rawValue)
            let provider = CGDataProvider(data: CFDataCreate(kCFAllocatorDefault, $0.baseAddress!, arr.size))
            let cgimage = CGImage(width: arr.shape[1], height: arr.shape[0], bitsPerComponent: 8*1, bitsPerPixel: 8*arr.shape[2], bytesPerRow: arr.shape[1]*arr.shape[2], space: colorSpace, bitmapInfo: bitmapInfo, provider: provider!, decode: nil, shouldInterpolate: false, intent: CGColorRenderingIntent.defaultIntent)

            self.processedImageView.image = UIImage(cgImage: cgimage!)
        }
    }
}
```
The output looks like this:
The whole code is here.
Matft doesn't support full fancy indexing yet, so complete image processing may not be possible, but I hope this helps you.
The above code has some problems. One of them is the awkward `arr.withDataUnsafeMBPtrT(datatype: Float.self)` interface.
The second point may be solved by the next protocol version (the README has not been updated yet).
Hi @franva
I've just implemented the conversion functions `Matft.image.cgimage2mfarray` and `Matft.image.mfarray2cgimage`, so you can now process an image as an MfArray!
Examples are below:
```swift
@IBOutlet weak var originalImageView: UIImageView!
@IBOutlet weak var reverseImageView: UIImageView!
@IBOutlet weak var swapImageView: UIImageView!

func reverse(){
    var image = Matft.image.cgimage2mfarray(self.reverseImageView.image!.cgImage!)

    // reverse
    image = image[Matft.reverse] // same as image[~<<-1]

    self.reverseImageView.image = UIImage(cgImage: Matft.image.mfarray2cgimage(image))
}

func swapchannel(){
    var image = Matft.image.cgimage2mfarray(self.swapImageView.image!.cgImage!)

    // swap channel
    image = image[Matft.all, Matft.all, MfArray([1,0,2,3])] // same as image[0~<, 0~<, MfArray([1,0,2,3])]

    self.swapImageView.image = UIImage(cgImage: Matft.image.mfarray2cgimage(image))
}
```
Please refer to the example here.
Hi jjjkkkjjj,
Great work on bringing NumPy to Swift!!!
I am learning how to do inference with CoreML using Swift.
So far I have gotten the UIImage from an image picker, and I need to do preprocessing, e.g. resize, transpose, and normalize (mean=(0,0,0), std=(1,1,1)).
After hours and hours of searching, Swift has proved not to be a friendly language for image processing. Then I found your repo, which has all the amazing features I need.
So I think it would be very helpful if you could add a demo for this.
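For reference, the transpose-and-normalize part of such a pipeline could be sketched as below. This is a hypothetical helper, not Matft's official API: it assumes a Float MfArray in HWC layout (e.g. from a UIImage conversion), and that `MfArray` supports broadcasting element-wise `-`/`/` and `transpose(axes:)` as described in Matft's README; the resize step would still be done with CoreGraphics or vImage beforehand.

```swift
import Matft

// Hypothetical sketch: scale a [height, width, channels] Float image to [0, 1],
// normalize each channel with (x - mean) / std, and reorder HWC -> CHW for CoreML.
func preprocess(_ image: MfArray, mean: [Float], std: [Float]) -> MfArray {
    var x = image / Float(255)                  // pixel values to [0, 1]
    x = (x - MfArray(mean)) / MfArray(std)      // per-channel normalize (broadcast over H and W)
    return x.transpose(axes: [2, 0, 1])         // HWC -> CHW
}
```

With mean=(0,0,0) and std=(1,1,1) as in your example, the normalize step is a no-op and only the scaling and transpose matter.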
Cheers