Open wanqingrongruo opened 2 years ago
We should not handle masks/effects this way:
```swift
func renderToImage(bounds: Rect, inset: Double = 0, coloringMode: ColoringMode = .rgb) -> MImage? {
    let screenScale: CGFloat = MMainScreen()?.mScale ?? 1.0
    MGraphicsBeginImageContextWithOptions(CGSize(width: bounds.w + inset, height: bounds.h + inset), false, screenScale)
    let tempContext = MGraphicsGetCurrentContext()!
    // flip y-axis and leave space for the blur
    tempContext.translateBy(x: CGFloat(inset / 2 - bounds.x), y: CGFloat(bounds.h + inset / 2 + bounds.y))
    tempContext.scaleBy(x: 1, y: -1)
    directRender(in: tempContext, force: false, opacity: 1.0, coloringMode: coloringMode)
    let img = MGraphicsGetImageFromCurrentImageContext()
    MGraphicsEndImageContext()
    return img
}
```
When there are many effects and masks, these operations cause the UI to stall: rasterizing every masked subtree into an offscreen image is very expensive.
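One possible alternative, sketched below under assumptions (the `renderMasked` helper and the `drawContent` closure are hypothetical, not part of this library), is to clip the destination `CGContext` directly and draw into it, rather than allocating an intermediate bitmap per masked node. `saveGState`/`clip` and transparency layers are standard CoreGraphics APIs:

```swift
import CoreGraphics

// Hypothetical sketch: instead of rasterizing each masked subtree into an
// offscreen MImage, clip the destination context and draw straight into it.
func renderMasked(in context: CGContext,
                  maskPath: CGPath,
                  drawContent: (CGContext) -> Void) {
    context.saveGState()
    // Clip to the mask path; all subsequent drawing is limited to this region.
    context.addPath(maskPath)
    context.clip()
    // A transparency layer lets group opacity and blend effects composite
    // correctly without an explicitly managed per-node bitmap.
    context.beginTransparencyLayer(auxiliaryInfo: nil)
    drawContent(context)
    context.endTransparencyLayer()
    context.restoreGState()
}
```

This avoids the `MGraphicsBeginImageContextWithOptions` / `MGraphicsGetImageFromCurrentImageContext` round trip for the clipping case; blur-style effects that genuinely need a rasterized input would still require an offscreen pass, but it could be cached instead of regenerated per frame.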