Currently, for each image frame, the Metal file is run c+1 times, where c is the number of classes: the image is processed once for each class, and one more time for all classes together, producing a separate output image on each pass.
If we wrote a custom Metal kernel that produces all of these images (per-class and overall) in a single pass, we would cut the per-frame GPU work from c+1 dispatches to one, which should speed up the app considerably.
(This task runs for every single frame that is passed to the model.)
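As a rough sketch of the idea only (not based on the existing shader; the kernel name, the texture layout, and the use of a `texture2d_array` for per-class masks are all assumptions), a single-pass kernel could read the label map once per pixel and write every per-class mask plus the combined image in the same dispatch:

```metal
#include <metal_stdlib>
using namespace metal;

// Hypothetical single-pass kernel: reads the class-label map once and writes
// one mask per class into a texture-array slice, plus a combined image.
kernel void segmentationMasksSinglePass(
    texture2d<uint, access::read>        labelMap    [[texture(0)]],
    texture2d_array<float, access::write> classMasks [[texture(1)]],
    texture2d<float, access::write>       combined   [[texture(2)]],
    constant uint                        &classCount [[buffer(0)]],
    uint2 gid [[thread_position_in_grid]])
{
    if (gid.x >= labelMap.get_width() || gid.y >= labelMap.get_height()) {
        return;
    }
    uint label = labelMap.read(gid).r;

    // One write per class slice: 1.0 where the pixel belongs to that class, 0.0 otherwise.
    for (uint c = 0; c < classCount; ++c) {
        float value = (label == c) ? 1.0 : 0.0;
        classMasks.write(float4(value, value, value, 1.0), gid, c);
    }

    // Combined output: here just a normalized label value as a placeholder;
    // the real shader would presumably look up the class color instead.
    float v = float(label) / max(float(classCount - 1), 1.0);
    combined.write(float4(v, v, v, 1.0), gid);
}
```

On the Swift side this would mean binding one output texture array (with c slices) plus the combined texture and issuing a single compute dispatch per frame, instead of looping over the classes and dispatching c+1 times.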
https://github.com/TaskarCenterAtUW/iOSPointMapper/blob/b93a57e211ea4e2f49269f466ca6e6a96948a867/IOSAccessAssessment/Segmentation/SegmentationViewController.swift#L98-L133