Closed gbrunner closed 4 years ago
Hi @gbrunner, thanks for your suggestions.
Batch support is implemented through the `vectorize` method in the `TemplateBaseDetector` class:
```python
def vectorize(self, **pixelBlocks):
    input_image = pixelBlocks['raster_pixels']
    _, height, width = input_image.shape
    # Split the incoming tile into a batch of sub-tiles:
    # batch has shape [batch, image_height, image_width, bands].
    batch, batch_height, batch_width = \
        prf_utils.tile_to_batch(input_image,
                                self.json_info['ImageHeight'],
                                self.json_info['ImageWidth'],
                                self.padding,
                                fixed_tile_size=False)
    # Run the model over the whole batch at once.
    batch_bounding_boxes, batch_scores, batch_classes = \
        self.inference(batch, **self.scalars)
    # Map the per-batch detections back to tile coordinates.
    return prf_utils.batch_detection_results_to_tile_results(
        batch_bounding_boxes,
        batch_scores,
        batch_classes,
        self.json_info['ImageHeight'],
        self.json_info['ImageWidth'],
        self.padding,
        batch_width)
```
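For illustration, here is a minimal, hypothetical sketch of what tiling a raster into a batch can look like. This is not the actual `prf_utils.tile_to_batch` implementation: it assumes the raster dimensions are exact multiples of the tile size and ignores padding and partial tiles, which the real helper handles.

```python
import numpy as np

def tile_to_batch_sketch(image, tile_height, tile_width):
    """Split a (bands, H, W) raster into a (batch, tile_h, tile_w, bands) array.

    Simplified sketch: assumes H and W divide evenly by the tile size
    and ignores padding/overlap, unlike the real prf_utils helper.
    """
    bands, height, width = image.shape
    rows = height // tile_height
    cols = width // tile_width
    # Move bands last, then carve the image into a grid of tiles.
    hwc = np.moveaxis(image, 0, -1)  # (H, W, bands)
    tiles = hwc.reshape(rows, tile_height, cols, tile_width, bands)
    batch = tiles.transpose(0, 2, 1, 3, 4).reshape(-1, tile_height, tile_width, bands)
    return batch, rows, cols

image = np.arange(3 * 4 * 6, dtype=np.float32).reshape(3, 4, 6)
batch, rows, cols = tile_to_batch_sketch(image, 2, 3)
print(batch.shape)  # (4, 2, 3, 3): a 2x2 grid of 2x3 tiles with 3 bands
```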
ObjectDetectionAPI's `ChildObjectDetector` class inherits this method, so the input to its `inference` method is already a batched image array: a 4-D NumPy array with shape [batch, image_height, image_width, bands].
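As a sketch of what an `inference` implementation consuming that batched input might look like, the toy detector below is a stand-in, not the real ObjectDetectionAPI code; it just returns one dummy detection per tile so the output shapes line up with the contract described above.

```python
import numpy as np

def inference_sketch(batch):
    """Toy batched 'inference' over a [batch, H, W, bands] array.

    Stand-in for a real ChildObjectDetector.inference: emits one dummy
    box, score, and class per tile so the shapes match the contract.
    """
    assert batch.ndim == 4, "expected [batch, image_height, image_width, bands]"
    n = batch.shape[0]
    # One detection per tile covering the whole tile, normalized coords.
    boxes = np.tile(np.array([[[0.0, 0.0, 1.0, 1.0]]]), (n, 1, 1))
    scores = np.full((n, 1), 0.9)
    classes = np.ones((n, 1), dtype=np.int64)
    return boxes, scores, classes

batch = np.zeros((8, 256, 256, 3), dtype=np.float32)
boxes, scores, classes = inference_sketch(batch)
print(boxes.shape, scores.shape, classes.shape)  # (8, 1, 4) (8, 1) (8, 1)
```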
Therefore, as far as I can see, no code modification is needed at present. Thanks again for taking the time to report this issue, and please let me know if you have any other suggestions.
@lingtangraster @Rob-Fletcher @dwilson1988 I was finding that TensorFlow/ObjectDetectionAPI.py wasn't running batches when I defined a `_batchsize`. I implemented that in the Python class and it appears to work and run really fast: about a minute to detect ~900 objects in Pro using a `_batchsize` of 100.
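The poster's actual patch is not included here. As a generic, hypothetical illustration of chunking inference calls by a batch size (the helper name and `infer_fn` signature are assumptions, not the repository's API):

```python
import numpy as np

def run_in_batches(batch, infer_fn, batch_size=100):
    """Run infer_fn over `batch` in chunks of at most `batch_size` tiles.

    Hypothetical helper, not the poster's actual patch. infer_fn takes a
    (n, H, W, bands) array and returns a list of per-tile results.
    """
    results = []
    for start in range(0, batch.shape[0], batch_size):
        chunk = batch[start:start + batch_size]
        results.extend(infer_fn(chunk))
    return results

# Toy infer_fn: records the size of the chunk each tile was processed in.
tiles = np.zeros((250, 64, 64, 3), dtype=np.float32)
out = run_in_batches(tiles, lambda c: [c.shape[0]] * c.shape[0], batch_size=100)
print(len(out))  # 250: chunks of 100, 100, and 50 tiles
```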
Let me know if this is helpful or completely off base.