Use the superpixel version of the network (which is slower) and create a binary image from the network output, in which the regions whose superpixels are classified as fire are > 0 and everything else is 0 (logical operations will be most efficient). Then run OpenCV's boundingRect() on this image to get the rectangle coordinates of the fire.
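A minimal sketch of this approach, assuming you already have a single-channel mask in which the pixels of fire-classified superpixels are non-zero (the function and variable names here are illustrative, not part of the repository's API):

```python
import cv2
import numpy as np

def fire_bounding_box(fire_mask):
    """Return an (x, y, w, h) rectangle around the fire regions,
    or None if no superpixel was flagged as fire.

    fire_mask: uint8 single-channel image, non-zero where the network
    classified a superpixel as fire, 0 elsewhere (an assumed input).
    """
    # Force the mask to a strictly binary image (0 or 255).
    binary = np.where(fire_mask > 0, 255, 0).astype(np.uint8)

    if cv2.countNonZero(binary) == 0:
        return None

    # Gather all non-zero pixel coordinates and fit one axis-aligned
    # rectangle around them with boundingRect().
    points = cv2.findNonZero(binary)
    return cv2.boundingRect(points)


# Example usage on a video frame:
# box = fire_bounding_box(mask)
# if box is not None:
#     x, y, w, h = box
#     cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
```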
Alternatively, just take the min/max x and y over the set of superpixels returned as fire.
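Likewise, a sketch of this alternative, assuming the fire superpixels are available as a list of OpenCV-style point arrays (one per superpixel); again the names are illustrative:

```python
import numpy as np

def fire_extent_from_superpixels(fire_superpixels):
    """Given a list of point arrays, one per fire-classified superpixel
    (each of shape (N, 1, 2), as OpenCV contour functions return),
    return (x_min, y_min, x_max, y_max) covering all of them."""
    if not fire_superpixels:
        return None

    # Stack every point from every fire superpixel into one (M, 2) array.
    points = np.vstack([sp.reshape(-1, 2) for sp in fire_superpixels])

    x_min, y_min = points.min(axis=0)
    x_max, y_max = points.max(axis=0)
    return int(x_min), int(y_min), int(x_max), int(y_max)
```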
If I want to annotate the fire position and get its coordinates from the video using FireNet or InceptionV1-OnFire, what should I do? Thanks!