Essentially, the way this would work is that we'd have a canvas you can draw on, using different colors for different priorities / areas. When you're done, we save that image somewhere for later re-display, then automatically generate a bounding area that encompasses everything you drew, which is what the cropping step actually uses.
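The auto-generated-area step above could be sketched roughly like this (a minimal sketch, not a committed design: it assumes the drawing is saved as an RGBA array where alpha > 0 marks drawn pixels, and the function names `bounding_box` / `boxes_by_color` are made up for illustration):

```python
import numpy as np


def bounding_box(mask: np.ndarray):
    """Return (left, top, right, bottom) enclosing all True pixels in a
    2-D boolean mask, or None if nothing was drawn."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    if not rows.any():
        return None
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    # Right/bottom are exclusive, matching typical crop-box conventions.
    return (int(left), int(top), int(right) + 1, int(bottom) + 1)


def boxes_by_color(image: np.ndarray):
    """Map each drawn color (priority / area) to its enclosing box.

    `image` is an H x W x 4 RGBA array; alpha > 0 marks drawn pixels.
    """
    drawn = image[..., 3] > 0
    boxes = {}
    for color in np.unique(image[drawn][:, :3], axis=0):
        mask = drawn & np.all(image[..., :3] == color, axis=-1)
        boxes[tuple(int(c) for c in color)] = bounding_box(mask)
    return boxes
```

So a red scribble for "high priority" and a blue one for "low" would each collapse to their own rectangle, and the saved image itself stays untouched for later re-display.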
I suppose alternatively we could modify the area-detection algorithm itself to accept non-rectangular areas, but I don't think we'd actually get much better results from that, and it would be harder to implement. Still, it's a route we could choose to go down later, especially if we save the drawn images.