ShaharMS / Vision

Cross framework, cross platform computer vision for Haxe

Added experimental support for drawing on Vision with my cornerContour library. #5

Closed · nanjizal closed 1 year ago

nanjizal commented 1 year ago

Added initial support for drawing on Vision with cornerContour (untested, but based on already-working code from other work): https://github.com/nanjizal/cornerContour/blob/main/src/cornerContour/drawTarget/VisionDraw.hx

A test may look a bit like this:

package cornerContourSamples.hxVision;

import vision.ds.Image;
import cornerContour.drawTarget.VisionDraw;
import cornerContourSamples.svg.All;
import cornerContour.Pen2D;

function main() new SvgExample();

class SvgExample {
    var g:Image;
    public function new() {
        g = new Image(1024, 1024);
        var pen = new Pen2D(0xFF0000FF); // pen colour (8 hex digits; exact channel layout depends on Pen2D)
        svgs(pen);                  // draw the sample SVGs (from cornerContourSamples.svg.All)
        rearrangeDrawData(pen, g);  // rasterise the pen data into the Image (from VisionDraw)
        // apply Vision filters to the image here
        // then draw the image to the target
    }
}

Any suggestions for a good demo setup with a filter applied?
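One possible shape for the "apply filters" step, as a sketch: it assumes Vision exposes filters as static functions on vision.Vision (for example Vision.grayscale and Vision.sobelEdgeDetection) that return the filtered Image; the exact names would need checking against the library.

import vision.Vision;
import vision.ds.Image;

// Hypothetical filter step for the demo: grayscale the drawing, then run
// edge detection on it. The filter names are assumptions about Vision's API.
function applyDemoFilters(image:Image):Image {
    image = Vision.grayscale(image);         // assumed to return the filtered Image
    return Vision.sobelEdgeDetection(image); // assumed edge-detection filter
}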

I am unsure whether the triangle implementation can be improved, but in theory it should work.

ShaharMS commented 1 year ago

I don't know if that is the way I imagined this to work. Ideally, someone would use cornerContour's drawing capabilities first, and only then apply the CV/image-processing stuff (with Vision).

It might still be better to just have an implementation in vision itself that allows casting from/to cornerContour's image data types, because that would make things a bit nicer to work with.
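For reference, Haxe abstracts already give a clean way to express that kind of casting. A minimal sketch, assuming a hypothetical cornerContour-side pixel buffer (here called ContourPixels, with width/height and an RGBA byte array; the real type would come from cornerContour), and assuming Image.setPixel takes an ARGB-style colour int:

import vision.ds.Image;

// Hypothetical cornerContour-side pixel buffer; a stand-in for whatever
// data type cornerContour would actually expose.
typedef ContourPixels = { width:Int, height:Int, rgba:haxe.io.Bytes };

// Abstract over vision.ds.Image that allows implicit casts from the
// cornerContour representation and back to a plain Image.
abstract ContourImage(Image) from Image to Image {
    @:from public static function fromContour(p:ContourPixels):ContourImage {
        var image = new Image(p.width, p.height);
        for (y in 0...p.height) for (x in 0...p.width) {
            var i = (y * p.width + x) * 4;
            // Repack RGBA bytes into an ARGB int (layout assumed).
            image.setPixel(x, y, (p.rgba.get(i + 3) << 24) | (p.rgba.get(i) << 16)
                | (p.rgba.get(i + 1) << 8) | p.rgba.get(i + 2));
        }
        return image;
    }
}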

nanjizal commented 1 year ago

I draw directly to a shader in most instances. So, for instance, with WebGL you can grab the pixels from the drawing context and pass them to a Vision Image. The use case above is more for drawing on targets that don't support triangle shading; see https://github.com/nanjizal/cornerContourSamples/blob/main/src/cornerContourSamples/hxWXhaxeui/SvgExample.hx
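On the JS target that pixel grab could look roughly like this (a sketch; Image.setPixel and the ARGB colour layout are assumptions about Vision's API):

import js.html.CanvasElement;
import js.html.webgl.RenderingContext;
import js.lib.Uint8Array;
import vision.ds.Image;

// Read the pixels back out of a WebGL drawing context and copy them into
// a Vision Image. setPixel and ARGB packing are assumed.
function webglToVision(canvas:CanvasElement, gl:RenderingContext):Image {
    var w = canvas.width;
    var h = canvas.height;
    var rgba = new Uint8Array(w * h * 4);
    gl.readPixels(0, 0, w, h, RenderingContext.RGBA, RenderingContext.UNSIGNED_BYTE, rgba);
    var image = new Image(w, h);
    for (y in 0...h) for (x in 0...w) {
        // WebGL rows come back bottom-up, so flip vertically while copying.
        var i = ((h - 1 - y) * w + x) * 4;
        image.setPixel(x, y, (rgba[i + 3] << 24) | (rgba[i] << 16)
            | (rgba[i + 1] << 8) | rgba[i + 2]);
    }
    return image;
}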

nanjizal commented 1 year ago

Or for things like printing.

nanjizal commented 1 year ago

So I don't provide an image. On WebGL I convert the array of triangle information to typed floats to use directly in the shader, and it's similar with Kha, Lime, and NME. With OpenFL and canvas I use draw commands for triangles; on Ceramic it's a mesh, and similar for Iron3D; then on Heaps I populate triangle vectors.
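On the JS target that conversion is essentially just wrapping the pen's float output in a typed array and uploading it (a sketch; triangleData is a placeholder for however Pen2D exposes its triangle array):

import js.html.webgl.RenderingContext;
import js.lib.Float32Array;

// Upload cornerContour triangle data (positions and colour components per
// vertex) to a WebGL buffer for the shader to consume.
function uploadTriangles(gl:RenderingContext, triangleData:Array<Float>):Void {
    var buffer = gl.createBuffer();
    gl.bindBuffer(RenderingContext.ARRAY_BUFFER, buffer);
    gl.bufferData(RenderingContext.ARRAY_BUFFER, new Float32Array(triangleData),
        RenderingContext.STATIC_DRAW);
}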

nanjizal commented 1 year ago

So I can draw triangles to pixels, pass triangle data to a shader, output an array of triangles / a mesh, or emit draw commands. With Vision, I would imagine either using the pixels directly, which needs this new code, or, if using a toolkit, drawing to an offscreen render canvas and then harvesting its data into Vision.
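The offscreen-canvas route could look roughly like this on the JS target (a sketch; Image.setPixel and the ARGB colour layout are again assumptions about Vision's API):

import js.html.CanvasElement;
import vision.ds.Image;

// Render with cornerContour's canvas target onto an offscreen canvas, then
// harvest its pixels into a Vision Image.
function harvestCanvas(canvas:CanvasElement):Image {
    var ctx = canvas.getContext2d();
    var data = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
    var image = new Image(canvas.width, canvas.height);
    for (y in 0...canvas.height) for (x in 0...canvas.width) {
        var i = (y * canvas.width + x) * 4;
        image.setPixel(x, y, (data[i + 3] << 24) | (data[i] << 16)
            | (data[i + 1] << 8) | data[i + 2]);
    }
    return image;
}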

nanjizal commented 1 year ago

I think that once cornerContour is rendered with a shader in a toolkit, it's beyond my library to care whether Vision is used... but drawing directly to a Vision Image is within scope?

nanjizal commented 1 year ago

And being able to have a filter within the cornerContour setup would give something useful.

nanjizal commented 1 year ago

So, for instance, drawing with HTML canvas is more limited in triangle corner colour control: https://github.com/nanjizal/cornerContourSamples/blob/main/src/cornerContourSamples/hxCanvas/SvgExample.hx You could then grab the canvas pixels and pass them to Vision.

nanjizal commented 1 year ago

Having 3-colour-corner triangle drawing in Vision is hard... heavy. It's viable to implement, but likely extremely slow.
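For reference, the straightforward software version is per-pixel barycentric interpolation, which is exactly what makes it heavy: every pixel in the triangle's bounding box needs three weights and a three-way colour blend. A sketch, assuming Image exposes width, height, and setPixel, and that colours are plain ARGB ints:

import vision.ds.Image;

// Software rasterisation of a triangle with a different colour at each
// corner, interpolated per pixel with barycentric weights.
function fillTriangle3Colour(image:Image,
        ax:Float, ay:Float, bx:Float, by:Float, cx:Float, cy:Float,
        colA:Int, colB:Int, colC:Int):Void {
    var area = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    if (area == 0) return; // degenerate triangle
    var minX = Std.int(Math.max(0, Math.floor(Math.min(ax, Math.min(bx, cx)))));
    var maxX = Std.int(Math.min(image.width, Math.ceil(Math.max(ax, Math.max(bx, cx)))));
    var minY = Std.int(Math.max(0, Math.floor(Math.min(ay, Math.min(by, cy)))));
    var maxY = Std.int(Math.min(image.height, Math.ceil(Math.max(ay, Math.max(by, cy)))));
    for (y in minY...maxY) for (x in minX...maxX) {
        // Barycentric weights of (x, y) with respect to corners A, B, C.
        var w0 = ((bx - x) * (cy - y) - (by - y) * (cx - x)) / area;
        var w1 = ((cx - x) * (ay - y) - (cy - y) * (ax - x)) / area;
        var w2 = 1 - w0 - w1;
        if (w0 < 0 || w1 < 0 || w2 < 0) continue; // pixel is outside the triangle
        // Blend each ARGB channel with the three weights.
        function ch(shift:Int):Int
            return Std.int(w0 * ((colA >> shift) & 0xFF)
                         + w1 * ((colB >> shift) & 0xFF)
                         + w2 * ((colC >> shift) & 0xFF));
        image.setPixel(x, y, (ch(24) << 24) | (ch(16) << 16) | (ch(8) << 8) | ch(0));
    }
}

The per-pixel divisions and channel blends are why this is so much slower than handing the triangle data to a GPU.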

nanjizal commented 1 year ago

I presume you could implement some filters directly in a shader as well, which would be lighter, but I realise that's outside your scope currently.

ShaharMS commented 1 year ago

A PR would be optimal here, but you also got me thinking whether I'd rather have a byte array as the image: turn it into a class and just make it a giant abstraction over that byte array.
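A rough sketch of that idea, just to make it concrete (names are illustrative, not Vision's actual API): an abstract over haxe.io.Bytes with a small width/height header followed by 4 bytes per pixel.

import haxe.io.Bytes;

// Illustrative "giant abstraction over a byte array": an 8-byte header
// holding width and height, then ARGB pixels stored row-major.
abstract ByteImage(Bytes) {
    public function new(width:Int, height:Int) {
        this = Bytes.alloc(8 + width * height * 4);
        this.setInt32(0, width);
        this.setInt32(4, height);
    }

    public inline function width():Int return this.getInt32(0);
    public inline function height():Int return this.getInt32(4);

    public inline function getPixel(x:Int, y:Int):Int
        return this.getInt32(8 + (y * this.getInt32(0) + x) * 4);

    public inline function setPixel(x:Int, y:Int, argb:Int):Void
        this.setInt32(8 + (y * this.getInt32(0) + x) * 4, argb);
}

The header trick is there because an abstract can't carry extra fields of its own; everything has to live in the underlying bytes.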

nanjizal commented 1 year ago

hxPixels used a byte array, I think. Maybe talk with the author; he is into algorithms.