grrrwaaa opened this issue 8 years ago
Hey Graham -- it's great that you posted this here. Any and all feedback along the way is most appreciated. Particularly with some of the Jitter stuff, I feel at times like I'm trying to find my way through the dark...
For this particular case, is there any possibility of prototyping what the input/output would be like with a Max abstraction?
I don't know how I'd do it with just regular Max objects. I think the best thing to do is look at how the cv.jit objects do it -- any of the blob-tracking externals would be fine; they all produce varying-size matrices. I presume the cv.jit author already figured out the most efficient/stable way to do this.
(Aside: I find the Jitter SDK really confusing, TBH. It would be awesome, for example, if there were a way to avoid having to create a separate definition for the Jitter object vs. the Max wrapper.)
Okay, now that I've followed my own advice and looked at the cv.jit.blobs.centroids code, I see that in the init there's simply a call to jit_mop_output_nolink to stop the output adapting to the input, and in the calc there's a call to reset the matrix info (which I presume means the output matrix is reallocated on every frame). So no magic here, and you can probably ignore this comment.
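For reference, here's a minimal sketch of that init-time piece in the classic C Jitter SDK, assuming a typical one-in/one-out MOP added as a class adornment. `s_my_class` is a placeholder for your registered Jitter class, not anything from cv.jit:

```c
// In the Jitter class init (classic C SDK): a 1-in/1-out MOP whose
// output is "nolinked", i.e. no longer forced to adapt to the input.
// s_my_class is a placeholder for the class returned by jit_class_new().
t_jit_object *mop = (t_jit_object *)jit_object_new(_jit_sym_jit_mop, 1, 1);
jit_mop_output_nolink(mop, 1);           // MOP i/o indices are 1-based
jit_class_addadornment(s_my_class, mop);
```

With the output nolinked, the MOP no longer forces that output to match the input's dim/type/planecount, so the calc method is free to call setinfo on it every time.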
Hope it's okay to post this here. This is something I've needed for a while, and even with access to the max repo it's hard to know how to do it properly. For example, a lot of computer-vision-type algorithms output varying-size matrices; cv.jit is full of them.
A simple test case would be an object that outputs a square matrix of random dim on every bang.
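To make that concrete, here's a rough sketch of the calc side under the same assumptions: classic C Jitter SDK, a 1-plane char output, and placeholder names (`jit_randsquare`, `t_jit_randsquare`) that aren't from any existing object. It resizes the nolinked output to a random square dim via setinfo, then fills it with noise:

```c
#include <stdlib.h>       // rand()
#include "jit.common.h"   // classic Jitter SDK

// Placeholder object struct -- only what's needed for this sketch.
typedef struct _jit_randsquare {
    t_object ob;
} t_jit_randsquare;

// Output a square char matrix of random dim on every calc.
t_jit_err jit_randsquare_matrix_calc(t_jit_randsquare *x, void *inputs, void *outputs)
{
    t_jit_err err = JIT_ERR_NONE;
    void *out_matrix = jit_object_method(outputs, _jit_sym_getindex, 0);

    if (x && out_matrix) {
        long out_savelock = (long)jit_object_method(out_matrix, _jit_sym_lock, 1);
        t_jit_matrix_info out_minfo;
        char *out_bp = NULL;
        long dim = 1 + (rand() % 64);   // random edge length, 1..64

        // Resize the (nolinked) output: setinfo reallocates as needed.
        jit_object_method(out_matrix, _jit_sym_getinfo, &out_minfo);
        out_minfo.type = _jit_sym_char;
        out_minfo.planecount = 1;
        out_minfo.dimcount = 2;
        out_minfo.dim[0] = dim;
        out_minfo.dim[1] = dim;
        jit_object_method(out_matrix, _jit_sym_setinfo, &out_minfo);

        // Re-query: dimstride and the data pointer are only valid
        // for the newly allocated matrix.
        jit_object_method(out_matrix, _jit_sym_getinfo, &out_minfo);
        jit_object_method(out_matrix, _jit_sym_getdata, &out_bp);

        if (out_bp) {
            for (long j = 0; j < dim; j++) {
                uchar *row = (uchar *)(out_bp + j * out_minfo.dimstride[1]);
                for (long i = 0; i < dim; i++)
                    row[i] = (uchar)(rand() & 0xff);  // fill with noise
            }
        }
        else {
            err = JIT_ERR_INVALID_OUTPUT;
        }

        jit_object_method(out_matrix, _jit_sym_lock, out_savelock);
    }
    return err;
}
```

The detail that matters here is re-calling getinfo after setinfo, since the strides and data pointer only describe the matrix after it has been resized; everything else is the usual MOP lock/getdata boilerplate.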