dceejay / tfjs-coco-ssd

Node-RED node for tensorflowjs coco ssd
Apache License 2.0

Cropping or JIMP image input? #7

Closed: crxporter closed this 4 years ago

crxporter commented 4 years ago

I'm using this flow for my home automation system - it's amazing. Very fast and just accurate enough.

I've run into a problem: some areas of my house are a bit far from the camera, so I want to put a cropped image through the tfjs-coco-ssd node. I'm using image tools for cropping, but it's quite slow to output a buffer object.

If I could either send a JIMP object into tfjs OR crop within tfjs, that would be really cool. Any thoughts?

Example: my full image has the kitchen, dining room, and part of the living room. My cropped image shows just the kitchen, so I want to run tfjs only on that area.
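
For reference, the same crop done directly with jimp in a function node looks roughly like this (a minimal sketch assuming the classic jimp 0.x API is exposed on global context; the crop coordinates are placeholders for the kitchen region):

const Jimp = global.get('jimp');  // jimp exposed via functionGlobalContext in settings.js

// msg.payload: JPEG buffer (or URL) of the full frame
const img = await Jimp.read(msg.payload);
img.crop(0, 0, 640, 480);  // left, top, width, height - placeholder values

// Re-encoding to JPEG is the slow step on a Pi
msg.payload = await img.getBufferAsync(Jimp.MIME_JPEG);

return msg;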

dceejay commented 4 years ago

I'm not about to add jimp into the tfjs node when there is a perfectly good set of jimp nodes already. The whole point of Node-RED is that a node does its thing - and tfjs is not for image manipulation... that should be done earlier in the pipeline.

crxporter commented 4 years ago

Fair point.

The issue is that the output from the jimp nodes is either a very fast jimp object or a very slow JPEG buffer. The tfjs node doesn't accept a jimp object as input, so I'm stuck paying for that slowdown when converting to a JPEG buffer.

The slowdown on a Pi 4 is about 2 seconds to make that JPEG buffer... I'm mostly looking for a good idea of how to speed it up.

PS: I forgot to thank you for the awesome node. It is really fantastic. Great work.

ristomatti commented 4 years ago

@crxporter Please post an update if you figure out a solution. I'm also on my second day of setting up a flow that uses image tools to resize first. I'm doing this on a Jetson Nano with GPU acceleration (tediously hacked into working), and I'm seeing the tfjs-coco-ssd node take around 350-400 ms to process the image, which is 100-300 ms less than resizing the incoming image takes.

But I agree, an awesome node!

crxporter commented 4 years ago

No changes or updates on this. Jimp is still the slow link when outputting a jpg instead of a jimp object.

Closing for now, I'll be back if anything changes or if I have new ideas. Thanks again!

ristomatti commented 4 years ago

@crxporter You can use the npm library sharp for faster image manipulation, for example. It uses native dependencies, so it's not as flexible as JIMP, but it works fine on my Raspberry Pi 4.

Just install it to your .node-red dir and add it to your global context in settings.js. Function node example to resize an image:

const sharp = global.get('sharp');  // sharp exposed via functionGlobalContext in settings.js

// Resize the incoming image buffer to 416px wide and re-encode it
const resized = await sharp(msg.payload)
  .resize(416)
  .toBuffer();

msg.payload = resized;

return msg;
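
The corresponding settings.js entry could look something like this (a sketch, assuming sharp was installed into the .node-red directory with npm install sharp):

// in ~/.node-red/settings.js
functionGlobalContext: {
  sharp: require('sharp')
},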

I was also able to do the resizing within a pool of worker threads, but that's a bit more involved, so I won't go into it unless asked.

dceejay commented 4 years ago

The tfjs library does not accept jimp images - https://js.tensorflow.org/api_node/1.2.7/#node.decodeImage - so at some point or other you will need to convert if you want to use tfjs... so there's no need to do it in this node. It won't be any faster.
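
For reference, the tfjs-node entry point linked above only decodes encoded image bytes, so whatever does the cropping has to hand over a buffer first; a rough sketch (the file path is a placeholder):

const fs = require('fs');
const tf = require('@tensorflow/tfjs-node');

// decodeImage works on encoded JPEG/PNG bytes, not on jimp objects
const jpegBuffer = fs.readFileSync('frame.jpg');
const imageTensor = tf.node.decodeImage(jpegBuffer);
// ...hand imageTensor (or the original buffer) to the coco-ssd model...
imageTensor.dispose();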

crxporter commented 4 years ago

@dceejay this makes sense. I don't necessarily need to use jimp, just need a way to crop the images going into tfjs.

The jimp crop node is easy but not necessarily the fastest. I just want the quickest way to run tfjs on a cropped portion of the input image.

dceejay commented 4 years ago

So where are the images coming from? Does that produce jimp or anything useful?

crxporter commented 4 years ago

Images are JPEG links on my LAN.

Currently I have jimp loading an image from https://(local jpeg). Jimp crops and sends the new JPEG image into the tfjs node.
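
With the sharp approach suggested above, the crop itself would be a single extract() call in a function node; a sketch with placeholder coordinates for the kitchen region:

const sharp = global.get('sharp');

// Crop the incoming JPEG buffer before it reaches the tfjs node
msg.payload = await sharp(msg.payload)
  .extract({ left: 0, top: 0, width: 800, height: 600 })  // placeholder region
  .toBuffer();

return msg;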

dahlheim2 commented 2 years ago

I've been using ImageMagick convert -crop on my RPi 4 for this reason and it seems to be quite fast. I use a shell script to run it, output to a file, and point the tfjs node to the file via path/name. Not sure if that helps, but it's working quickly here.

ristomatti commented 2 years ago

@dahlheim2 I highly recommend trying out sharp, as I suggested earlier in this thread. It is what the Next.js framework uses for its image optimization, for example.

Some benchmarks can be found with a quick search.

dahlheim2 commented 2 years ago

thanks for the lead! looks promising.

Unfortunately, this old Unix sysadmin from the 1980s will need to figure out your description of "add it to your global context in settings.js". Not sure exactly what to do, but I will continue to search for leads...

ristomatti commented 2 years ago

@dahlheim2 It's actually much easier nowadays. Just add the library to a function node. Something like this:

[screenshots of the function node module configuration]
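
In text form, the function node ends up containing roughly this (a sketch assuming sharp was added as a module on the function node's setup tab, which requires Node-RED 1.3+ with functionExternalModules enabled):

// `sharp` comes from the module entry on the setup tab, so no global.get() is needed
msg.payload = await sharp(msg.payload)
  .resize(416)
  .toBuffer();

return msg;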

P.S. Sorry @dceejay for creating an ad hoc forum out of this old issue thread! I just had this still on my GitHub watch list and could not resist. :grin:

dahlheim2 commented 2 years ago

Thank you VERY much, I look forward to implementing this. Thanks again also for the node itself; it has been great fun and very easy to implement and play with. It has made my home automation system 80% more effective in terms of motion recognition and identification.

Also, sorry if I've turned this into an undesired forum thread. If it's any consolation, you've helped out one guy a bunch!

ristomatti commented 2 years ago

@dahlheim2 It's worth noting that the library uses native bindings to libvips and, depending on the situation, might require an OS package such as build-essential to be installed. It's more than likely, though, that it will just download a precompiled binary from npm.