neilenns / node-deepstackai-trigger

Detects motion using Deepstack AI and calls registered triggers based on trigger rules.
MIT License

How to acquire an image from an API for processing #160

Closed Viksie closed 4 years ago

Viksie commented 4 years ago

With Nx Witness a "thumbnail" image can be acquired via the following HTTP request to the VMS:

http://IP:Port/ec2/cameraThumbnail?cameraId=00011-1834-2342-2342-23421356

How would one, though, go about saving the image to a folder, or have it accepted directly by the AI-Trigger-Tool?

neilenns commented 4 years ago

I'm not familiar with the Nx Witness system, so I downloaded the trial to look around. I couldn't ever get it to authenticate with my cameras, but I did explore the settings.

Unfortunately I didn't see an option anywhere to export an image when motion is detected on a camera. There were lots of other options for things to do on motion, but not that. It looks like it won't work with this system.

Viksie commented 4 years ago

That's interesting, as in my experience Nx Witness autodetects all sorts of cameras quite easily if they are on the same subnet and the default password hasn't been changed.

To tell Nx Witness about any password changes on cameras, there's an "Edit Credentials..." button at the bottom of the General tab, reached by right clicking the camera > Camera Settings > General.

I did find Nx a bit "hindered" on settings until I played with it more. That said, back to hopefully getting an image for analysis.

Nx Witness's REST API can be referenced via http://<server>:7001/static/api.xml

"Event Rules" are created/edited either by navigating to System Administration > Event Rules > +Add or by right clicking a camera and selecting Camera Rules...

For the "Motion on Camera" event, apart from sending an email (which includes a snapshot from the camera(s)), the only other viable "Action" seems to be "Do HTTP request" with the API call http://<server>:7001/ec2/cameraThumbnail?cameraId=017d5-020c-69-8ccddc

** the cameraId can be queried via yet another API call, or acquired by right clicking a camera > Camera Settings > General tab and expanding the "More Info" dropdown to reveal that camera's Id.

The burning question is thus whether it's somehow possible to write the image returned by an HTTP call directly to a file, via the call itself?

Or could ai-trigger accept an HTTP call which then triggers it to acquire the image from Nx Witness? (ai-trigger then parsing the image from the HTTP call)

neilenns commented 4 years ago

Yeah, I couldn't get it to connect at all. I tried many times with the username/password and it just wouldn't work. Don't know why, they are normal Hikvision cameras.

The trigger system would have to accept incoming requests, take the image, save it temporarily locally, then run it through all the detection logic. Honestly I'm unlikely to ever take the time to include that capability in the system: it's an added layer of setup complexity for users, and exposing inbound ports on Docker containers is an endless stream of support issues when it doesn't work. Plus you are the only person I've heard of who uses Nx Witness :) This really was geared towards BlueIris users (the original project name was node-blueiris-trigger!).

Viksie commented 4 years ago

If only BlueIris were available to run on Linux...

Viksie commented 4 years ago

In the interim, I'll wait for Nx Witness to provide a feature to extract an image to the local file system.

Do you run your BI in a VM? Either way, what resources do you allocate to it?

neilenns commented 4 years ago

I run BlueIris directly on Windows on a spare PC I had.

Viksie commented 4 years ago

Neil,

I've got working Node.js code that fetches 5 images upon being triggered by an HTTP call. In this application the images can then be placed in the node-deepstackai-trigger input folder for analysis and alerting.

The module is called "fetch", for lack of a more creative name.

With DeepStack defaulting to port 5000, I figured port 4999 would be a good starting point, as this trigger fires before any image analysis. By exposing port 4999 from the Docker container, it'll be available to any external caller that wants to acquire images for analysis.

Though it's only 33 lines of code, it's been 8 days of relentless noob chiseling ;) It still needs the following smoothed down:

  1. Call the Nx server and save each camera as an object(?). The only reason for this is to map between cameraName and cameraId, since cameraId is required for the API call that acquires the still images. Users would also surely prefer referring to cameraName rather than cameraId, i.e. Driveway- and not AE1258-5234...12348-

  2. Acquire the cameraId from the initiating HTTP call, so as to know which camera to work with

  3. Config to include setting:

    • nx_User = test
    • nx_UserPass = test
    • nx_Server = 192.168.0.10 or FQDN
    • nx_ServerPort = 7001
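As a sketch, those settings could sit in a config object like the following (the key names are illustrative only, not a final schema):

```javascript
// Hypothetical fetch.js configuration (key names are illustrative only).
const config = {
  nx_User: 'test',
  nx_UserPass: 'test',
  nx_Server: '192.168.0.10', // or an FQDN
  nx_ServerPort: 7001,
};
```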

The HTTP call to fetch.js being:

http://<host>:4999/?cameraId=AE1234-0000-4321 or http://<host>:4999/?cameraName=Driveway
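Pulling either parameter out of the incoming request is simple with Node's WHATWG URL API. A sketch (the dummy base URL is only needed because req.url is path-relative):

```javascript
// Extract cameraId or cameraName from the request URL's query string.
// req.url is path-relative, so a dummy base is supplied for parsing.
function parseCameraQuery(reqUrl) {
  const params = new URL(reqUrl, 'http://localhost:4999').searchParams;
  return {
    cameraId: params.get('cameraId'),
    cameraName: params.get('cameraName'),
  };
}
```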

The HTTP calls to Nx Server being:

To get JSON data to map between cameraId and cameraName:

http://nx_User:nx_UserPass@nx_Server:nx_ServerPort/ec2/getCamerasEx

To get still images:

http://nx_User:nx_UserPass@nx_Server:nx_ServerPort/ec2/cameraThumbnail?cameraId=AE0000-0000-0000-0000-800cdcd&time=now


It's noob code and sanitization is surely required. Would you, though, accept it being integrated into node-deepstackai-trigger?

For your review:

// fetch.js -- listens on port 4999 and, on an incoming alert, pulls five
// snapshot images from the Nx Witness server so they can be dropped into
// the node-deepstackai-trigger input folder.
const http = require('http');
const Stream = require('stream').Transform;
const fs = require('fs');

const server = http.createServer((req, res) => {
  if (req.url === '/') {
    res.write('Alert received...');
    res.end();
    console.log('Motion alert : ');

    // Needs to be assembled from config variables instead of hardcoded.
    const url =
      'http://test:test@<ServerIP>:7001/ec2/cameraThumbnail?cameraId=017274d5-18fe-020c-65d9-8ccdb4b13ddc&time=now';

    // Grab five snapshots in quick succession.
    for (let index = 0; index < 5; index++) {
      http.request(url, (response) => {
        const data = new Stream();
        const folder = './images/';
        const cameraName = 'Driveway';
        const now = new Date();
        const thumbNail =
          `${folder}${cameraName}-${now.getHours()}_${now.getMinutes()}.` +
          `${now.getSeconds()}.${now.getMilliseconds()}.jpg`;

        // Accumulate the image bytes as they stream in.
        response.on('data', (chunk) => {
          data.push(chunk);
        });

        // Once the response completes, write the image to disk
        // (eventually the /aiinput folder).
        response.on('end', () => {
          fs.writeFile(thumbNail, data.read(), (err) => {
            if (err) console.log(`Image ${thumbNail} error : ${err}`);
            else console.log(`Image ${thumbNail} acquired`);
          });
        });
      }).end();
    }
  }
});

server.listen(4999);
console.log('Listening on port 4999...');

neilenns commented 4 years ago

Hey cool! It doesn't matter that it took 8 days, you hacked at it and proved the concept is possible. Nice job!

While you were hacking I added a feature to annotate images with the predictions received. It may not seem related, but it required adding several supporting pieces, all of which are directly useful to what you describe above.

What I'll do is open a new issue with the feature request to support Nx Witness-style image acquisition and then write some less noob code to handle it. I'm on vacation for a bit so it'll be a nice side project.

neilenns commented 4 years ago

#260 tracks the feature request. With the other changes I've made in the last week it should be pretty easy, and it can be more general purpose than just Nx Witness. Once I get this big settings change done I'll look into it. Stay tuned!

Viksie commented 4 years ago

Sweet !

neilenns commented 4 years ago

I made quite a bit of progress; it's basically working. I'll have a tagged Docker image for you to try later today, so our timezone difference can work in our favour and you can play with it while I'm asleep.

To get a head start, update to the latest build, create your settings.json file, and confirm that it works like your current setup does.