samtap / fang-hacks

Collection of modifications for the XiaoFang WiFi Camera
1.67k stars, 340 forks

single JPG or PNG pictures #87

Open mhdawson opened 7 years ago

mhdawson commented 7 years ago

I've read a few comments about people wanting to have the camera capture single JPGs or PNGs and push them to a remote server, which I'd like as well.

I think what we'd want is this project: https://github.com/fsphil/fswebcam. I've used it on a raspberry pi and it does the capture that we want. It supports V4L or V4L2 compatible devices and from reading through the threads it seems XiaoFang WiFi Camera is V4L2 so I thought I'd give porting it over a try.

I have managed to manually hack the build scripts and the dependency libgd (and libfreetype2, libpng, libjpeg, and libz, which libgd requires) to build using the arm cross compiler on Ubuntu. I now have an executable which runs on the camera (build, scp to camera, run through an ssh shell on the camera).

Unfortunately I can't get it working yet. It gets as far as wanting to take the picture:

/media/mmcblk0p2/data/bin # ./fswebcam5 --verbose --device /dev/video0
main,1609: gd has no fontconfig support
--- Opening /dev/video0...
Trying source module v4l2...
/dev/video0 opened.
src_v4l2_get_capability,87: /dev/video0 information:
src_v4l2_get_capability,88: cap.driver: "snx_isp"
src_v4l2_get_capability,89: cap.card: "isp Camera"
src_v4l2_get_capability,90: cap.bus_info: ""
src_v4l2_get_capability,91: cap.capabilities=0x04000001
src_v4l2_get_capability,92: - VIDEO_CAPTURE
src_v4l2_get_capability,103: - STREAMING
No input was specified, using the first.
src_v4l2_set_input,181: /dev/video0: Input 0 information:
src_v4l2_set_input,182: name = "Camera"
src_v4l2_set_input,183: type = 00000002
src_v4l2_set_input,185: - CAMERA
src_v4l2_set_input,186: audioset = 00000000
src_v4l2_set_input,187: tuner = 00000000
src_v4l2_set_input,188: status = 00000000
src_v4l2_set_pix_format,541: Device offers the following V4L2 pixel formats:
src_v4l2_set_pix_format,554: 0: [0x30323453] 'S420' (S420)
src_v4l2_set_pix_format,554: 1: [0x30314742] 'BG10' (SBGGR10)
src_v4l2_set_pix_format,554: 2: [0x31384142] 'BA81' (SBGGR8)
Using palette BAYER
src_v4l2_set_mmap,693: mmap information:
src_v4l2_set_mmap,694: frames=4
--- Capturing frame...
VIDIOC_DQBUF: Inappropriate ioctl for device
No frames captured.

NOTE: before you can get this far, you'll have to disable the RTSP server through the script management option in the hacks UI.

The docs for V4L2 seem to say that cameras with V4L2 support should handle one of the following:

Read/write() requires that the device report the V4L2_CAP_READWRITE capability, which it does not, and my experiments to read even 1 byte from the device with read() always returned -1.

You are supposed to be able to tell which of the streaming methods are supported by requesting buffers with VIDIOC_REQBUFS for the corresponding streaming type; it should only succeed if the method is supported. Requesting buffers for the DMA buffer type fails with 'Invalid argument'.

Requests for buffers for memory mapping and user pointers work, but later calls to enqueue these buffers for capture fail. When I try to query the buffers with VIDIOC_QUERYBUF to configure the memory map, or enqueue them with VIDIOC_QBUF, the ioctl just fails, telling me the ioctl is inappropriate for the device.

So I'm stuck, because it looks like none of the methods for getting the picture from the camera are working. On the other hand, I know it's possible, since the RTSP server does it.

My question is whether anybody who has worked with the RTSP server knows which method it uses to get picture data from the camera, and if so, can they point me to the code in the RTSP server that implements that part? That might help me figure out what I'm doing wrong.

samtap commented 7 years ago

I ran into the same problems, for example with v4l2rtspserver and ffmpeg builds that are included in the test folder. I didn't get any of the V4L2 features to work.

If you take a look at the original rtspserver sources, you'll soon notice lots of snx_* calls everywhere. Some are just wrappers around a couple of default V4L2 ioctl calls, others do all sorts of low-level stuff. It seems the drivers/API are loosely based on V4L2 interfaces, but they're not really compliant.

mhdawson commented 7 years ago

Where is the source for the ported rtspserver? If I can look at that maybe I can see how they are pulling out the picture and do the same thing.

mhdawson commented 7 years ago

Maybe this is the source? https://github.com/haoweilo/RTSP_stream_server

mhdawson commented 7 years ago

This link seems to have some more specific info around the interface: https://github.com/fritz-smh/yi-hack/issues/118

mhdawson commented 7 years ago

Found this in the SDK which might do the trick, based on the description of what it does: package\app\example.tgz\example.tar\example\src\ipc_func\snapshot\. Going to try to see if I can build it with the SDK.

mhdawson commented 7 years ago

Ok, I have it working. You need to specify -m, and there are a number of other options. For example:

snx_snapshot -m -q 10 -n 1

means JPEG quality 10, and it will take 1 picture when requested.

By default it waits for you to touch /tmp/snapshot_en. When you do, it grabs the requested picture(s), writes them to /tmp, and then adds the name of each snapshot to /tmp/snaplist.txt.
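This trigger protocol can be simulated end to end in plain shell. The sketch below is illustrative only: a background subshell stands in for snx_snapshot, and all paths live under a temporary directory rather than the real /tmp files.

```shell
# Simulation of the snapshot_en / snaplist.txt handshake described above.
# The background subshell is a stand-in for snx_snapshot; the temp-dir
# paths are stand-ins for /tmp/snapshot_en and /tmp/snaplist.txt.
TMP=$(mktemp -d)

# "snx_snapshot": wait for the trigger file, write a picture,
# then record its name in the snaplist.
(
  while [ ! -f "$TMP/snapshot_en" ]; do sleep 0.1; done
  echo 'fake jpeg' > "$TMP/pic_001.jpg"
  echo "$TMP/pic_001.jpg" > "$TMP/snaplist.txt"
) &

touch "$TMP/snapshot_en"                            # request a snapshot
while [ ! -f "$TMP/snaplist.txt" ]; do sleep 0.1; done
wait                                                # reap the helper
cat "$TMP/snaplist.txt"                             # path of the new picture
```

The same busy-wait pattern appears in the capture scripts later in this thread, just against the real /tmp paths.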

mhdawson commented 7 years ago

What I really want is for it to listen on mqtt for the request and then send the picture off to a remote server, but this is a good start, as it is the key component: capturing a single jpeg on request. I might be able to pull the code from the example and integrate it with fswebcam, but I don't see the point, as the example pretty much does what I want as-is in terms of taking a picture.

mhdawson commented 7 years ago

In case anybody else wants to build it, these are the steps I ended up having to do:

cp /home/user1/SN986/snx_sdk/middleware/video/middleware/lib/* /home/user1/SN986/snx_sdk/middleware/_install/lib
cp /home/user1/SN986/snx_sdk/middleware/rate_ctl/middleware/lib/* /home/user1/SN986/snx_sdk/middleware/_install/lib
samtap commented 7 years ago

Great job, but does it work while snx_rtsp_server is running?

mhdawson commented 7 years ago

No, I had to disable snx_rtsp_server, as only one of the two can have the device open at the same time. We might be able to combine the code to allow a snapshot mid-stream, but I'm guessing that won't be trivial.

I'm going to do some scripting work, possibly try to compile in an mqtt client to handle the external request/response flow. Even if you have to switch between rtsp and snapshots I'm thinking it should be useful. In my use case I want to start sending pictures at 5 second intervals after an alarm is triggered as opposed to sending a video stream.

mhdawson commented 7 years ago

A simple script along these lines can trigger a capture and scp it to a target (I do know that binaries etc. should go somewhere other than .../data/bin, but that is where I'm experimenting):

export HOME=/media/mmcblk0p2/data/bin
rm /tmp/snaplist.txt
touch /tmp/snapshot_en
while  [  ! -f /tmp/snaplist.txt ]
  do
     usleep 100000
  done
NEWFILE=`cat /tmp/snaplist.txt`
mv $NEWFILE $1

scp -i ./bkey.txt -P 20022 $1 ubuntu@XXXXXX:pictures/$1
rm $1
mhdawson commented 7 years ago

Have it working with https://github.com/mhdawson/PIWebcamServer.git (SN986 branch) to allow a request for a picture to be made through mqtt; when the request comes in, PIWC triggers "takepicture.sh". In this case I replaced the original content of takepicture.sh with what I showed in the last post (with XXXXXX replaced with the proper host, of course).

Something similar could be done through a POST to the webserver which is part of the hack, but mqtt is nice because it reaches out to the server, as opposed to you having to allow an incoming http request through your router.

@samtap, I'm probably at the point where I should do a bit of cleanup, and then I'd be interested to know if you think support for snx_snapshot can/should be added to the base hacks, and if so in what form.

roger- commented 7 years ago

FYI you should be able to use v4l2copy and v4l2loopback to allow multiple applications to read from a video device.

samtap commented 7 years ago

@roger- I don't think so. I've looked into the source (briefly) and, like all v4l2 code, it uses raw read/write/ioctl on the device. Like the ffmpeg or v4l2rtspserver builds (in the test folder), I can't get them to work (SNX requires the middleware API they provide in snx_vc, snx_isp, etc.). However, I'm looking into modifying v4l2wrapper (used by v4l2copy, v4l2rtspserver). If it is possible to create an SNX version of the V4l2Device class, this stuff would likely work as-is and we'd have a pretty elegant method of sharing the device for different things like streaming, recording and even running iCamera.

roger- commented 7 years ago

Ah, you might be right. I thought the SDK docs said it used the standard V4L2 interface but maybe I was mistaken.

v4l2wrapper looks like it was factored out of an old version of v4l2rtspserver (which the snx server is based on), so hopefully it won't be too hard :)

thanme commented 7 years ago

Can't you just take a single image using ffmpeg on the RTSP stream?

mhdawson commented 7 years ago

I tried that out, but it means you have to have the RTSP stream running all the time, and it seemed a bit flaky. In my experience it did not work nearly as nicely as what I have now with snx_snapshot and the mqtt request.

RiRomain commented 7 years ago

So, finally I got a JPG stream working!

I modified the snx_snapshot example so that it captures an image every second, and also so that it always saves the latest image with the same name in the /tmp/www folder. The latest jpg can then be accessed at http://$CAM_IP/snapshot.jpg

So for a quick how to:

  1. Download snx_snapshot here: https://drive.google.com/file/d/0BwhTA0oE8QeXZTU5bGFrWkZXcXc/view?usp=sharing
  2. Copy the file snx_snapshot on your cam into /media/mmcblk0p2/data/usr/bin/
  3. Add execution right to snx_snapshot "chmod +x /media/mmcblk0p2/data/usr/bin/snx_snapshot"
  4. In /media/mmcblk0p2/data/etc/scripts/20-rtsp-server replace the line "snx_rtsp_server -W 1920 -H 1080 -Q 10 -b 4096 -a >$LOG 2>&1 &" with "snx_snapshot -m -q 40 -n 1 -W 1920 -H 1080 > $LOG 2>&1 &"
  5. and reboot your webcam (reboot now)
  6. You can now access the image at http://$CAM_IP/snapshot.jpg

You can modify the options in the script to fit your needs: -W Capture Width (default is 1280), -H Capture Height (default is 720), -q JPEG QP (default is 60)

darethehair commented 7 years ago

I am impressed by what has been accomplished here! :)

My question is this, however: can't this customization and creation of binaries be extended to also compile a version of 'mjpg_streamer' for this device? That would allow easy use of both snapshots and video streaming to web pages, which I currently do with a variety of cams on my server. I have found compiling 'mjpg_streamer' easy in the past.

For example, this thread contains instructions for getting/compiling 'mjpg-streamer' on Raspberry Pi (and also my C.H.I.P. computer):

https://bbs.nextthing.co/t/mjpeg-streamer-with-compatible-usb-webcam/6505

RiRomain commented 7 years ago

I think the problem would not be compiling it, but being able to run it without making the camera overheat. My guess is that the camera is not powerful enough to run mjpg-streamer directly.

darethehair commented 7 years ago

I had not considered the possibility that 'mjpg-streamer' could take more resources than the RTSP one. Hmmm...

samtap commented 7 years ago

MJPEG is an outdated format, however I have a new rtsp server that allows snapshots to jpg. It will be released soon(ish)!

RiRomain commented 7 years ago

I don't know enough to debate which is best, but one thing is sure: mjpeg streams are better supported on the client side. Home Assistant (written in Python) cannot handle rtsp well enough: https://community.home-assistant.io/t/rtsp-stream-support-for-camera/586/81

Integrating a jpg snapshot function into the rtsp server is great news :-)

darethehair commented 7 years ago

Having an RTSP server that can also provide JPG snapshots would be great! However, it would still be nice to have a way to provide a video stream to web pages :) BTW, I did some Googling the other day and got the impression that 'mjpg-streamer' works fine on devices (e.g. routers) that use the same chip as the Xiaomi -- so I think that it would be powerful enough if that were an option...

samtap commented 7 years ago

I agree we need a good way to stream for the web. As far as I know you can only play RTSP with a Flash player; instead of mjpeg we could use HLS or WebRTC.

The device can manage H264 frames efficiently using its dedicated hardware, so packaging them in various formats, i.e. RTSP, HLS chunks, or writing to files on the sd-card etc., is relatively easy/cheap. Encoding to a different format (converting H264 frames to JPG) in software is very expensive (ffmpeg can do it, but it takes ~15 seconds to make a single jpg frame). Mjpg-streamer could work to package the JPG frames and provide a stream, but it would require the hardware-assisted MJPG encoding (so no simultaneous H264 stream possible) to grab them.

Freshhat commented 7 years ago

Hey guys, really nice work with the snapshot function. But is there any way to also implement a rotation feature? I need this when the camera is mounted on the ceiling.

RiRomain commented 7 years ago

Hi @Freshhat, I guess you'd better do that in your client. Which client are you using to view your jpg? Of course it's also possible in the snapshot function, but it needs to be implemented, and I know next to nothing about how to do that... and I'm not sure someone else with the knowledge will invest the time to implement this function.

halfluck commented 7 years ago

Great work everyone, I've been hanging out for a jpeg snapshot for use with HA for quite some time.

Orbit4l commented 7 years ago

@RiRomain Great work on snx_snapshot. Is there any way to change the frequency of taking a snapshot from 1 sec to any value (having it as a new option)?

fubar2 commented 7 years ago

Thanks to the advice above from @RiRomain and @mhdawson, I finally have a time-lapse solution working, after experimenting with all sorts of ways to capture time lapse from the rtsp stream - which gave very disappointing image quality, with lots and lots of broken frames.

I found that the version of snx_snapshot from @RiRomain above did not work well for me when using wget to grab snapshot.jpg from the web server, because the files I captured were often invalid. I suspect the snx_snapshot process was rewriting them as they were being copied...

Long story short, the original SDK version of snx_snapshot, which waits for /tmp/snapshot_en to be touched before taking a snapshot and writes the time/date stamp into the jpg name, enabled me to run the following sh script. It just copies the latest snapshot to a nearby server cifs directory on a regular basis using smbclient (also in the SDK). I tried scp and scp-openssh but could not (easily) get them to authenticate... Note that some shell trickery is needed to run smbclient automagically; I resorted to this kludge because there is no smbmount and I failed to get busybox mount to work for remote cifs. As you can see, I use the -o parameter to snx_snapshot to write the jpg to /tmp/www in the modified rtsp server start script.

The updated version, which leaves snapshot.jpg available for the web site, looks like:

#!/media/mmcblk0p2/data/bin/ash
# ross lazarus me fecit Sept 2017
# fanghack script to send an image to 
# remote NAS for later assembly into a
# time lapse movie
# renames latest so can be viewed at fanghacks web server as (in my case) http://192.168.1.107/snapshot.jpg
SNAPINT=87 # plus POSTINT = 90 secs
POSTINT=3
KEEPME="/tmp/www/snapshot.jpg"
cd /tmp
while true
do 
 rm /tmp/www/*.jpg
 rm /tmp/www/snaplist.txt
 touch /tmp/snapshot_en 
 # trigger snx_snapshot process to make a new snapshot
 sleep $POSTINT
# make sure the snapshot is done
 DIRE=`date +"tent_%Y_%m_%d"`
 smbclient  \\\\192.168.1.9\\private [password here] -U guest <<ENDIT
prompt
lcd www
cd tent
mkdir $DIRE
cd $DIRE
mput *.jpg
quit
ENDIT
 # rename latest snapshot
 fn=$(ls -c /tmp/www/*.jpg | head -n1)
 mv -f -- "$fn" $KEEPME
 chmod ugo+rx $KEEPME
 # fix permissions so can be viewed
 sleep $SNAPINT
done

These jpgs can then be joined in the usual way on the server hosting the daily directories of images, using mencoder, e.g.:

#!/bin/bash
# join all frames
mencoder mf://*/*.jpg -mf fps=20:type=jpg:h=720:w=1280 -ovc lavc -lavcopts   vcodec=mpeg4:mbd=2:trell -oac copy -o test.avi 
yapa69 commented 6 years ago

"Copy the file snx_snapshot on your cam into /media/mmcblk0p2/data/usr/bin/"

And don't forget (like me) to chmod +x.

mikkel75 commented 6 years ago

@samtap How's the release coming of the new rtsp server? ;)

Mazo commented 6 years ago

@samtap Also interested in the new RTSP server - it looks like there's a new build of snx_rtsp_server at https://github.com/haoweilo/RTSP_stream_server, not sure if that would fix the issue I'm having but trying to use the included snx_rtsp_server with Milestone XProtect just results in a constant RTSP SETUP, RTSP PLAY, RTSP TEARDOWN loop every few seconds.

samtap commented 6 years ago

It's going very slowly, but holidays are coming up, so hopefully I'll be able to do a new release then. @Mazo That sounds like a client issue; the new server will still be based on live555, so if your client doesn't cooperate with that there's not much I can do.

utya1988 commented 6 years ago

Can I get a snapshot (jpeg picture) using ffmpeg on the camera?

KoljaWindeler commented 6 years ago

Hi, thanks a lot @RiRomain, your snapshot app is exactly what I needed to integrate the cam into Home Assistant. I've used it for a few hours and it seemed to work fine, but the update rate slowed down overnight, so I tried to figure out what's causing this behavior.

It used to save one frame per second but slowed down to >10 sec per frame (https://owncloud.illuminum.de/index.php/s/s62aOTgGceFhf8m). I found that /tmp/snaplist.txt was huge, with one line in the file for every frame that was saved (all showing the same filename). This file was also dumped to the console (I saw thousands of lines when I connected via uart), so I wrote a little service to delete the file every five seconds, and then it was working stably and fast for some days.

The second issue that I've seen was this: https://owncloud.illuminum.de/index.php/s/Oeg5NsgqnsvkP6O https://owncloud.illuminum.de/index.php/s/babgecMguTwskK7 (sorry for scaling them down), but I guess you can see that the frames contain some artifacts. I guess this is due to the fact that I requested the image at just the moment when it was being overwritten. I admit this happens very rarely, but it was kind of annoying when I "streamed" the snaps.
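Torn reads like this are usually avoided with the write-to-temp-then-rename trick: on the same filesystem, mv is an atomic rename(), so a reader always sees either the previous complete file or the new one, never a half-written frame. A minimal sketch, with illustrative paths and fake data in place of a real jpeg:

```shell
# Sketch of the write-then-rename fix for half-written snapshots.
# mv within one filesystem is an atomic rename(), so readers never
# observe a partially written file. Paths and data are illustrative.
OUT=$(mktemp -d)/snapshot.jpg
printf 'fake jpeg data' > "${OUT}.part"   # write the whole frame first
mv -f "${OUT}.part" "$OUT"                # atomically publish it
cat "$OUT"
```

Anyone wget-ing snapshot.jpg then only ever races against the rename, not against the frame being written.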

The third thing that I'd like to have is OSD. I know that I can activate the date OSD (see wiki), but I was looking for some extras like cam name and localtime.

So, long story short: I've modified your work a bit:

  1. removed the system("cat /tmp/snaplist.txt") call to stop spamming the console https://github.com/KoljaWindeler/XN986/commit/8551616827729e56a5bf3f117f48d8ae2ceb96ca#diff-c55cba8a8a5802c2e4a91f99ddf93155R468

  2. write to a temp file and rename it once the file is written completely https://github.com/KoljaWindeler/XN986/commit/8551616827729e56a5bf3f117f48d8ae2ceb96ca#diff-c55cba8a8a5802c2e4a91f99ddf93155R458

  3. add overlay with custom text https://github.com/KoljaWindeler/XN986/commit/7c3cea1598e5fab76363e447af119c6779b958db stolen from some other code

result: https://owncloud.illuminum.de/index.php/s/uAYmKiH9iNGgVtI

and the binary is here https://github.com/KoljaWindeler/XN986/raw/master/snx_sdk/app/example/src/ipc_func/snapshot/snx_snapshot

Hope this helps others.

Rein gehauen, Kolja

edit: new parameter

-a          add cam name to OSD
-e          overlay on/off (1/0) (default is 1)
-x          overlay x-position (default is -1 = center)
-z          overlay y-position (default is 0)

e.g.: snx_snapshot -m -q 40 -n 1 -W 1920 -H 1080 -a Entrance -z 5 >$LOG 2>&1 &

KoljaWindeler commented 6 years ago

Short question (likely not in the right place)... is there anyone who would be interested in motion detection in parallel to these jpg frames? I've added motion detection to the snap program, plus 2 external commands that one can set when starting it.

E.g. ./snx_snapshot -m -q 40 -n 1 -W 1920 -H 1080 -a Entrance -z 5 -b "echo go >> /tmp/log; date >> /tmp/log" -c "echo stop >> /tmp/log; date >> /tmp/log"

This seems to work surprisingly well. Now the next task is to find an mqtt client that will run on the camera and tell Home Assistant that we're seeing some motion. Home Assistant can then grab the frame and send it via e.g. Pushbullet. As of now I'm doing the same thing with a camera+esp8266, but that seems very stupid :).

So: 1) anyone interested? 2) is there a camera compatible mqtt client? (@mhdawson)

Kolja

dvv commented 6 years ago

1) https://github.com/samtap/fang-hacks/wiki/WIP:-Motion-detection 2) mosquitto_pub from https://github.com/shadow-1/yi-hack-v3/issues/130 should fit

FYI, I also modified the snx_snapshot source so that it calls ./send.sh FILENAME on every picture taken. This allows both to i) publish snapshots and ii) control the snapshot rate. E.g.:

mosquitto_pub ... -t ... -f "$1"
rm -f "$1"
exec usleep 250000 # allow circa 4 FPS
KoljaWindeler commented 6 years ago

Hi, that's a fancy approach to controlling the fps.

I've tried your command from https://github.com/samtap/fang-hacks/wiki/WIP:-Motion-detection before, but it seems like it requires access to the video device, which isn't available while I run snx_snapshot. Is that correct? So I can only have one of the two, snapshots or motion detection, at a time? At least that was the reason why I integrated the motion detection into my version of snx_snapshot.

Apart from that: thanks for mosquitto :+1: Kolja

dvv commented 6 years ago

Right. I stopped using snx_isp_md because of this issue and because it emits false positives in twilight, when the picture becomes noisy. The solution would be to increase the threshold, but I'd rather use my own motion detector. FYI, my version of snx_snapshot is in https://github.com/samtap/fang-hacks/issues/305

KoljaWindeler commented 6 years ago

I see, so how are you detecting motion now? Using the built-in snx motion detection in your snx_snapshot, like I do at the moment, or have you integrated your motion detection outside, in your plain C code that runs on the camera, or even further out, on a different PC?

As of now you're calling mosquitto_pub on every frame (at 4 fps), so you actually send every frame over wifi, right? I'd guess that this consumes quite a bit of wifi bandwidth, doesn't it?

My plan is to send a mqtt message to Home assistant whenever there is motion and let Home assistant grab the frames on incoming mqtt message.

Kolja

PS: love the two-way audio that you've integrated. Will test tonight :+1:

dvv commented 6 years ago

I retain 4 fps via mosquitto_pub and relay motion detection to a custom Python OpenCV app (the sources are hairy, so the code is private). I see no significant wifi load at all.

I run node-red for automation (my devices are all custom ones) and am very happy with it.

Have fun! )

dvv commented 6 years ago

https://github.com/samtap/fang-hacks/issues/305#issue-291077419 -- a shell MJPEG streamer inside

mhdawson commented 6 years ago

@KoljaWindeler I do have an mqtt client running. I'm just back from holidays and catching up, but I'll try to dig up the details in the next few days.

KoljaWindeler commented 6 years ago

Thanks, is it something along the lines of mosquitto_pub? I'm using the binary that @dvv posted and it works perfectly.

Currently I have added many more options to my branch, e.g.: on the first motion frame, execute command A; if there are more than N frames with motion, execute command B; as soon as M frames without motion are captured, run command C.

This way I send "warn" instantly, "alarm" after 2 frames, and "clear" after 5 no-motion frames. I've placed the camera next to one of my esp8266s that is connected to a PIR to report motion via mqtt, and I log both (cam+pir). I've seen lots of false-positive triggers from the cam and therefore increased the threshold from 320 to 400 pixels. This seems to work quite well, but I'll leave it running for a week or so before I decide on the final values.
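The warn/alarm/clear logic described here is a small debounce state machine, and it can be sketched in plain shell with a simulated sequence of per-frame motion flags. The thresholds K and J below are illustrative stand-ins for the -k/-j style options; nothing here touches the camera:

```shell
# Sketch of the warn/alarm/clear debouncing: "warn" on the first motion
# frame, "alarm" after K consecutive motion frames, "off" after J
# consecutive no-motion frames. Frame flags (1 = motion) are simulated.
K=2   # motion frames before "alarm" (illustrative, like -k)
J=5   # no-motion frames before "off" (illustrative, like -j)
motion=0; quiet=0; events=""
for frame in 0 1 1 1 0 0 0 0 0 1; do
  if [ "$frame" -eq 1 ]; then
    quiet=0
    motion=$((motion + 1))
    [ "$motion" -eq 1 ] && events="$events warn"
    [ "$motion" -eq "$K" ] && events="$events alarm"
  else
    motion=0
    quiet=$((quiet + 1))
    [ "$quiet" -eq "$J" ] && events="$events off"
  fi
done
echo $events   # unquoted on purpose: collapses the leading space
```

For the simulated sequence above this emits: warn alarm off warn. In the real setup each event string would instead trigger one of the external commands (e.g. a mosquitto_pub call).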

Kolja

mhdawson commented 6 years ago

@KoljaWindeler looks like I compiled libpaho-mqtt3cs.so and used that.

Source is available here:

https://github.com/eclipse/paho.mqtt.c

And example of how I used it is: https://github.com/mhdawson/PIWebcamServer/tree/SN986

KoljaWindeler commented 6 years ago

Hi, just to round this thing up: setting 400 as the motion detection threshold works fairly well for me.

My setup does the following things:

  1. Send an mqtt message as soon as there is motion: "warn"
  2. Send an mqtt message when there is motion in 2 frames in a row: "alarm"
  3. Send an mqtt message when there is no motion for 5 frames: "off"
  4. Store the last 1000 frames with motion on the local SD card
  5. Add an OSD with the cam name and localtime, plus an "M" if there is motion

Currently I'm calling snx_snapshot like this:

snx_snapshot -m -q 40 -n 1 -T 400 -W 1920 -H 1080 -N CAM3 -Y 5 -l "mosquitto_pub -h 192.168.2.84 -u MQTT_USER -P MQTT_PASSWORD -t cam3/r/motion -m ON" -b "mosquitto_pub -h 192.168.2.84 -u MQTT_USER -P MQTT_PASSWORD -t cam3/r/motion -m WARN" -c "mosquitto_pub -h 192.168.2.84 -u MQTT_USER -P MQTT_PASSWORD -t cam3/r/motion -m OFF" -M "/media/mmcblk0p2/data/opt/m_cp.sh" >$LOG 2>&1 &

m_cp.sh:

#!/bin/sh
DIR=/media/mmcblk0p2/data/opt/snaps;
mkdir -p $DIR >/dev/null 2>&1;
cd $DIR;
ls -A1t  | sed -e '1,1000d' | xargs rm >/dev/null 2>&1;
cp /tmp/www/snapshot.jpg $(date +"%Y%m%d_%H%M%S").jpg
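The ls/sed/xargs line in m_cp.sh is the retention step: list files newest first, drop the first 1000 lines (the files to keep) from the deletion list, and delete the rest. A scaled-down demo with N=3 and fabricated filenames/timestamps:

```shell
# Scaled-down demo of the retention pipeline from m_cp.sh: keep only
# the N newest files by deleting everything past line N of a
# newest-first listing. Filenames and timestamps are fabricated.
N=3
DIR=$(mktemp -d)
cd "$DIR"
for i in 1 2 3 4 5; do
  touch -t "20200101000$i" "f$i"   # f5 gets the newest mtime
done
ls -A1t | sed -e "1,${N}d" | xargs rm   # removes the two oldest, f1 and f2
echo $(ls | sort)                       # f3 f4 f5 remain
```

This breaks on filenames containing whitespace, which is fine here since the snapshots are named from a date format with no spaces.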

Here are the options (a few are new) for snx_snapshot:


Usage: snx_snapshot [options]/n
Version: V0.1.2
Options:
    -h      Print this message
    -m      m2m path enable (default is Capture Path)
    -o      outputPath (default is /tmp)
    -i      isp fps (Only in M2M path, default is 30)
    -f      codec fps (default is 30 fps, NOT more than M2M path)
    -W      Capture Width (Default is 1280, depends on M2M path)
    -H      Capture Height (Default is 720, depends on M2M path)
    -q      JPEG QP (Default is 60)
    -n      Num of Frames to capture (Default is 3)
    -s      scaling mode (default is 1,  1: 1, 2: 1/2, 4: 1/4 )
    -r      YUV data output enable
    -v      YUV capture rate divider (default is 5)
    -T      Motion detection threshold (default is 320)
    -j      Num of no motion frames before calling motion end command (default is 5)
    -k      Num of motion frames before calling motion start command (default is 2)
    -t      Command to execute on each frame (default is none)
    -b      Command to execute on motion instantly (default is none)
    -M      Command to execute on each motion frames (default is none)
    -l      Command to execute after '-k' motion frames (default is none)
    -c      Command to execute afer '-j' no motion frames (default is none)
    -N      Cam name for OSD
    -e      Overlay on/off (1/0) (default is 1)
    -X      Overlay x-position (default is -1 = center)
    -Y      Overlay y-position (default is 0)
    -C      Overlay color (default is 0x00FF00)
    -u      Delay between snapshots [ms] (default is 1000)
    M2M Example:   snx_snapshot -m -i 30 -f 30 -q 120 /dev/video1
    capture Example:   snx_snapshot -n 1 -q 120 /dev/video1

This works as well as a PIR, apart from the effect that it reports motion whenever I turn the lights off in the room (message "no motion" -> lights turn off -> message "warn" -> lights turn on ...), but that's something that I'll solve in Home Assistant.

https://github.com/KoljaWindeler/XN986/blob/master/snx_sdk/app/example/src/ipc_func/snapshot/snx_snapshot

Kolja

eberkund commented 6 years ago

Is it possible to interact with the camera over USB, or just WiFi?

fubar2 commented 6 years ago

Is it possible to interact with the camera over USB, or just WiFi?

WiFi only - the XiaoFang USB port is for attaching storage, AFAIK.

russellhq commented 6 years ago

Is there a way to enable "Y only output", and also what does the YUV rate divider do? Last one: I can't find the yuv files when I enable YUV output with -r. Where are these saved?