ikalchev / HAP-python

A python implementation of the HomeKit Accessory Protocol (HAP)

Camera accessory #53

Closed gerswin closed 5 years ago

gerswin commented 6 years ago

Is this possible?

warcanoid commented 5 years ago

Is someone playing with this on Hassio? I have installed hap_python 2.4.1 and configured camera.py and camera_main.py with my IP camera address and port, and still no camera!? What am I doing wrong? How do I start python3 camera_main.py on Hassio? I added ffmpeg: in configuration.yaml. Will a python script help?

ikalchev commented 5 years ago

How to start: python3 camera_main.py on hassio?

I haven't tried Hassio yet and unfortunately I don't know how the integration is going. But if you get any Linux distro and just do python3 camera_main.py, it will at least fail in some meaningful way. Setting the exact ffmpeg command could be tricky, as it depends on what your hardware and software are.

warcanoid commented 5 years ago

How to start: python3 camera_main.py on hassio?

I haven't tried Hassio yet and unfortunately I don't know how the integration is going. But if you get any Linux distro and just do python3 camera_main.py, it will at least fail in some meaningful way. Setting the exact ffmpeg command could be tricky, as it depends on what your hardware and software are.

/config/deps/lib/python3.6/site-packages/pyhap/camera_main.py
/config/deps/lib/python3.6/site-packages/pyhap/camera_main.py: line 7: $'An example of how to setup and start an Accessory.\n\nThis is:\n1. Create the Accessory object you want.\n2. Add it to an AccessoryDriver, which will advertise it on the local network,\n setup a server to answer client queries, etc.\n': command not found
/config/deps/lib/python3.6/site-packages/pyhap/camera_main.py: line 8: import: command not found
/config/deps/lib/python3.6/site-packages/pyhap/camera_main.py: line 9: import: command not found
/config/deps/lib/python3.6/site-packages/pyhap/camera_main.py: line 11: from: command not found
/config/deps/lib/python3.6/site-packages/pyhap/camera_main.py: line 12: from: command not found
/config/deps/lib/python3.6/site-packages/pyhap/camera_main.py: line 14: syntax error near unexpected token `level=logging.INFO,'
/config/deps/lib/python3.6/site-packages/pyhap/camera_main.py: line 14: `logging.basicConfig(level=logging.INFO, format="[%(module)s] %(message)s")'

RefineryX commented 5 years ago

I am not sure I get it. Is this actually working and implemented in the latest release?

Dav0815 commented 5 years ago

This works great on my Raspberry Pi with a PlayStation Eye camera attached via USB. @SimplyGardner, it currently runs as its own accessory (you need to add it in HomeKit) and is not yet integrated with HA.

For anyone in a similar environment, here are my steps to get this up and running:

  1. Create a folder to install it (in my case /var/lib/hap-python)
  2. Create a virtual Python environment and a dedicated user for this
  3. Install HAP-python with pip3 install HAP-python
  4. Create a camera_main.py file.
  5. Run python3 camera_main.py and see if any errors come up in the log. If not, go ahead and add the accessory in HomeKit (the setup code is shown in one of the first lines during startup).
  6. Optionally, start camera_main.py through a systemd unit (a sketch of a unit file follows the script below).

Here is my camera_main.py. I hope it helps you to get started.

"""Implementation of a HAP Camera
Modifications for current system:
FILE_SNAPSHOT   = '/tmp/snapshot.jpg'
FILE_PERSISTENT = '/var/lib/hap-python/accessory.state'
DEV_VIDEO       = '/dev/video0'
SCALE       = '640x480'
IP_ADDRESS      = '192.168.1.2'

Note that the snapshot function adds a timestamp to the last image.
The font location has to be updated according to your system.
"""
import logging
import signal
import subprocess

from pyhap.accessory_driver import AccessoryDriver
from pyhap import camera

logging.basicConfig(level=logging.INFO, format="[%(module)s] %(message)s")

FILE_SNAPSHOT   = '/tmp/snapshot.jpg'
FILE_PERSISTENT = '/var/lib/hap-python/accessory.state'
DEV_VIDEO       = '/dev/video0'
SCALE           = '640x480'
IP_ADDRESS      = '192.168.1.2'

# Specify the audio and video configuration that your device can support
# The HAP client will choose from these when negotiating a session.
options = {
    "video": {
        "codec": {
            "profiles": [
                camera.VIDEO_CODEC_PARAM_PROFILE_ID_TYPES["BASELINE"],
                camera.VIDEO_CODEC_PARAM_PROFILE_ID_TYPES["MAIN"],
                camera.VIDEO_CODEC_PARAM_PROFILE_ID_TYPES["HIGH"]
            ],
            "levels": [
                camera.VIDEO_CODEC_PARAM_LEVEL_TYPES['TYPE3_1'],
                camera.VIDEO_CODEC_PARAM_LEVEL_TYPES['TYPE3_2'],
                camera.VIDEO_CODEC_PARAM_LEVEL_TYPES['TYPE4_0'],
            ],
        },
        "resolutions": [
            # Width, Height, framerate
            [320, 240, 15],  # Required for Apple Watch
            [640, 480, 30],
        ],
    },
    "audio": {
        "codecs": [
            {
                'type': 'OPUS',
                'samplerate': 24,
            },
            {
                'type': 'AAC-eld',
                'samplerate': 16
            }
        ],
    },
    "srtp": True,
    "address": IP_ADDRESS,
    "start_stream_cmd":  (
      'ffmpeg -re -f video4linux2  -i ' + DEV_VIDEO + ' -threads 0 '
      '-vcodec libx264 -an -pix_fmt yuv420p -r {fps} '
      '-f rawvideo -tune zerolatency '
      '-vf scale=' + SCALE + ' -b:v {v_max_bitrate}k -bufsize {v_max_bitrate}k '
      '-payload_type 99 -ssrc {v_ssrc} -f rtp '
      '-srtp_out_suite AES_CM_128_HMAC_SHA1_80 -srtp_out_params {v_srtp_key} '
      'srtp://{address}:{v_port}?rtcpport={v_port}&'
      'localrtcpport={v_port}&pkt_size=1378'),
}

class HAPCamera(camera.Camera):
    def get_snapshot(self, image_size):  # pylint: disable=unused-argument, no-self-use
        """Return a jpeg of a snapshot from the camera.
        Overwritten to store the snapshot in a central place
        """
        file_snapshot = '/tmp/snapshot.jpg'
        cmd = ['ffmpeg', '-f', 'video4linux2', '-i', DEV_VIDEO,
               '-update', '1', '-y', '-vframes', '2',
               '-vf', 'drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf: text=\'%{localtime}\': x=(w-tw)/2: y=h-(2*lh): fontcolor=white: fontsize=16',
               '-nostats', '-loglevel', '0', FILE_SNAPSHOT]
        returncode = subprocess.run(cmd)
        with open(FILE_SNAPSHOT, 'rb') as fp:
            return fp.read()

# Start the accessory on port 51826
driver = AccessoryDriver(port=51826, persist_file=FILE_PERSISTENT)
acc = HAPCamera(options, driver, "Camera")
driver.add_accessory(accessory=acc)

# We want KeyboardInterrupts and SIGTERM (terminate) to be handled by the driver itself,
# so that it can gracefully stop the accessory, server and advertising.
signal.signal(signal.SIGTERM, driver.signal_handler)
# Start it!
driver.start()
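
For step 6, a minimal systemd unit could look roughly like this (the unit name, user and paths are only examples here and depend on where you installed everything):

[Unit]
Description=HAP-python camera accessory
After=network-online.target

[Service]
Type=simple
User=hap-python
WorkingDirectory=/var/lib/hap-python
ExecStart=/var/lib/hap-python/venv/bin/python3 /var/lib/hap-python/camera_main.py
Restart=on-failure

[Install]
WantedBy=multi-user.target

Saved as /etc/systemd/system/hap-camera.service, it can be enabled with systemctl enable --now hap-camera.service.
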
RefineryX commented 5 years ago

Thank you sir.

I use Home Assistant - am I right in thinking this is not integrated with it yet? If not, is there anything I can do to integrate it?

Dav0815 commented 5 years ago

You are right, it is not yet integrated with Home Assistant. The example above runs it as a separate process on my Raspberry.

RefineryX commented 5 years ago

Dang. I'd rather wait until it's integrated rather than run a separate process. Any idea on time frames for this?

Dav0815 commented 5 years ago

Nope, I am not the developer for that part.

RefineryX commented 5 years ago

Reckon I could just copy and paste the required code and add it manually into the Home Assistant file?

Dav0815 commented 5 years ago

Home Assistant does not have the drivers yet. So, unless you know how to use the code above you will have to wait for the integration.

RefineryX commented 5 years ago

Will keep an eye on this thread and wait patiently 👍

warcanoid commented 5 years ago

This works great on my Raspberry Pi with a PlayStation Eye camera attached via USB. ... Here is my camera_main.py. I hope it helps you to get started.

Thank you!!! Will try it on Hassio and see if it works.

warcanoid commented 5 years ago

@Dav0815 I have problems. I have a WiFi camera, not one attached via USB like yours, and I get many errors:

core-ssh:~# /config/deps/lib/python3.6/site-packages/pyhap/camera_main.py
/config/deps/lib/python3.6/site-packages/pyhap/camera_main.py: line 11: Implementation of a HAP Camera
Modifications for current system:
FILE_SNAPSHOT = '/tmp/snapshot.jpg'
FILE_PERSISTENT = '/config/deps/lib/python3.6/site-packages/pyhap/accessory.state'
DEV_VIDEO = '/dev/video0'
SCALE = '640x480'
IP_ADDRESS = '192.168.1.3'
Note that the snapshot function adds a timestamp to the last image.
The font location has to be updated according to your system. : No such file or directory
/config/deps/lib/python3.6/site-packages/pyhap/camera_main.py: line 12: import: command not found
/config/deps/lib/python3.6/site-packages/pyhap/camera_main.py: line 13: import: command not found
/config/deps/lib/python3.6/site-packages/pyhap/camera_main.py: line 14: import: command not found
/config/deps/lib/python3.6/site-packages/pyhap/camera_main.py: line 16: from: command not found
/config/deps/lib/python3.6/site-packages/pyhap/camera_main.py: line 17: from: command not found
/config/deps/lib/python3.6/site-packages/pyhap/camera_main.py: line 19: syntax error near unexpected token `level=logging.INFO,'
/config/deps/lib/python3.6/site-packages/pyhap/camera_main.py: line 19: `logging.basicConfig(level=logging.INFO, format="[%(module)s] %(message)s")'

@ikalchev maybe you know how to fix these errors?

I can run this, but there is no log output?:

core-ssh:~# #!/bin/python3 /config/deps/lib/python3.6/site-packages/pyhap/camera_main.py
core-ssh:~#
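
For reference, the "import: command not found" messages above mean the file is being interpreted by the shell rather than by Python (and a #! line typed at the prompt is just treated as a comment, which is why nothing happens). Two usual ways to launch it, sketched with the path from the messages above:

python3 /config/deps/lib/python3.6/site-packages/pyhap/camera_main.py

or put #!/usr/bin/env python3 as the very first line of camera_main.py and make it executable:

chmod +x /config/deps/lib/python3.6/site-packages/pyhap/camera_main.py
/config/deps/lib/python3.6/site-packages/pyhap/camera_main.py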

Dav0815 commented 5 years ago

Hi @warcanoid, these look like general Python errors. Make sure that a test program like hello world works in that environment and then go from there.

I also recommend that you first test the ffmpeg command on its own and then adapt the parameters in the camera_main.py accordingly.
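
For example, a quick standalone check of the snapshot part could be something like the following (device path and output file are just placeholders; for an RTSP camera, drop the -f video4linux2 part and use the rtsp:// URL as the input):

ffmpeg -f video4linux2 -i /dev/video0 -vframes 1 -y /tmp/test.jpg

If that command cannot open the device or produce a jpeg, the HAP script will not work either.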

My solution works for me, but it took me some time to debug and to learn more about Python. It's definitely not something you just quickly install and it works.

warcanoid commented 5 years ago

@Dav0815 Thanks for helping. I think I have a problem with the input or the video device?!

"""Implementation of a HAP Camera Modifications for current system: FILE_SNAPSHOT = '/tmp/snapshot.jpg' FILE_PERSISTENT = '/config/python_scripts/accessory.state' DEV_VIDEO = 'rtsp://192.168.1.3:554/h264hd.sdp' ### **tried also with username and password and http stream_** SCALE = '640x480' IP_ADDRESS = '192.168.1.3'

Note that the snapshot function adds a timestamp to the last image. The font location has to be updated according to your system. """ import logging import signal import subprocess

from pyhap.accessory_driver import AccessoryDriver from pyhap import camera

logging.basicConfig(level=logging.INFO, format="[%(module)s] %(message)s")

FILE_SNAPSHOT = '/tmp/snapshot.jpg' FILE_PERSISTENT = '/config/python_scripts/accessory.state' DEV_VIDEO = 'rtsp://192.168.1.3:554/h264_hd.sdp' SCALE = '640x480' IP_ADDRESS = '192.168.1.3'

Specify the audio and video configuration that your device can support

The HAP client will choose from these when negotiating a session.

options = { "video": { "codec": { "profiles": [ camera.VIDEO_CODEC_PARAM_PROFILE_ID_TYPES["BASELINE"], camera.VIDEO_CODEC_PARAM_PROFILE_ID_TYPES["MAIN"], camera.VIDEO_CODEC_PARAM_PROFILE_ID_TYPES["HIGH"] ], "levels": [ camera.VIDEO_CODEC_PARAM_LEVEL_TYPES['TYPE3_1'], camera.VIDEO_CODEC_PARAM_LEVEL_TYPES['TYPE3_2'], camera.VIDEO_CODEC_PARAM_LEVEL_TYPES['TYPE4_0'], ], }, "resolutions": [

Width, Height, framerate

        [320, 240, 15],  # Required for Apple Watch
        [640, 480, 30],
    ],
},
"audio": {
    "codecs": [
        {
            'type': 'OPUS',
            'samplerate': 24,
        },
        {
            'type': 'AAC-eld',
            'samplerate': 16
        }
    ],
},
"srtp": True,
"address": IP_ADDRESS,
"start_stream_cmd":  (
  'ffmpeg -re -f video4linux2  -i ' + DEV_VIDEO + ' -threads 0 '
  '-vcodec libx264 -an -pix_fmt yuv420p -r {fps} '
  '-f rawvideo -tune zerolatency '
  '-vf scale=' + SCALE + ' -b:v {v_max_bitrate}k -bufsize {v_max_bitrate}k '
  '-payload_type 99 -ssrc {v_ssrc} -f rtp '
  '-srtp_out_suite AES_CM_128_HMAC_SHA1_80 -srtp_out_params {v_srtp_key} '
  'srtp://{address}:{v_port}?rtcpport={v_port}&'
  'localrtcpport={v_port}&pkt_size=1378'),

}

class HAPCamera(camera.Camera): def get_snapshot(self, image_size): # pylint: disable=unused-argument, no-self-use """Return a jpeg of a snapshot from the camera. Overwritten to store the snapshot in a central place """ file_snapshot = '/tmp/snapshot.jpg' cmd = ['ffmpeg', '-f', 'video4linux2', '-i', DEV_VIDEO, '-update', '1', '-y', '-vframes', '2', '-vf', 'drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf: text=\'%{localtime}\': x=(w-tw)/2: y=h-(2*lh): fontcolor=white: fontsize=16', '-nostats', '-loglevel', '0', FILE_SNAPSHOT] returncode = subprocess.run(cmd) with open(FILE_SNAPSHOT, 'rb') as fp: return fp.read()

Start the accessory on port 51826

driver = AccessoryDriver(port=51826, persist_file=FILE_PERSISTENT) acc = HAPCamera(options, driver, "Camera") driver.add_accessory(accessory=acc)

We want KeyboardInterrupts and SIGTERM (terminate) to be handled by the driver itself,

so that it can gracefully stop the accessory, server and advertising.

signal.signal(signal.SIGTERM, driver.signal_handler)

Start it!

driver.start()

I get the same error as before, or this error in the HA log:

Error executing script: import not found
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/homeassistant/components/python_script.py", line 166, in execute
    exec(compiled.code, restricted_globals, local)
  File "camera_main.py", line 12, in <module>
ImportError: import not found

I have also added ffmpeg: in configuration.yaml, then added camera: - platform: ffmpeg with input: rtsp://username:password@192.168.1.3:554/h264_hd.sdp. I get snapshots in HA, but the stream does not work; I get a ? sign.
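
Laid out as YAML, that configuration.yaml part would be roughly as follows (values as described above; the exact keys depend on your Home Assistant version):

ffmpeg:

camera:
  - platform: ffmpeg
    input: rtsp://username:password@192.168.1.3:554/h264_hd.sdp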

I searched the error and found that I should add #!/usr/bin/env python3 to the script. That did not work (no such file or directory), but with #!/usr/bin/env it starts over SSH, although I have to quit it with Ctrl+C.

With /usr/bin/env I got this: 'utf-8' codec can't decode byte 0xb7 in position 18: invalid start byte. Maybe I must change the encoding of the snapshot text?

Can I change this line: logging.basicConfig(level=logging.INFO, format="[%(module)s] %(message)s") ?

aptonline commented 5 years ago

@cdce8p did this ever get finished for integration into HA?

RefineryX commented 5 years ago

I really, really hope so.

cdce8p commented 5 years ago

Unfortunately, not yet. I'm still quite busy 😕

warcanoid commented 5 years ago

@cdce8p would donations make it faster?

cdce8p commented 5 years ago

@warcanoid Not really. I'm a university student and will have to study for a few more exams this term. For me this has priority above anything else. I'll have some spare time towards the end of March, but I can't promise anything.

RefineryX commented 5 years ago

University is far more important. Is there anyone else who has the ability to do it?

warcanoid commented 5 years ago

What if we put together a tutorial for this script from @ikalchev, so we can all use it like this, without the HA integration?

RefineryX commented 5 years ago

What do you mean, @warcanoid?

Dav0815 commented 5 years ago

@warcanoid you mean like the script I put in the post above?

The camera is now fully supported as a HomeKit device. It is not yet integrated with Home Assistant, but that does not stop you from adding a camera to HomeKit directly.

BUT you have to know what you are doing, as this is a bit more than just adding a config parameter to Home Assistant.

shuaiger commented 5 years ago

is there any straightforward tutorial for the direct implementation? thx

Dav0815 commented 5 years ago

is there any straightforward tutorial for the direct implementation? thx Jones

https://github.com/ikalchev/HAP-python/issues/53#issuecomment-451646605

shuaiger commented 5 years ago

Thanks, but I don't know if it supports the x86-64 platform.

Dav0815 commented 5 years ago

Don't know why it shouldn't, as long as you install the prerequisites.

shuaiger commented 5 years ago

will try it out, thanks again

warcanoid commented 5 years ago

Will we get the camera accessory?

tokamac commented 5 years ago

Why has this been "closed" if it has not been merged and implemented yet?

ikalchev commented 5 years ago

@tokamac HAP-python is a completely separate project from Home Assistant.

tokamac commented 5 years ago

OK understood. But doesn't Home Assistant integrate HAP-Python (pyhap)? Does it mean that HA needs to upgrade the integrated version of HAP-python?

ikalchev commented 5 years ago

Yes, someone on the HA side needs to make the necessary changes. I guess the problem is that the stream settings are hard to get right for the platform/camera at hand, so it hasn't been done (and probably won't be done for a general case; I imagine it will be per use case). I myself had a lot of trouble implementing this without HA (and without HAP-python, to be honest, with plain ffmpeg). If you want to help in that effort, you can talk with @cdce8p, I think.

cdce8p commented 5 years ago

@tokamac I would like to implement it, however we all spend our little spare time on those projects (HA and/or HAP-python respectively). Take this together with such a complex feature and it's bound to take time. If you (or anybody else for that matter) would like to give it a try, I would gladly offer my help. Until then this feature might be implemented if and when I find the time to do it.

tokamac commented 5 years ago

@cdce8p Fine :) I use Home Assistant on a Raspberry Pi 3 Model B (not hass.io but Hassbian), latest version based on Stretch (currently 0.92). I upgraded to Python 3.7; I still have Python 2.7 as well. I just received a 1080p USB camera from ELP (ref. ELP-USBFHD06H-L37), which has a 1/2.9" Sony IMX322 sensor. It is advertised as outputting either MJPEG or H.264. I compiled the latest ffmpeg with hardware (GPU) acceleration flags including OMX. I plan to go either the motion/motioneye route with MQTT, or the pyhap route with a custom camera_main.py, as some users here have reported success. But first, before anything else, let's focus on the camera on the hardware side, and on ffmpeg alone on the software side.

First, a few things to note. I tested some command lines with the camera plugged in:

$ lsusb
Bus 001 Device 004: ID 05a3:9422 ARC International

The camera created a few files, including:

/dev/v4l/by-id/usb-Sonix_Technology_Co.__Ltd._H264_USB_Camera_SN0001-video-index0
/dev/v4l/by-id/usb-Sonix_Technology_Co.__Ltd._H264_USB_Camera_SN0001-video-index1
/dev/v4l/by-id/usb-Sonix_Technology_Co.__Ltd._H264_USB_Camera_SN0001-video-index2
/dev/v4l/by-id/usb-Sonix_Technology_Co.__Ltd._H264_USB_Camera_SN0001-video-index3

I don't know if these files are relevant, but just in case. I continue:

$ ffmpeg -devices
ffmpeg version N-93671-ge627113329 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 6.3.0 (Raspbian 6.3.0-18+rpi1+deb9u1) 20170516
  configuration: --arch=armel --target-os=linux --enable-gpl --enable-omx --enable-omx-rpi --enable-mmal --enable-hwaccels --enable-nonfree
  libavutil      56. 26.100 / 56. 26.100
  libavcodec     58. 52.100 / 58. 52.100
  libavformat    58. 27.103 / 58. 27.103
  libavdevice    58.  7.100 / 58.  7.100
  libavfilter     7. 49.100 /  7. 49.100
  libswscale      5.  4.100 /  5.  4.100
  libswresample   3.  4.100 /  3.  4.100
  libpostproc    55.  4.100 / 55.  4.100
Devices:
 D. = Demuxing supported
 .E = Muxing supported
 --
 DE alsa            ALSA audio output
 DE fbdev           Linux framebuffer
 D  lavfi           Libavfilter virtual input device
 DE oss             OSS (Open Sound System) playback
  E sdl,sdl2        SDL2 output device
 DE sndio           sndio audio playback
 DE video4linux2,v4l2 Video4Linux2 output device
 D  x11grab         X11 screen capture, using XCB
  E xv              XV (XVideo) output device

and

$ v4l2-ctl --list-devices
bcm2835-codec (platform:bcm2835-codec):
    /dev/video10
    /dev/video11
    /dev/video12

H264 USB Camera: USB Camera (usb-3f980000.usb-1.2):
    /dev/video0
    /dev/video1
    /dev/video2
    /dev/video3

What "bcm2835" refer to? Is it some thing from the camera, or a chip in the pi? Then I do:

$ ffmpeg -f v4l2 -list_formats all -i /dev/video0
[video4linux2,v4l2 @ 0x24d3220] Compressed:       mjpeg :          Motion-JPEG : 1920x1080 1280x720 800x600 640x480 640x360 352x288 320x240 1920x1080
[video4linux2,v4l2 @ 0x24d3220] Raw       :     yuyv422 :           YUYV 4:2:2 : 640x480 800x600 640x360 352x288 320x240 640x480

Here I am a bit confused: this is about /dev/video0. I see an HD stream, but it seems to be Motion-JPEG, not H.264. And another one, which is raw yuyv422 and seems limited to a 4:3 aspect ratio and a maximum resolution of 800x600 :-/

Where are the references to the H.264 high-definition capabilities?

I am a bit confused at this stage. I was hoping to use the plain-vanilla H.264 output of the camera with the -vcodec copy flag to stream it over WLAN without touching it, i.e. without having to recompress it through ffmpeg, so with maximum quality and framerate performance.

Second question: can I consider (if this eventually happens to work for one stream) using two H.264 streams at the same time, one for motion(eye) with recording triggered by motion detection, and the other as an on-demand stream for the HomeKit Home app?

cdce8p commented 5 years ago

@tokamac I should probably have been a bit more specific :sweat_smile: I don't have any knowledge about anything related to ffmpeg and streaming video with Python yet. What I can do is help at the interface between Home Assistant and HAP-python and with the review process for Home Assistant. However, I do believe that, at least at the moment, the current HAP-python implementation of the Camera is incompatible with Home Assistant. The reason why I think so is that the camera accessory will create a new subprocess for the stream. That works fine for HAP-python (alone). However, since for Home Assistant HAP-python doesn't run in the main thread, this probably won't work for us. As I've said, I haven't tested it, so I might be wrong.

All together this will probably require a lot of tinkering and a lot of time, something I don't have at the moment.

RefineryX commented 5 years ago

@cdce8p I may be thinking out loud here, but did Home Assistant get a live camera stream feature recently? Could we tap into that for this to work?

tokamac commented 5 years ago

@cdce8p Ah! OK… I'll answer my own question, as I found out in the meantime: for my USB cam, /dev/video0 is for the MJPEG stream and /dev/video2 for the H.264 stream.
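
For reference, the same format listing used earlier can be pointed at the H.264 node to confirm what it offers:

ffmpeg -f v4l2 -list_formats all -i /dev/video2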

@SimplyGardner Yes, as of 0.90 (especially 0.92.2, I think, with some bugs chased down), HA has implemented live camera streams in cards, see https://www.home-assistant.io/blog/2019/04/24/release-92/#lovelace-streams-cameras

tagdara commented 5 years ago

at least at the moment, the current HAP-python implementation of the Camera is incompatible with Home Assistant. The reason why I think so is that the camera accessory will create a new subprocess for the stream. That works fine for HAP-python (alone). However, since for Home Assistant HAP-python doesn't run in the main thread, this probably won't work for us. As I've said I haven't tested it, so I might be wrong.

I don't use Home Assistant, but my own system has the same challenge with not running the hap-python component in the main thread. I ended up using an executor against a loop with asyncio.get_child_watcher(), and then doing executor.submit(self.driver.start) and it works fine.
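
A rough sketch of that approach (the accessory setup itself is the same as in the camera_main.py examples above; exact details depend on the HAP-python version):

import asyncio
from concurrent.futures import ThreadPoolExecutor

from pyhap.accessory_driver import AccessoryDriver

# Attach a child watcher to the main thread's event loop so the ffmpeg
# subprocesses spawned by the camera accessory can be reaped properly.
loop = asyncio.get_event_loop()
asyncio.get_child_watcher().attach_loop(loop)

driver = AccessoryDriver(port=51826)
# ... create and add the camera accessory here, as in the examples above ...

# Run the blocking driver.start() outside the main thread.
executor = ThreadPoolExecutor(max_workers=1)
executor.submit(driver.start)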

tokamac commented 5 years ago

I got it working right for my USB camera! Both snapshot and video stream in the Home app on my iPhone :)

I modified @warcanoid's camera_main.py with the options shown below.

Can someone explain the packet size, and whether I should rather stick to 1378 as in the examples above?

I also made minor changes to the text inscribed at the bottom of the still preview (syntax according to my country, and a bigger font, as the standard size was too small). I still had a minor issue including : between hours, minutes and seconds without breaking the code [EDIT: solved, just use %X instead of %H:%M:%S].

Here is my camera_main.py file:

"""Implementation of a HAP Camera
Modifications for current system:
FILE_SNAPSHOT   = '/tmp/snapshot.jpg'
FILE_PERSISTENT = '/var/lib/hap-python/accessory.state'
DEV_VIDEO       = '/dev/video2'
SCALE           = '1920x1080'
DATE_CAPTION    = '%A %-d %B %Y, %X'
IP_ADDRESS      = '192.168.0.2'

Note that the snapshot function adds a timestamp to the last image.
The font location has to be updated according to your system.
"""
import logging
import signal
import subprocess

from pyhap.accessory_driver import AccessoryDriver
from pyhap import camera

logging.basicConfig(level=logging.INFO, format="[%(module)s] %(message)s")

FILE_SNAPSHOT   = '/tmp/snapshot.jpg'
FILE_PERSISTENT = '/var/lib/hap-python/accessory.state'
DEV_VIDEO       = '/dev/video2'
SCALE           = '1920x1080'
DATE_CAPTION    = '%A %-d %B %Y, %X'
IP_ADDRESS      = '192.168.0.2'

# Specify the audio and video configuration that your device can support
# The HAP client will choose from these when negotiating a session.
options = {
    "video": {
        "codec": {
            "profiles": [
                camera.VIDEO_CODEC_PARAM_PROFILE_ID_TYPES["BASELINE"],
                camera.VIDEO_CODEC_PARAM_PROFILE_ID_TYPES["MAIN"],
                camera.VIDEO_CODEC_PARAM_PROFILE_ID_TYPES["HIGH"]
            ],
            "levels": [
                camera.VIDEO_CODEC_PARAM_LEVEL_TYPES['TYPE3_1'],
                camera.VIDEO_CODEC_PARAM_LEVEL_TYPES['TYPE3_2'],
                camera.VIDEO_CODEC_PARAM_LEVEL_TYPES['TYPE4_0'],
            ],
        },
        "resolutions": [
            # Width, Height, framerate
            [320, 240, 15],  # Required for Apple Watch
            [1920, 1080, 30],
        ],
    },
    "audio": {
        "codecs": [
            {
                'type': 'OPUS',
                'samplerate': 24,
            },
            {
                'type': 'AAC-eld',
                'samplerate': 16
            }
        ],
    },
    "srtp": True,
    "address": IP_ADDRESS,
    "start_stream_cmd":  (
      'ffmpeg -re -f video4linux2 -i ' + DEV_VIDEO + ' -threads 4 '
      '-vcodec h264_omx -an -pix_fmt yuv420p -r {fps} '
      '-b:v 2M -bufsize 2M '
      '-payload_type 99 -ssrc {v_ssrc} -f rtp '
      '-srtp_out_suite AES_CM_128_HMAC_SHA1_80 -srtp_out_params {v_srtp_key} '
      'srtp://{address}:{v_port}?rtcpport={v_port}&'
      'localrtcpport={v_port}&pkt_size=1316'),
}

class HAPCamera(camera.Camera):
    def get_snapshot(self, image_size):  # pylint: disable=unused-argument, no-self-use
        """Return a jpeg of a snapshot from the camera.
        Overwritten to store the snapshot in a central place
        """
        file_snapshot = '/tmp/snapshot.jpg'
        cmd = ['ffmpeg', '-f', 'video4linux2', '-i', DEV_VIDEO,
               '-update', '1', '-y', '-vframes', '2',
               '-vf', 'drawtext=\'fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf: text=%{localtime\:' + DATE_CAPTION + '}\': x=(w-tw)/2: y=h-(2*lh): fontcolor=white: fontsize=42',
               '-nostats', '-loglevel', '0', FILE_SNAPSHOT]
        returncode = subprocess.run(cmd)
        with open(FILE_SNAPSHOT, 'rb') as fp:
            return fp.read()

# Start the accessory on port 51826
driver = AccessoryDriver(port=51826, persist_file=FILE_PERSISTENT)
acc = HAPCamera(options, driver, "Camera")
driver.add_accessory(accessory=acc)

# We want KeyboardInterrupts and SIGTERM (terminate) to be handled by the driver itself,
# so that it can gracefully stop the accessory, server and advertising.
signal.signal(signal.SIGTERM, driver.signal_handler)
# Start it!
driver.start()

[screenshot: cam_gaia]

warcanoid commented 5 years ago

@tokamac Oh, great work! Do you have an IP camera, so we can follow your steps?

RefineryX commented 5 years ago

Is this for integrating HomeKit into Home Assistant?

tokamac commented 5 years ago

Some bug I saw: when my wife attempted to watch the stream on her own iPhone, she got an error message saying "another user is already accessing the same device" (or something along those lines). But the same error message eventually propagated to my iPhone too, so neither of us could access the stream anymore, even after killing and reopening Home.

Obviously this is not how things should behave. A second person trying to access the same stream should not make the first stream break and hang, then freeze and hold for everyone afterwards! Or the original stream should be properly cut in order to serve the latest device accessing it. I don't know if this kind of bug is related to HomeKit or HAP-python (especially hap_server.py or camera.py?)

Anyway, it could be useful to develop a virtual HomeKit switch that would restart the service (if one runs camera_main.py as a service) when triggered, in case of this kind of bug.

tokamac commented 5 years ago

@tokamac ohh great work! Do you have ip camera, so we can follow your steps?

@warcanoid Thanks! :) But no IP camera. I have a USB camera that streams H.264 in HD, this one: ELP-USBFHD06H-L37

@SimplyGardner for now I'm only trying to make it work in Apple Home, but nothing would prevent it from working in Home Assistant as a card. I use HA myself, but only to make all my Z-Wave accessories compatible with HomeKit. In the end I actually only use Siri and the Home app.

tokamac commented 5 years ago

So, it worked for a day. Now I cannot make the camera work a single time in the Home app. It displays the still snapshot, but no video stream anymore. I didn't change a line in the code, so I really do not understand, and do not know how to debug this (logs?).

I think I have an unresolved issue related to #140, #176, #190. BTW my router is an AirPort Extreme 802.11ac, so it has nothing to do with filtering of Bonjour; it looks more like stalled TCP sockets (?). But I wonder why this persists after a power off.

tokamac commented 5 years ago
  1. Run python3 camera_main.py and see if any errors come up in the log. If not, go ahead and add the accessory in homekit (code is shown in one of the first lines during startup).

@Dav0815 I'm stuck now because I can't figure out how to log this properly and read the journal log. Can you please tell me the command or the location of such a file?
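
If the accessory runs under a systemd unit like the sketch earlier in the thread, the log can usually be followed with journalctl (the unit name below is just the example used there); otherwise, running the script in the foreground prints the same messages to the terminal:

journalctl -u hap-camera.service -f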

camow7 commented 1 year ago

I got it working right for my USB camera! Both snapshot and video stream in the Home app on my iPhone :) ... "start_stream_cmd": ( 'ffmpeg -re -f video4linux2 -i ' + DEV_VIDEO + ' -threads 4 ' '-vcodec h264_omx -an -pix_fmt yuv420p -r {fps} ' '-b:v 2M -bufsize 2M ' '-payload_type 99 -ssrc {v_ssrc} -f rtp ' '-srtp_out_suite AES_CM_128_HMAC_SHA1_80 -srtp_out_params {v_srtp_key} ' 'srtp://{address}:{v_port}?rtcpport={v_port}&' 'localrtcpport={v_port}&pkt_size=1316'),

[screenshot: cam_gaia]

Just for anyone else who finds this issue: there seem to be problems with the h264_omx codec on the Raspberry Pi these days. I had to use h264_v4l2m2m to get this to work. My stream command looks like this:

    "start_stream_cmd": (
        'ffmpeg -re -f video4linux2 -i ' + DEV_VIDEO + ' -threads 4 '
        '-vcodec h264_v4l2m2m -an -pix_fmt yuv420p -r {fps} '
        '-b:v 2M -bufsize 2M '
        '-payload_type 99 -ssrc {v_ssrc} -f rtp '
        '-srtp_out_suite AES_CM_128_HMAC_SHA1_80 -srtp_out_params {v_srtp_key} '
        'srtp://{address}:{v_port}?rtcpport={v_port}&'
        'localrtcpport={v_port}&pkt_size=1316'),
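
A quick way to check whether a given ffmpeg build actually includes that encoder before editing the script is to list the available encoders (output varies by build):

ffmpeg -hide_banner -encoders | grep h264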