distinctjuggle closed this issue 5 months ago
Here are the logs when setting go2rtc's RTSP log to trace:
Am I doing something wrong? Why do I not see any backchannel audio negotiation?
Maybe `ffmeg` doesn't work because it is not `ffmpeg`?
Good catch, unfortunately it doesn't fix anything for the cameras in question. I can confirm it works with the one 2 way audio camera that I have.
Are there any other logs I could enable which would be more relevant? Would a trace of the `api` log be more applicable, given that these cameras are using isapi?
There are no logs that I can find anywhere in relation to this problem. What can I do to troubleshoot this? I'd really like to have this working. Thanks.
I've just started trying to get mine set up. To test whether the camera can receive/play audio you can do the following:

1. Make an 8k ulaw audio file:

ffmpeg -i duck.wav -ac 1 -ar 8000 -f mulaw duck.ulaw

2. Check the two-way audio channel:

curl --digest "http://$USER:$PASS@192.168.1.100/ISAPI/System/TwoWayAudio/channels"

You will receive some XML similar to this: `<?xml version="1.0" encoding="UTF-8"?> ...`

3. Close any existing session:

curl --digest -X PUT "http://$USER:$PASS@192.168.1.100/ISAPI/System/TwoWayAudio/channels/1/close"

4. Open a new session:

curl --digest -X PUT "http://$USER:$PASS@192.168.1.100/ISAPI/System/TwoWayAudio/channels/1/open"

5. Send the audio:

curl --digest -X PUT -H "Content-Length: 0" -H "Content-Type: application/octet-stream" -d @duck.ulaw "http://$USER:$PASS@192.168.1.100/ISAPI/System/TwoWayAudio/channels/1/audioData"
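The close/open/send commands above can be wrapped into one small script. This is a sketch under assumptions: host, credentials, and clip name are placeholders, and it uses `--data-binary` instead of `-d` because `-d` strips newline bytes from binary files. By default it only prints the curl commands; set `DRY_RUN=` (empty) to actually send.

```shell
#!/bin/sh
# Sketch of the close -> open -> send sequence above.
# CAM_HOST/CAM_USER/CAM_PASS and duck.ulaw are placeholders.
CAM_HOST="${CAM_HOST:-192.168.1.100}"
CAM_USER="${CAM_USER:-admin}"
CAM_PASS="${CAM_PASS:-password}"
CLIP="${CLIP:-duck.ulaw}"
BASE="http://$CAM_USER:$CAM_PASS@$CAM_HOST/ISAPI/System/TwoWayAudio/channels/1"

# DRY_RUN defaults to on: print the commands instead of running them.
DRY_RUN="${DRY_RUN-1}"

run() {
  if [ -n "$DRY_RUN" ]; then echo "curl $*"; else curl "$@"; fi
}

run --digest -X PUT "$BASE/close"   # drop any stale session
run --digest -X PUT "$BASE/open"    # open a fresh session
# --data-binary keeps the ulaw bytes intact; -d would mangle them
run --digest -X PUT -H "Content-Type: application/octet-stream" \
    --data-binary "@$CLIP" "$BASE/audioData"
```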
I have not yet been able to get my microphone audio to come out of the camera speaker. My goal is to get the HASS Frigate Lovelace Card two-way audio going. The magic webrtc page on the go2rtc dashboard asks for permission to access my microphone, and on my iPhone I get my own voice back through the speaker, but no matter what device or interface I use, nothing comes out of the actual camera end yet.
Please post back if you get that working with your Hikvision.
Has anyone figured this out? I am trying to get ISAPI commands to work with Hikvision for two-way audio. The response implies it has worked but no audio is heard.
curl -k -X GET 'http://[ip]/ISAPI/System/TwoWayAudio/channels' --digest -u user:pass
<?xml version="1.0" encoding="UTF-8"?>
<TwoWayAudioChannelList version="2.0" xmlns="http://www.hikvision.com/ver20/XMLSchema">
<TwoWayAudioChannel version="2.0" xmlns="http://www.hikvision.com/ver20/XMLSchema">
<id>1</id>
<enabled>false</enabled>
<audioCompressionType>AAC</audioCompressionType>
<speakerVolume>60</speakerVolume>
<noisereduce>false</noisereduce>
<audioInputType>MicIn</audioInputType>
<audioBitRate>32</audioBitRate>
<audioSamplingRate>32</audioSamplingRate>
</TwoWayAudioChannel>
</TwoWayAudioChannelList>
curl -k -X PUT 'http://[ip]/ISAPI/System/TwoWayAudio/channels/1/open' --digest -u user:pass
<?xml version="1.0" encoding="UTF-8"?>
<TwoWayAudioSession version="2.0" xmlns="http://www.hikvision.com/ver20/XMLSchema">
<sessionId>263430271</sessionId>
</TwoWayAudioSession>
curl -k -H 'Content-Length: 0' -H 'Content-Type: application/octet-stream' -X PUT -d @beeps-4.wav 'http://[ip]/ISAPI/System/TwoWayAudio/channels/1/audioData' --digest -u user:pass
HTTP/1.1 200 OK
Date: Tue, 23 Apr 2024 23:00:58 GMT
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Cache-Control: no-cache
Content-Length: 255
Connection: close
Content-Type: application/xml
<?xml version="1.0" encoding="UTF-8"?><ResponseStatus version="2.0" xmlns="http://www.hikvision.com/ver20/XMLSchema">
<requestURL></requestURL>
<statusCode>1</statusCode>
<statusString>OK</statusString>
<subStatusCode>ok</subStatusCode>
</ResponseStatus>
curl -k -X PUT 'http://[ip]/ISAPI/System/TwoWayAudio/channels/1/close' --digest -u user:pass
<?xml version="1.0" encoding="UTF-8"?><ResponseStatus version="2.0" xmlns="http://www.hikvision.com/ver20/XMLSchema">
<requestURL></requestURL>
<statusCode>1</statusCode>
<statusString>OK</statusString>
<subStatusCode>ok</subStatusCode>
</ResponseStatus>
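One thing worth checking in the GET reply above: `<enabled>false</enabled>`. My understanding (an assumption, so verify against your camera's ISAPI docs) is that the channel config can be PUT back with `enabled` flipped to `true`. A dry-run sketch that only prints the command for review; the host, credentials, and endpoint behaviour are assumptions, and the XML body mirrors the elements shown in the GET response above:

```shell
#!/bin/sh
# Hypothetical sketch: re-PUT the TwoWayAudioChannel XML with enabled=true.
# Endpoint behaviour is assumed, so this prints the command instead of
# sending it. CAM and USERPASS are placeholders.
CAM="192.168.1.100"; USERPASS="admin:password"
XML='<?xml version="1.0" encoding="UTF-8"?>
<TwoWayAudioChannel version="2.0" xmlns="http://www.hikvision.com/ver20/XMLSchema">
<id>1</id><enabled>true</enabled><audioCompressionType>AAC</audioCompressionType>
</TwoWayAudioChannel>'
printf '%s\n' "curl --digest -u $USERPASS -X PUT -H 'Content-Type: application/xml' -d '$XML' 'http://$CAM/ISAPI/System/TwoWayAudio/channels/1'"
```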
To start, make sure your audio file compression matches what your camera is set to. In your example it is set to AAC. My example is based on ulaw, which is needed to work with iphone/ipad. You can see at the beginning where I do the conversion using ffmpeg.
curl --digest "http://$USER:$PASS@192.168.1.100/ISAPI/System/TwoWayAudio/channels"
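To script the "match the codec" check, you can pull `audioCompressionType` out of the channels reply and pick the matching ffmpeg conversion. A sketch assuming the XML shape shown earlier in the thread; the `G.711ulaw` value and the conversion parameters are assumptions, so check what your camera actually reports:

```shell
#!/bin/sh
# Extract audioCompressionType from a channels reply and suggest a matching
# ffmpeg conversion. The XML here is inlined sample data from the thread.
xml='<TwoWayAudioChannel><audioCompressionType>AAC</audioCompressionType></TwoWayAudioChannel>'
codec=$(printf '%s' "$xml" | sed -n 's:.*<audioCompressionType>\([^<]*\)</audioCompressionType>.*:\1:p')
case "$codec" in
  AAC)       echo "ffmpeg -i clip.wav -ac 1 -c:a aac clip.aac" ;;
  G.711ulaw) echo "ffmpeg -i clip.wav -ac 1 -ar 8000 -f mulaw clip.ulaw" ;;
  *)         echo "unhandled codec: $codec" ;;
esac
```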
Like, how do I start doing this? Sorry, I'm new to all this stuff, so can you please elaborate?
So to start, where would I create a ulaw file? "Make an 8k ulaw audio file:" — is that a service call on Home Assistant, or do I have to install a program on my computer?
or do i have to install a program on my computer?
If the answer I provided above does not make immediate sense to you, it may be that you are out of your depth.
Anyhow...
Step 1: Get a bash shell open on a Linux system with the "ffmpeg" and "curl" commands installed (and understand how to use them)
Step 2: Follow the steps I already listed...
OK, after some research I managed to do steps 2-4 using the Home Assistant SSH addon; here is a sample screenshot.
What I don't understand is step #1, making an 8k ulaw audio file, and where to put the converted file. What I did was convert an mp3 to ulaw using an online converter website; after the conversion my mp3 became a wav file, not ulaw, and I don't understand why. Also, where would I run the command in step #1? "ffmpeg -i duck.wav -ac 1 -ar 8000 -f mulaw duck.ulaw"
I know I don't know much about this, but I'm willing to learn. Thanks for guiding me.
OK, I managed to convert my audio to ulaw using ffmpeg, but when I try to send the audio to my camera I get this error:
➜ ~ curl --digest -X PUT -H "Content-Length: 0" -H "Content-Type: application/octet-stream" -d @ding_dong.ulaw "http://admin:password@192.168.0.202/ISAPI/System/TwoWayAudio/channels/1/audioData"
curl: Failed to open ding_dong.ulaw
curl: option -d: error encountered when reading a file
curl: try 'curl --help' or 'curl --manual' for more information
Maybe I have not declared where my file directory is? I'm confused how the command will know where my converted ulaw audio is. By the way, I put ding_dong.ulaw in my Home Assistant config directory's /upload/ folder.
It sounds like ding_dong.ulaw isn't in the directory you are running curl from. Try giving it the full path to the file.
If your configuration is similar to mine, the correct command should be:
curl --digest -X PUT -H "Content-Length: 0" -H "Content-Type: application/octet-stream" -d @/config/upload/ding_dong.ulaw "http://admin:password@192.168.0.202/ISAPI/System/TwoWayAudio/channels/1/audioData"
Don't forget: you need to make sure the connection is closed, and then "open" the connection, before sending the file:
curl --digest -X PUT "http://admin:password@192.168.0.202/ISAPI/System/TwoWayAudio/channels/1/close"
curl --digest -X PUT "http://admin:password@192.168.0.202/ISAPI/System/TwoWayAudio/channels/1/open"
I tried your code above and indicated the audio file directory, but after I hit enter, still no ding dong from the camera. And it gives no reply, unlike the closing and opening commands. Is that normal? Does it mean it went through? How can I debug whether the audio was sent?
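On the "how can I debug if the audio is sent" question: curl is silent by default when the reply has no body, so you can ask it to show what came back. A sketch with placeholder host, credentials, and file path; `-v` shows the digest handshake and response headers, and `-w '%{http_code}'` prints the HTTP status at the end. This only prints the command rather than running it:

```shell
#!/bin/sh
# Build a verbose variant of the audioData upload so the camera's HTTP status
# is visible. Host, credentials, and file path are placeholders.
URL="http://admin:password@192.168.0.202/ISAPI/System/TwoWayAudio/channels/1/audioData"
CMD="curl --digest -v -X PUT -H 'Content-Type: application/octet-stream' --data-binary @/config/upload/ding_dong.ulaw -w '%{http_code}\n' '$URL'"
printf '%s\n' "$CMD"
# A trailing 200 (plus an XML statusString of OK) means the camera accepted
# the upload; anything else points at auth, path, or session-state problems.
```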
I have the same issue: no sound from the camera at all after sending 8 kHz ulaw.
Are your cameras configured to use ulaw?
Yes, they are. But in fact I got it working by sending the isapi:// URL directly to the camera instead of to the NVR on port 65001, which the NVR forwards.
OP here.
I decided to give this another look due to the new attention this thread has received. I'm still unable to get it to work. Here are some fairly in-depth logs from go2rtc:
Here we have the camera config in go2rtc's config file:
api:
  tls_listen: ":443"
  tls_cert: | # default "", PEM-encoded fullchain certificate for HTTPS
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  # ...and a key here too - this part works

streams:
  DenLSU_sub_new:
    - "rtsp://admin:Password@camera_ip:554/Streaming/Channels/101"
    #- "ffmpeg:DenLSU_sub#audio=opus"
    - "isapi://admin:Password@camera_ip:80/"

webrtc:
  candidates:
    - 192.168.0.10:8555

ffmpeg:
  bin: ffmpeg

log:
  level: info # default level
  api: trace
  exec: debug
  ngrok: info
  rtsp: trace
  streams: error
  webrtc: fatal
I've of course got the camera itself set to AAC for audio, though I don't see any options for audio inputs, I might add. It doesn't work in my browser for two-way audio (my ultimate goal here), and it doesn't work with a local .aac audio file. That's what the logs above are showing.
Any ideas? Thanks.
Do you use an NVR or camera direct?
Do you use an NVR or camera direct?
Actually, I use a DVR connected to my home network. The microphone, camera, and speaker are all connected to the DVR. I can do two-way talk using Frigate built on go2rtc, but I cannot make sending an audio file to the camera work.
I made an automation so that when someone presses the doorbell button, the camera pops up on my wall tablet; this works fine. What I want to add is a chime or a pre-recorded audio clip playing on the speaker every time the doorbell is pressed, so the person waiting outside knows it went through. Hope you get the idea.
Try setting the isapi URL directly to the camera instead of the NVR. If you use the camera virtual host, you can set the URL to port 65001 for camera 1, etc.
Do you use an NVR or camera direct?
If you were asking me, I don't use a hardware NVR. As per the config above, you can see I am connecting to the camera directly. I'm not having any luck.
Try setting the isapi URL directly to the camera instead of the NVR. If you use the camera virtual host, you can set the URL to port 65001 for camera 1, etc.
It's an analog camera with a BNC connector; there are no settings on the camera itself. All settings are made through the DVR. All the camera does is send a video feed to the DVR.
Hi all... I don't have the bandwidth to break down anyone's config or do step-by-step diagnostics, but here are some tips that got me up and running:
1. Make sure your camera is using ulaw if you want to use webrtc. WebRTC requires g.711u (a.k.a. ulaw, a.k.a. PCMU). WebRTC is required for iPhone if you want audio. (Ref: https://docs.frigate.video/configuration/live/)
2. If you also want MSE, then use ffmpeg to make AAC audio available (MSE requires AAC). MSE is supported in browsers and may be faster depending on your setup. I find MSE faster than webrtc on non-mobile devices.
3. To do low-level debugging, use tcpdump to capture your traffic and then open it in Wireshark. You can then see what is succeeding and what is failing.
4. You may want to consider pinning your go2rtc to v1.9.1. I have been experiencing serious issues with go2rtc > 1.9.1 but do not know the cause. YMMV.
Example config that works with go2rtc v1.9.1 using Frigate HASS Card and Hikvision DS-2CD2386G2-ISU/SL, with the Frigate HASS card using "Front_Door-Intercom":
Front_Door-Detect:
  - rtsp://<USER>:<PASS>@192.168.1.10:554/Streaming/Channels/102
Front_Door-Record:
  - rtsp://<USER>:<PASS>@192.168.1.10:554/Streaming/Channels/101
  - ffmpeg:Front_Door-Record#audio=aac
Front_Door-Live:
  - rtsp://<USER>:<PASS>@192.168.1.10:554/Streaming/Channels/102
  - ffmpeg:Front_Door-Live#audio=aac
Front_Door-Intercom:
  - rtsp://<USER>:<PASS>@192.168.1.10:554/Streaming/Channels/102
  - isapi://<USER>:<PASS>@192.168.1.10:80/
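To expand on the tcpdump tip above: capture just the camera's HTTP traffic to a pcap file, then open it in Wireshark and filter for the ISAPI calls. The interface name and camera IP below are placeholders, and this sketch only prints the commands, since an actual capture needs root:

```shell
#!/bin/sh
# Print a capture command for the camera's ISAPI (HTTP) traffic, plus the
# Wireshark display filter to inspect it. IFACE and CAM are placeholders.
IFACE="${IFACE:-eth0}"
CAM="${CAM:-192.168.1.10}"
printf '%s\n' \
  "sudo tcpdump -i $IFACE -w /tmp/isapi.pcap host $CAM and tcp port 80" \
  "# then open /tmp/isapi.pcap in Wireshark and apply the display filter:" \
  '# http.request.uri contains "TwoWayAudio"'
```

In the capture you can see whether the digest handshake completes, whether the audioData PUT carries a body, and what the camera actually answers.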
WebRTC doesn't require g.711u. It typically uses Opus for audio, and even supports a handful of other codecs too. You can specify an ffmpeg audio codec change within go2rtc for whatever codec you wish to use on the user-facing end. The camera might require something specific for two-way audio support, but that is entirely independent of WebRTC. Same setup with MSE as I understand it, but with different codec support.
Either way, the codec I'm using doesn't seem to change whether my cameras work or not. A packet inspection isn't going to change much either, since via the HTTP PUT uploads the camera is clearly receiving the packets and providing an "ok" response. I know I won't be buying any more of these cameras in the future where I need two-way audio support. They're pieces of junk, software/firmware-wise.
I'll probably look back to Dahua, and see if they have a large sensor product like their 5442's but now with ONVIF 2 way audio support.
I just screwed around with it some more, and if I set my codecs in use on my camera to ulaw (I tried this before, but not in combination with the following steps), then set my go2rtc config as follows:
LSU_Testing:
  - "rtsp://admin:Password@IP:554/Streaming/Channels/101"
  - "isapi://admin:Password@IP:80/"
  - "ffmpeg:LSU_Testing#audio=ulaw"
Then if I start the 2 way audio stream via go2rtc's interface, and THEN I send an HTTP PUT to close and then open the audio, go2rtc's 2 way audio will work.
I then rebooted my cameras, and they seem to work again? It's so weird that the PUT commands would return a status of "ok" when they clearly weren't in a state to receive audio. Yet another broken aspect of these cameras.
I guess they were stuck in a closed state, somehow? It seems to work now after rebooting the cameras again. Firmware versions 5.7.3 and 5.7.13.
Edit: specifying `- "ffmpeg:LSU_Testing#audio=opus"` also works for my cameras, with the camera's audio setting set to ulaw.
I'm wondering if it's a camera configuration issue on my end, somehow. However, it is present with TWO identical cameras, model: DS-2CD2387G2-LSU/SL
I had it working some months ago on this camera, and it broke after some configuration change to either my camera or go2rtc. Tested on go2rtc versions 1.6.2 and 1.8.5 (latest).
go2rtc.yaml:
I have a different (not Hikvision / ISAPI) camera where 2 way audio works fine over ONVIF Profile T via go2rtc. I can't figure out what I'm doing wrong or where to start troubleshooting this. Sending it audio clips via something like
ffmeg:https://samplelib.com/lib/preview/mp3/sample-3s.mp3#audio=pcma#input=file
also doesn't work (though I can't seem to get this to work on the working two-way audio camera either). There are no logs that I can find anywhere in relation to this problem. What can I do to troubleshoot this? I'd really like to have this working. Thanks.