You can fork the audio stream using the config verb and redirect it to jambonz-api-server, which transcodes the stream into mp3/wav format and uploads it to the requested storage.
Thank you @avoylenko, any chance for an example?
Built-in recording is forked only if the call goes into the 'In Progress' state, so your call might skip this state or already be in it when it hits your application. The solution might be to inject the config verb manually (it does not block other verbs from proceeding) at the beginning of your application flow, like this:
{
  "verb": "config",
  "listen": {
    "url": "JAMBONZ_API_SERVER/record/RECORDING_VENDOR",
    "disableBidirectionalAudio": true,
    "mixType": "stereo",
    "passDtmf": true,
    "wsAuth": {
      "username": "record",
      "password": "record"
    }
  }
}
where RECORDING_VENDOR can be aws_s3, s3_compatible, google or azure, depending on your recording destination storage.
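For example, assuming your jambonz API server is reachable at jambonz.example.com (placeholder hostname) and you record to an S3 bucket, the url line would be filled in roughly like this:
"url": "https://jambonz.example.com/api/v1/record/aws_s3"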
Thank you @avoylenko, I am trying:
curl -X POST "https://xxxx.io/api/v1/Accounts/398961ca-6eac-482a-a1b3-xxxxx/Calls" \
-H "Accept: application/json" \
-H "Authorization: Bearer 1cf2f4f4-64c4-4249-9a3e-xxxxx" \
-H "Content-Type: application/json" \
-d '{"app_json": "[{\"verb\":\"config\",\"listen\":{\"url\":\"https://xxxx.io/api/v1/record/s3_compatible\",\"disableBidirectionalAudio\":true,\"mixType\":\"stereo\",\"passDtmf\":true,\"wsAuth\":{\"username\":\"record\",\"password\":\"record\"}}},{\"verb\":\"listen\",\"url\":\"ws://0.0.0.0:3599/connection/8b395fff-eeee-424f-b826-xxxxx/4ca2fb6a-8636-4f2e-96ff-xxxxxx/%2B195439xxx\",\"sampleRate\":16000,\"bidirectionalAudio\":{\"enabled\":true,\"streaming\":true,\"sampleRate\":16000}}]",
"call_hook": "https://public-apps.jambonz.us/hello-world",
"call_status_hook":"https://public-apps.jambonz.us/call-status",
"from": "+1561286xxxx",
"to": {
"type": "phone",
"number": "+195439xxxx"
},
"speech_synthesis_vendor": "google",
"speech_synthesis_language": "en-US",
"speech_synthesis_voice": "Wavenet-A",
"speech_recognizer_vendor": "Google",
"speech_recognizer_vendor": "google",
"speech_recognizer_language": "en-US"
}'
But it does not work. Is my order of verbs correct?
@mercuryyy you should not pass the app_json property when creating a new call. It's not in the documentation; the property is injected by the API server when you pass the application_sid property. I believe this is the reason why automatic call recording is not started for your application.
Try like this, but set the correct id for the application_sid property:
curl -X POST "https://xxxx.io/api/v1/Accounts/398961ca-6eac-482a-a1b3-xxxxx/Calls" \
-H "Accept: application/json" \
-H "Authorization: Bearer 1cf2f4f4-64c4-4249-9a3e-xxxxx" \
-H "Content-Type: application/json" \
--data '{
"application_sid": "0d27aee6-d1e6-4707-bdde-1cc20b50ca88",
"from": "+1561286xxxx",
"to": {
"type": "phone",
"number": "+195439xxxx"
}
}'
See https://api.jambonz.org/#243a2edd-7999-41db-bd0d-08082bbab401 for more details.
@avoylenko So, using the webhook and setting the verbs like so:
[
{
"verb": "config",
"listen": {
"url": "https://xxxx.io/api/v1/record/aws_s3",
"bidirectionalAudio": {
"enabled": false,
"streaming": true,
"sampleRate": 16000
},
"mixType": "stereo",
"passDtmf": true,
"wsAuth": {
"username": "jambonz",
"password": "913b18ad-8ca9-45fe-9e7e-xxxxxx"
}
}
},
{
"verb": "listen",
"url": "ws://0.0.0.0:3599/connection/8b395fff-eeee-424f-b826-da530dad9b12/4ca2fb6a-8636-4f2e-96ff-8966c5e26f8e/+1954397380",
"sampleRate": 16000,
"bidirectionalAudio": {
"enabled": true,
"streaming": true,
"sampleRate": 16000
}
}
]
(tried many configurations)
The feature server logs show this error:
{
"level": 30,
"time": 1725395041112,
"pid": 16486,
"hostname": "xxxx",
"callId": "4c05689d-e4d5-123d-eca0-42010a8e0007",
"callSid": "f5265c81-dcbc-4367-9bb7-14740b77c123",
"accountSid": "2896c59b-d3fc-449b-9e9c-7432886029d6",
"callingNumber": "xxx-xxxx-com",
"calledNumber": "app-xxxx-017d-4f9e-bdf1-404d0414503e",
"traceId": "f6e13f04bcce743cd27b5730c65c4551",
"err": {
"type": "Error",
"message": "listenOptions: missing value for enable",
"stack": "Error: listenOptions: missing value for enable\n at validateVerb (/home/admin/apps/jambonz-feature-server/node_modules/@jambonz/verb-specifications/jambonz-app-json-validation.js:112:34)\n at validateVerb (/home/admin/apps/jambonz-feature-server/node_modules/@jambonz/verb-specifications/jambonz-app-json-validation.js:100:9)\n at makeTask (/home/admin/apps/jambonz-feature-server/lib/tasks/make_task.js:15:3)\n at /home/admin/apps/jambonz-feature-server/lib/middleware.js:404:66\n at Array.map (<anonymous>)\n at Object.invokeWebCallback [as handle] (/home/admin/apps/jambonz-feature-server/lib/middleware.js:404:51)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"
},
"msg": "Error retrieving or parsing application: listenOptions: missing value for enable"
}
I also tried with:
"disableBidirectionalAudio":true,
"mixType":"stereo",
"passDtmf":true,
There seems to be some issue with listen used under config.
Right, we are missing the enable property. The correct way of setting the listen property for the config verb is:
{
"verb": "config",
"listen": {
"url": "https://xxxx.io/api/v1/record/aws_s3",
"bidirectionalAudio": {
"enabled": false,
"streaming": true,
"sampleRate": 16000
},
"enable": true,
"mixType": "stereo",
"passDtmf": true,
"wsAuth": {
"username": "jambonz",
"password": "913b18ad-8ca9-45fe-9e7e-xxxxxx"
}
}
}
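Putting it together with the listen verb you already had (same placeholder host, credentials and WebSocket endpoint as in your earlier message), the full webhook payload would look roughly like this, with config placed first so the recording fork starts before the other verbs run:
[
  {
    "verb": "config",
    "listen": {
      "url": "https://xxxx.io/api/v1/record/aws_s3",
      "enable": true,
      "mixType": "stereo",
      "passDtmf": true,
      "bidirectionalAudio": {
        "enabled": false,
        "streaming": true,
        "sampleRate": 16000
      },
      "wsAuth": {
        "username": "jambonz",
        "password": "913b18ad-8ca9-45fe-9e7e-xxxxxx"
      }
    }
  },
  {
    "verb": "listen",
    "url": "ws://0.0.0.0:3599/connection/8b395fff-eeee-424f-b826-da530dad9b12/4ca2fb6a-8636-4f2e-96ff-8966c5e26f8e/+1954397380",
    "sampleRate": 16000,
    "bidirectionalAudio": {
      "enabled": true,
      "streaming": true,
      "sampleRate": 16000
    }
  }
]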
Ok, that did the trick. Thank you so much for the follow-up!
When "Record calls" is checked in "applications" settings is does not record the calls, but when checked "Record all calls for this account" Account settings it does.
We want to record call only per specific applications.
On another note is it possible to request the call be recorded when initiaing a call via