arut / nginx-rtmp-module

NGINX-based Media Streaming Server
http://nginx-rtmp.blogspot.com
BSD 2-Clause "Simplified" License

adding unique stream identifier beyond $name for exec_publish and exec_publish_done #537

Open rgbohner opened 9 years ago

rgbohner commented 9 years ago

Hello,

We have a system that needs to communicate with our database at the start and end of stream publishing, and it must pass a unique identifier so the database knows which stream started or stopped. Our application can have the same stream name running from the same IP address, so we need an identification mechanism beyond the $name and $addr arguments. Currently we use the recorder's record_suffix to identify streams:

            # on publish
            exec_publish bash -c "script '$name' '$addr' &> log.log";

            # Recorder Section
            recorder myrecorder {
                record all;
                record_suffix -%b_%d_%y_%T.flv;
                record_path /usr/local/nginx/recordings;
                exec_record_done bash -c "script '$basename' &> log.log";
            }

However, we also have applications that we would like to publish to without recording, while still uniquely identifying which streams start and stop. exec_publish and exec_publish_done seem like the right hooks, but the arguments available to those directives don't appear to allow specific identification of streams. Is there an argument that uniquely identifies a stream beyond $name? If not, would others find it advantageous to add more arguments to the exec family of directives, such as $time (when the stream started) or $socketid (the socket id of the particular stream)?

Thanks

yverbin commented 9 years ago

Use a combination of the name, app and clientid parameters, or a hash of that combination, as the unique id.
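A minimal sketch of that suggestion. The script path, variable order, and example values are hypothetical; $app, $name and $clientid are variables nginx-rtmp substitutes into exec directives, and $clientid differs per connection:

```shell
#!/bin/bash
# Hypothetical wiring in nginx.conf:
#   exec_publish /usr/local/nginx/scripts/session_id.sh $app $name $clientid;
# $clientid is per-connection, so the hash below distinguishes two
# publishers that share the same stream name and address.
app="${1:-live}"            # example defaults so the sketch runs standalone
name="${2:-mystream}"
clientid="${3:-42}"
unique_id=$(printf '%s/%s/%s' "$app" "$name" "$clientid" | md5sum | cut -d' ' -f1)
echo "$unique_id"
```

Because exec_publish_done fires for the same connection, it receives the same $clientid, so both hooks can compute the same id.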

gigablox commented 9 years ago

I second this. It's not so much a unique identifier for each chunk as one for the session as a whole: a unique ID carried between on_publish and on_publish_done would be really helpful.

mhrubel commented 7 years ago

Hi there, I am having trouble using exec_publish and exec_publish_done to record stream-key status in my database. My idea is to run an online.sh file when the stream key goes live and an offline.sh file when publishing is done. I am also not sure my two bash scripts are correct; I have searched online for over 10 days with no luck. The MySQL command itself is correct: when I run it directly in an Ubuntu (14.04 and 16.04) terminal it works without error.

MySQL command for online: `mysql -u streams -pdbpass streams -e "UPDATE stream_users SET user_keystatus = 1 WHERE stream_users.ID = 2"`

MySQL command for offline: `mysql -u streams -pdbpass streams -e "UPDATE stream_users SET user_keystatus = 0 WHERE stream_users.ID = 2"`

These commands set the user_keystatus column to 1 or 0 in the database.

File: online.sh

```shell
#!/bin/bash
mysql -u streams -pdbpass streams -e "UPDATE stream_users SET user_keystatus = 1 WHERE stream_users.ID = 2"
```

File: offline.sh

```shell
#!/bin/bash
mysql -u streams -pdbpass streams -e "UPDATE stream_users SET user_keystatus = 0 WHERE stream_users.ID = 2"
```

So I want to know how to run these two .sh files: one when the stream key starts publishing, the other when publishing ends.

If anyone knows how to run them, and what the correct script contents should be, please help; I have been struggling with this for almost 10 days now!

Best regards, Mahmudul Hasan Rubel. Thanks.
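For the question above, a minimal nginx.conf sketch (the directory path is hypothetical; the scripts must be executable, e.g. `chmod +x online.sh offline.sh`, and must begin with `#!/bin/bash` rather than `!/bin/bash` as posted):

```nginx
        application live {
            live on;

            # run online.sh when the stream key starts publishing
            exec_publish /usr/local/nginx/scripts/online.sh;

            # run offline.sh when publishing ends
            exec_publish_done /usr/local/nginx/scripts/offline.sh;
        }
```

Both directives simply run the named command with the publisher's session variables available for substitution, so no extra glue is needed beyond making the scripts executable.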

sampleref commented 7 years ago

+1. I too would like a unique-id style variable to tie together events like exec_publish, exec_publish_done and exec_record_done for each stream session.
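One way to sketch that synchronization today is a single hook script keyed on a hash of the session variables. The wiring below is hypothetical (and whether $clientid reaches exec_record_done should be checked against the directives' documentation), assuming something like:
`exec_publish /path/hook.sh publish $app $name $clientid;` and the analogous lines for the other two directives:

```shell
#!/bin/bash
# Correlate publish / publish_done / record_done for one session by keying
# a state file on a hash of the session variables. Example defaults let the
# sketch run standalone; the real script would only see "$1".."$4".
set -eu
event="${1:-publish}"; app="${2:-live}"; name="${3:-demo}"; clientid="${4:-7}"
id=$(printf '%s/%s/%s' "$app" "$name" "$clientid" | md5sum | cut -d' ' -f1)
state="/tmp/stream-$id"

case "$event" in
  publish)                                  # session started: remember when
    date +%s > "$state" ;;
  publish_done)                             # session ended: report duration
    start=$(cat "$state"); rm -f "$state"
    echo "session $id ran $(( $(date +%s) - start ))s" ;;
  record_done)                              # recording for the same session
    echo "recording finished for session $id" ;;
esac
```

All three hooks compute the same `$id` from the same inputs, which is the "sync between events" asked for above without any new module variable.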