rastawarez opened this issue 3 years ago
I restarted the container and found the same problem, but now with the same stream. I will check the media (the MP3 files) and share them. NOTE: the stream was playing music at the start.
Type "help <command>" for more information.
END
main.queue
1 2 3 4
END
request.on_air
1
END
request.trace 1
[2021/04/10 08:29:21] Pushed ["annotate:liq_fade_in=5.,liq_fade_out=5.,liq_cue_in=0,liq_cue_out=1001.02700000000000000000:/nfsshare/mp3/130000/130804.MP3";...].
[2021/04/10 08:29:21] "annotate:liq_fade_in=5.,liq_fade_out=5.,liq_cue_in=0,liq_cue_out=1001.02700000000000000000:/nfsshare/mp3/130000/130804.MP3" entered the secondary queue : position #2.
[2021/04/10 08:42:07] #1 in secondary queue
[2021/04/10 08:42:07] Entering the primary queue.
[2021/04/10 08:42:07] Resolving "annotate:liq_fade_in=5.,liq_fade_out=5.,liq_cue_in=0,liq_cue_out=1001.02700000000000000000:/nfsshare/mp3/130000/130804.MP3" (timeout 20s)...
[2021/04/10 08:42:07] Pushed ["/nfsshare/mp3/130000/130804.MP3";...].
[2021/04/10 08:47:30] Currently on air.
END
Connection timed out.. Bye!
root@5443ddddd304:/opt/stream-producer# nc -U sockets/BOXPA1:xmx_19
request.on_air
1
END
request.trace 1
[2021/04/10 08:29:21] Pushed ["annotate:liq_fade_in=5.,liq_fade_out=5.,liq_cue_in=0,liq_cue_out=1001.02700000000000000000:/nfsshare/mp3/130000/130804.MP3";...].
[2021/04/10 08:29:21] "annotate:liq_fade_in=5.,liq_fade_out=5.,liq_cue_in=0,liq_cue_out=1001.02700000000000000000:/nfsshare/mp3/130000/130804.MP3" entered the secondary queue : position #2.
[2021/04/10 08:42:07] #1 in secondary queue
[2021/04/10 08:42:07] Entering the primary queue.
[2021/04/10 08:42:07] Resolving "annotate:liq_fade_in=5.,liq_fade_out=5.,liq_cue_in=0,liq_cue_out=1001.02700000000000000000:/nfsshare/mp3/130000/130804.MP3" (timeout 20s)...
[2021/04/10 08:42:07] Pushed ["/nfsshare/mp3/130000/130804.MP3";...].
[2021/04/10 08:47:30] Currently on air.
END
Connection timed out.. Bye!
root@5443ddddd304:/opt/stream-producer# nc -U sockets/BOXPA1:xmx_19
request.on_air
1
END
request.trace 1
[2021/04/10 08:29:21] Pushed ["annotate:liq_fade_in=5.,liq_fade_out=5.,liq_cue_in=0,liq_cue_out=1001.02700000000000000000:/nfsshare/mp3/130000/130804.MP3";...].
[2021/04/10 08:29:21] "annotate:liq_fade_in=5.,liq_fade_out=5.,liq_cue_in=0,liq_cue_out=1001.02700000000000000000:/nfsshare/mp3/130000/130804.MP3" entered the secondary queue : position #2.
[2021/04/10 08:42:07] #1 in secondary queue
[2021/04/10 08:42:07] Entering the primary queue.
[2021/04/10 08:42:07] Resolving "annotate:liq_fade_in=5.,liq_fade_out=5.,liq_cue_in=0,liq_cue_out=1001.02700000000000000000:/nfsshare/mp3/130000/130804.MP3" (timeout 20s)...
[2021/04/10 08:42:07] Pushed ["/nfsshare/mp3/130000/130804.MP3";...].
[2021/04/10 08:47:30] Currently on air.
END
^C
root@5443ddddd304:/opt/stream-producer# date
Sat Apr 10 09:11:39 UTC 2021
root@5443ddddd304:/opt/stream-producer#
Thanks for the report, will look at it very soon. Do you have any log of the script too?
Thank you, I am going to collect logs and scripts. I found the same problem again in a different stream.
END
10.autostart
on
END
main.primary_queue
2 7 3
END
request.trace 2
[2021/04/10 20:20:05] Pushed ["annotate:liq_fade_in=5.,liq_fade_out=5.,liq_cue_in=0,liq_cue_out=163.00000000000000000000:/nfsshare/mp3/165000/165175.MP3";...].
[2021/04/10 20:20:05] "annotate:liq_fade_in=5.,liq_fade_out=5.,liq_cue_in=0,liq_cue_out=163.00000000000000000000:/nfsshare/mp3/165000/165175.MP3" entered the secondary queue : position #1.
[2021/04/10 20:23:46] Entering the primary queue.
[2021/04/10 20:23:46] Resolving "annotate:liq_fade_in=5.,liq_fade_out=5.,liq_cue_in=0,liq_cue_out=163.00000000000000000000:/nfsshare/mp3/165000/165175.MP3" (timeout 20s)...
[2021/04/10 20:23:46] Pushed ["/nfsshare/mp3/165000/165175.MP3";...].
[2021/04/10 20:26:36] Currently on air.
END
10.status
on
END
^C
root@91b25226ea14:/opt/stream-producer# date
Sun Apr 11 09:13:33 UTC 2021
root@91b25226ea14:/opt/stream-producer#
> Thanks for the report, will look at it very soon. Do you have any log of the script too?

Originally posted by @toots in https://github.com/savonet/liquidsoap/issues/1563#issuecomment-817151978
SCRIPT
#!/usr/bin/liquidsoap
%include "../configuration"
STREAM_ID=argv(1)
set("log.file",false)
set("log.stdout",true)
set("log.level",3)
set("server.socket",true)
set("init.allow_root",true)
set("server.socket.path", "sockets/#{SOURCE_ID}:#{STREAM_PREFIX}#{STREAM_ID}")
set("init.concurrent",true)
set("scheduler.generic_queues",1)
set("scheduler.fast_queues",1)
set("scheduler.non_blocking_queues",1)
MOUNTHLS="#{MOUNT_PATH}/#{STREAM_PREFIX}#{STREAM_ID}"
MOUNT="#{STREAM_PREFIX}#{STREAM_ID}"
print(MOUNT)
print(MOUNTHLS)
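# Publish the current track metadata to Redis: a global "metan" channel
# plus a per-stream key and channel.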
def store_meta(m) =
b64=base64.encode(json_of(m))
system("redis-cli -u #{REDIS_SERVER_URI} PUBLISH metan #{b64}")
system("redis-cli -u #{REDIS_SERVER_URI} SET #{SOURCE_ID}-#{STREAM_ID} #{b64}")
system("redis-cli -u #{REDIS_SERVER_URI} PUBLISH #{SOURCE_ID}-#{STREAM_ID} #{b64}")
end
def add_meta1(m) =
[("streamid","#{STREAM_ID}")]
end
def add_meta2(m) =
[("sourcesrvid","#{SOURCE_ID}")]
end
def add_meta3(m) =
[("mount","#{MOUNT}")]
end
def apply_metadata(m)=
print(m)
end
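# Main music queue: add stream/source/mount metadata, publish it to Redis,
# then strip blanks, amplify, cue-cut and crossfade.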
s = request.queue(id="main")
s = map_metadata(add_meta1,s)
s = map_metadata(add_meta2,s)
s = map_metadata(add_meta3,s)
s = on_metadata(store_meta,s)
s = strip_blank(s)
s = amplify(3.0,s)
s = drop_metadata(s)
s = cue_cut(s)
s = crossfade(s)
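# On-demand "adv" queue; the track-insensitive fallback below gives it
# priority over the main music source.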
adv = request.queue(id="adv")
adv = amplify(3.0,adv)
withadv = fallback(track_sensitive=false, [adv,s])
#withadv = smooth_add(delay=0.0, p=0.0, normal=s, special=adv)
#withadv= merge_tracks(withadv)
output.icecast(%mp3, host=STREAM_SERVER_URI,
  port=STREAM_SERVER_PORT, password=STREAM_SERVER_PASS,
  mount=MOUNT, description=SOURCE_ID,
  genre=STREAM_SERVER_URL,
  name=STREAM_ID, url=MOUNT, fallible=true, mksafe(withadv))
streams_info = [("aac_128",(128000,"mp4a.40.29","ts"))]
output.file.hls(MOUNTHLS, streams_info=streams_info, [("aac_128", %ffmpeg(format="mpegts", ar=44100, channels=2, b="128k", codec="aac"))], mksafe(withadv))
LOG with 165175.MP3
https://www.dropbox.com/s/d2uhakltgqx2i9t/log-165175.MP3.txt?dl=0
Ha, yes, you need to make sure that you have multiple queues, otherwise that can happen. Where did you get those values from?
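For example, keeping more than one queue of each kind; a sketch with illustrative values (simply deleting these lines and keeping liquidsoap's defaults is the safest option):

# Illustrative values only, not a recommendation from the thread.
set("scheduler.generic_queues", 2)
set("scheduler.fast_queues", 2)
set("scheduler.non_blocking_queues", 2)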
If I understood right, do you mean the following settings?
set("init.concurrent",true)
set("scheduler.generic_queues",1)
set("scheduler.fast_queues",1)
set("scheduler.non_blocking_queues",1)
I checked the documentation. I switched from request.create to queues (managed by my scripts). At the start I had only one queue, main; later I added a second one, adv. The first source (main) plays music constantly and the second (adv) is for on-demand tracks. I also added HLS output.
Do I need to add add_timeout to optimize the scheduled tasks, since the adv queue is almost always empty, and increase the number of queues? (see the sketch after this message)
Anyway, thank you for your response and time. Regards, Rastislav
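On the add_timeout question above: an alternative to an external poller is letting liquidsoap pull the next track itself. A minimal sketch against the 1.4 API, where ./next_media.sh is a hypothetical stand-in for whatever script prints the next URI:

# Sketch only: feed the source by asking an external script for the next URI.
# "./next_media.sh" is a hypothetical placeholder, not from the original post.
def next_track() =
  uri = list.hd(default="", get_process_lines("./next_media.sh #{STREAM_ID}"))
  request.create(uri)
end
s = request.dynamic(id="main", next_track)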
> Ha, yes, you need to make sure that you have multiple queues, otherwise that can happen. Where did you get those values from?
I tried to get the queue size from main.queue instead of main.primary_queue. I fixed it and increased the max number of rids in my queue. I was confused because my main.primary_queue contained the rids 2 7 3, and I assumed that was right since those are rids/tracks. I will watch the result; if it is positive, I will close this issue.
I removed the settings set("init.concurrent",true), set("scheduler.generic_queues",1), set("scheduler.fast_queues",1) and set("scheduler.non_blocking_queues",1). I don't know if I need them or if it is better to use the defaults.
#!/bin/bash
# PROTO CALL
Q_MAX=5 # I INCREASED THE MAX NUMBER OF TRACKS IN THE QUEUE
USOCK=$1
STREAM_NEXT_MEDIA_SCRIPT=$2
STREAM_ID=$3
echo "with parameters: sock $1 script $2 streamid $3"

# Push tracks into the main queue until it holds Q_MAX entries.
function PushInMain {
echo " pushed in Q $1 in socket $2 "
for (( i = $1; i < Q_MAX; i++ )); do
NEXTSONG=$(eval $STREAM_NEXT_MEDIA_SCRIPT $STREAM_ID)
echo -e " !!!!!!! $NEXTSONG \n"
echo -e "main.push $NEXTSONG\nexit\n" | nc -U -w 5 $2
echo -e " !!!!!!!!!\n"
done
}

# Count the numeric rids currently in the queue, then top it up.
# I FIXED THE QUEUE SIZE: read it from main.primary_queue, not main.queue.
function GetMainQ {
q=( $(echo -e "main.primary_queue\nexit\n" | nc -U -w 5 $1 2>/dev/null) )
i=0
for element in "${q[@]}"
do
if [ ! -z "${element##*[!0-9]*}" ] ; then
#printf "$element_"
((i++))
fi
done
PushInMain $i $USOCK
}

GetMainQ $USOCK
I am not sure I am following the issue exactly, but I have just committed 6832f011b9da2ac241b6b154e5b4034d5d07ccef, where we add a native parameter to use the native (= in Liquidsoap) implementation of queues. It might help here because it behaves more evenly (no primary/secondary queues anymore).
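Presumably the flag is passed to the queue operator; a guess at the usage, since the exact signature isn't quoted in this thread:

# Assumption: native is an argument of the queue operator added in that commit.
s = request.queue(id="main", native=true)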
It's always better to use default values. @smimram, the issue here is with the scheduler queues, though I'm happy to see the request queues being cleaned up.
I am using the docker image version 1.4.4 (FROM savonet/liquidsoap:v1.4.4) and a unix socket to manage the queue; see the console grab in this post.
The main id is my general source. My script gets the size of main.primary_queue and, if there are fewer than 5 files, adds more. I don't know exactly how liquidsoap consumes/manages memory, so I decided to use queues managed via the unix socket, with at most 3-5 files in the "processing" queue. My mistake was reading the size from main.queue: it holds more than 5 elements, so my script never pushed a new track into the queue. I wrote the queue controller in bash because it is native on the server and good for prototyping; maybe I will use golang later to wrap it with some API.
THIS WAS MY BUG:
function GetMainQ {
q=( `echo -e "main.queue\nexit\n" | nc -U -w 5 $1 2>/dev/null` )
The relevant server commands being:
| main.primary_queue
| main.push
console grab
root@3c8c1e2b6592:/opt/stream-producer# nc -U sockets/BOXPA1:xmx_19
help
Available commands:
| 19.autostart
| 19.metadata
| 19.remaining
| 19.skip
| 19.start
| 19.status
| 19.stop
| adv.consider
Type "help
I fixed my script and the streams are working correctly, HLS + Icecast. Finally! Thank you for your support.
This evening I found the same problem (silence) on a few streams. The main source died. Again it was responding on the socket connection with status on air, and all variables were telling me everything is correct. I tried skipping a track and pushing a new track, but with the same result. The source kept producing silence and nothing changed in the variables. Still the same song frozen on_air.
When I pushed a track into the second source / second queue, the stream started to play the pushed track. After the track finished, the stream was producing silence again.
I REMOVED THE FOLLOWING LINES: s = strip_blank(s) and s = amplify(3.0,s). I also removed the withadv source variable and am using only s and adv.
I started 2 dockers on 2 different hosts with different numbers of streams, 23 and 61. After 14 hrs all streams were playing correctly. I am going to make additional tests. So maybe strip_blank or amplify causes the source to produce silence.
s = request.queue(id="main")
s = map_metadata(add_meta1,s)
s = map_metadata(add_meta2,s)
s = map_metadata(add_meta3,s)
s = on_metadata(store_meta,s)
#s = strip_blank(s)
#s = amplify(3.0,s)
s = drop_metadata(s)
s = cue_cut(s)
s = crossfade(s)
s = mksafe(s)
I can confirm the problem is caused by s = strip_blank(s)
docker liquidsoap 1.4.4
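If blank detection is still wanted, loosening its parameters may be an alternative to dropping it entirely; a sketch using the 1.4 strip_blank arguments, with guessed values:

# Sketch, values are guesses: tolerate longer blanks and use a lower
# power threshold before the source is treated as blank and stripped.
s = strip_blank(max_blank=30., threshold=-50., s)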
Thanks for investigating.
Hello,
My stream started to produce silence. Not only this particular stream, sometimes also a different one. An additional 20 streams are working correctly. I can see `[2021/04/08 12:16:24] Currently on air` in the log. HLS fragments are still being produced (but containing silence).
liquidsoap 1.4.4 (docker)
DATA COLLECTED Sat 10 Apr 2021 09:27:35 AM CEST
I pushed another track manually but the queue stayed in the same state. I tried skip and autostart, still the same result.
Liquidsoap script
Do you have some idea what I am doing wrong? Maybe my set(...) settings?
Anyway, thank you. Regards, Rastislav