Cambridge105 / liquidsoap-playout-machine

Liquidsoap-based playout system for pre-recorded programmes

This repo builds and runs an on-demand playout machine for pre-recorded programmes. The code is designed to spin up the machine only when needed, then destroy it at the end of a period of recorded programmes.

Getting started

As written, the playout machine is designed to run in AWS EC2, with the machine starting and stopping whenever a recorded programme is required. Most of the code should also work on an on-site server (I've deliberately avoided AWS-only services such as DynamoDB), but some rewriting will be necessary. To run this machine, you will need to configure the following alongside the scripts in this repo (maybe one day I'll Terraform this, but for now we've used the AWS console...):

Architecture diagram

Machine setup

The following tasks are performed by userdata.txt, which should be part of the Launch Template. It runs automatically when the EC2 instance is created.

  1. Installs required packages, including by running opamstart.sh (See the Code Structure section, below)
  2. Clones this repo
  3. Gets the credentials and details of the studio stream from Parameter Store and writes them into the config.py file
  4. Sets up cron jobs, which will, each hour:
    • copy all today's files from the S3 bucket
    • run the join30MinFiles.py and checkFilePresent.py scripts (See the Code Structure section, below)
    • then run the makeSchedule.py script (See the Code Structure section, below)
  5. Ensures the machine's timezone is set to UK local time, respecting any DST offset
  6. Adds all of Rob's and my public keys from GitHub, in addition to the key specified in the Launch Template, so either of us can access the machine. (Anyone else using this repo will need to add their own public keys instead!)
  7. Updates DNS with the machine's public IP
  8. Gets the credentials to access the schedule Google Calendar from the Parameter Store and writes them to files
  9. Runs parseSchedule.py (See the Code Structure section, below)
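The hourly cron jobs set up in step 4 might look roughly like the following crontab fragment. The bucket name, paths, and exact minute offsets are illustrative assumptions, not taken from userdata.txt (though the source does state the jobs begin at 12 minutes to the hour):

```
# Illustrative crontab entries -- bucket name and paths are assumptions
# Copy today's files from the S3 bucket shortly before the hour
48 * * * * aws s3 sync s3://example-playout-bucket/$(date +\%Y-\%m-\%d)/ /opt/playout/audio/
# Join and verify the files, then build the schedule for the coming hour
50 * * * * cd /opt/playout && python3 join30MinFiles.py && python3 checkFilePresent.py
52 * * * * cd /opt/playout && python3 makeSchedule.py
```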

The Auto-Scaling Group needs to be scheduled to start the instance early enough for all these userdata tasks to complete before the first run of the cron jobs they create. In our case, for a pre-record starting at 14:00, which needs the cron jobs to start at 13:48, we start the instance at 13:25. The instance is stopped 5 minutes after the last pre-record finishes.
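The lead time works out as a simple subtraction, sketched below with our timings: a 12-minute cron lead (cron fires at 13:48 for a 14:00 programme) plus a 23-minute boot allowance for the userdata tasks (instance starts at 13:25). The function name is just for illustration:

```python
from datetime import datetime, timedelta

def asg_start_time(first_programme: datetime,
                   cron_lead: timedelta = timedelta(minutes=12),
                   boot_allowance: timedelta = timedelta(minutes=23)) -> datetime:
    """When the Auto-Scaling Group should start the instance.

    The cron jobs must fire `cron_lead` before the first programme, and the
    instance needs `boot_allowance` before that to finish the userdata tasks.
    """
    return first_programme - cron_lead - boot_allowance

# A 14:00 pre-record means starting the instance at 13:25
print(asg_start_time(datetime(2024, 6, 1, 14, 0)).strftime("%H:%M"))  # 13:25
```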

Code structure

The code comprises a number of scripts:

Be aware that makeSchedule.py will disconnect from the Icecast server at the end of every programme and then reconnect if there is a programme following, rather than maintaining the connection with silence. This is deliberate: it means we do not need to have all the files in S3 before we start several back-to-back pre-records. In our case, a TOTH sequence and news bulletin run from xx:58:30 to xx:02:00, so we disconnect at xx:58:45 (forced by the maxduration in makeSchedule.py) and then reconnect at xx:00:00. The TOTH sequence has precedence in the streamer's Liquidsoap rules, so this is imperceptible to the listener.
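The effect of the maxduration cap can be sketched as below. This is an assumption about the mechanism, not the actual makeSchedule.py code: a programme starting on the hour is clipped so playout always ends by xx:58:45, leaving the streamer's TOTH sequence to cover the gap until the next programme reconnects at xx:00:00.

```python
from datetime import timedelta

# Assumed cap: a programme starting at xx:00:00 must disconnect by xx:58:45
MAX_DURATION = timedelta(minutes=58, seconds=45)

def playout_duration(file_duration: timedelta) -> timedelta:
    """Clip a programme's playout length to the maxduration cap."""
    return min(file_duration, MAX_DURATION)

print(playout_duration(timedelta(hours=1)))     # clipped: 0:58:45
print(playout_duration(timedelta(minutes=30)))  # unchanged: 0:30:00
```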

Note on timers: the hourly jobs created by userdata.txt are cron jobs, since best-effort timing is acceptable for them. Programmes scheduled by makeSchedule.py use systemd timers with 100 millisecond accuracy.
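A programme timer generated by makeSchedule.py might look roughly like this; the unit name and schedule are illustrative assumptions. It is the `AccuracySec=` setting that gives the tight timing (systemd's default coalescing window is one minute):

```
# programme-1400.timer -- illustrative; real unit names come from makeSchedule.py
[Unit]
Description=Start the 14:00 pre-recorded programme

[Timer]
OnCalendar=*-*-* 14:00:00
AccuracySec=100ms

[Install]
WantedBy=timers.target
```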

IAM role policy

The EC2 instance created by the Launch Template must have an IAM Role attached, with a Policy granting access to the following (obviously restrict the Resources as required):
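Based on the tasks userdata.txt performs (copying from S3, reading Parameter Store, updating DNS), the policy might look something like the fragment below. The bucket name, parameter path, and hosted zone ID are placeholders to replace with your own:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadProgrammeAudio",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-playout-bucket",
        "arn:aws:s3:::example-playout-bucket/*"
      ]
    },
    {
      "Sid": "ReadStreamCredentials",
      "Effect": "Allow",
      "Action": ["ssm:GetParameter", "ssm:GetParameters"],
      "Resource": "arn:aws:ssm:*:*:parameter/playout/*"
    },
    {
      "Sid": "UpdateDns",
      "Effect": "Allow",
      "Action": "route53:ChangeResourceRecordSets",
      "Resource": "arn:aws:route53:::hostedzone/EXAMPLEZONEID"
    }
  ]
}
```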

Costs

AWS costs for running this infrastructure should be very small, but obviously if anyone else uses this code, cost optimisation is their responsibility. The following costs are expected:

My rough back-of-envelope calculation suggests this is a maximum of about $6/month in our use-case.