CasparCG / server

CasparCG Server is Windows and Linux software for playing out professional graphics, audio and video to multiple outputs. It has been in 24/7 broadcast production since 2006. Ready-to-use downloads are available under the Releases tab at https://casparcg.com.
GNU General Public License v3.0

Ubuntu Linux desktop environment / window / loginmanager #914

Open walterav1984 opened 6 years ago

walterav1984 commented 6 years ago

Following @dotarmin's suggestion in https://github.com/CasparCG/server/issues/913#issuecomment-371120449, this is a placeholder for discussing which desktop environment / window manager / login manager offers better performance, stability and ease of debugging: bloated, unstable, GPU-dependent desktop environments versus server builds with Xorg, Openbox and LightDM. This applies both to building and to running casparcg-server.

The reason is this comment from @Julusian (https://github.com/CasparCG/server/issues/877#issuecomment-368837920) and my last 10 years of "personal" Linux desktop experience:

> and I'm pretty sure that Unity was using 15-20% of the GTX960 just idle. I ended up swapping Unity out for Openbox, which cut that down to <5%. (#877 (comment))

This example may be mostly about GPU usage, but installing "openbox" on a default Ubuntu Desktop, logging out of the current desktop environment and logging back in with "openbox" (especially after a reboot) also gives a much lower memory and CPU footprint. Using an Ubuntu Server install with just Xorg and Openbox on top reduces the footprint even further, but needs more setting up.
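One rough way to put numbers on this is to sum the resident memory of your user's processes once in a Unity/GNOME session and once in an Openbox session. The helper below is only an illustration (the function name and approach are mine, not from this thread), and it ignores kernel/GPU memory entirely:

```shell
# Rough sketch: total resident memory (MiB) of the current user's
# processes. Run once per session type and compare the idle figures.
session_rss_mib() {
    ps -u "$(id -un)" -o rss= | awk '{ sum += $1 } END { printf "%d\n", sum / 1024 }'
}

session_rss_mib
```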

Another example: there are still graphics-related reasons why some PCs/notebooks (especially those with integrated GPUs) won't even boot the Ubuntu Desktop installer through to the desktop, because the GPU-dependent Unity/GNOME 3 desktop freezes or crashes, while the GNOME 2 (MATE) or LXDE Ubuntu flavour of the same release installs fine. Although Ubuntu made the desktop GPU-dependent more than 5 years ago, starting with 11.10 and Unity, this is still a show-stopper on some machines in 2018.

Choosing an official Ubuntu flavour should be driven by technical reasons rather than personal preference; we should focus on the technical side, index the useful differences, and keep diversity as low as possible to ease debugging.

https://www.ubuntu.com/download/flavours

Eventually these desktop-environment / window-manager challenges will be of no concern once the dependencies on Xorg are gone and the Ubuntu Server edition/flavour fits the minimum needs?

dotarmin commented 6 years ago

Referencing issue #897 as "good to know".

walterav1984 commented 6 years ago

> Referencing issue #897 as "good to know".

Indeed, that may well be the final destination, but before then we need an environment that most of us can run and administer.

gizahNL commented 5 years ago

We use LXDE together with LXDM

Because we also use IPMI (which has a dedicated onboard graphics card) and we prefer terminal access, we hacked together deploy tooling that keeps the IPMI console and keyboard on seat 0, creates an additional seat 1, moves the Caspar graphics card to seat 1, binds it in xorg.conf, and then starts Xorg only on that seat with autologin enabled, starting Caspar automatically on boot.

walterav1984 commented 5 years ago

Sounds interesting for an older IPMI-based HP DL380 G5.

@gizahNL would you share your "xorg.conf" and "lxdm.conf"?

Does LXDE run on its own Xorg on the IPMI graphics side?

Do you use the LXDM login for the CasparCG server, or run a single Xorg session without a window manager?

gizahNL commented 5 years ago

@walterav1984 I'll share the relevant config files on Monday when I get into the office.

What we basically do is create some udev rules to move the dedicated graphics card into seat 1 and tell LXDM to run on it, binding Xorg to that graphics card. LXDM is configured to auto-login our playout user, and we start Caspar in the Xorg session. The Xorg session runs headless, without input devices or screens attached. That leaves the IPMI graphics card on seat 0 (the default seat), and Linux just spawns the normal ttys on it. The only "issue" is that Xorg switches to tty7 by default, so if we want to use the terminal we need to switch back (which, here, does not interrupt the X session as it normally would).

The only reason this seems to work is that, unlike LightDM, LXDM is not multiseat-aware and allows a hardcoded Xorg command line in its config. We tried LightDM, but IIRC it tries to spawn on all seats and did not autologin on seat 1.

wifinityes commented 5 years ago

Hi @gizahNL,

Could you share your Xorg and LXDM config? I'm trying to run CasparCG in Proxmox; do you think a similar setup to yours would be possible?

Thank you very much for sharing.

gizahNL commented 5 years ago

@wifinityes sure! Here is what is currently in use. Mind you, these are Salt (orchestration tooling) files, so there is some Jinja templating here and there.

What it does is look for an NVIDIA GPU, push it into seat 1 and run Xorg bound to that GPU.

Xorg.conf:

{% set nvidia_pci_hex_id = salt['cmd.shell']('lspci | grep VGA | grep NVIDIA | awk -F " " \'{print $1}\'| awk -F . \'{print $1}\'') %}
{% set pci_id_parts = nvidia_pci_hex_id.split(':') %}
{% set busid = pci_id_parts[0]|int(base=16)|string+":"+pci_id_parts[1]|int(base=16)|string %}

Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0" 0 0
    Option         "Seat" "seat-1"
    Option         "SingleCard" "on"
    Option         "StandbyTime" "0"
    Option         "SuspendTime" "0"
    Option         "OffTime" "0"
EndSection

Section "Files"
EndSection

Section "ServerFlags"
        Option "AutoAddGPU" "off"
EndSection

Section "Monitor"
     Identifier     "Monitor0"
     VendorName     "Unknown"
     ModelName      "Unknown"
     HorizSync       28.0 - 33.0
     VertRefresh     43.0 - 72.0
     Option         "DPMS"
     Option         "UseEDID" "FALSE"
EndSection

Section "Device"
     Identifier     "Device0"
     Driver         "nvidia"
     VendorName     "NVIDIA Corporation"
     BusId      "{{ busid }}"
     Option     "AllowEmptyInitialConfiguration"
     MatchSeat       "seat-1"
EndSection

Section "Screen"
     Identifier     "Screen0"
     Device         "Device0"
     Monitor        "Monitor0"
     DefaultDepth    24
     Option         "ConnectedMonitor" "CRT"
     Option         "IgnoreEDIDChecksum" "CRT"
     SubSection     "Display"
         Virtual     1440 900
         Depth       24
     EndSubSection
EndSection

udev rules file:

{% set nvidia_pci_hex_id = salt['cmd.shell']('lspci | grep VGA | grep NVIDIA | awk -F " " \'{print $1}\'| awk -F . \'{print $1}\'') %}
{% set pci_id_parts = nvidia_pci_hex_id.split(':') %}
{% set busid = pci_id_parts[0]|int(base=16)|string+":"+pci_id_parts[1]|int(base=16)|string %}
SUBSYSTEM=="drm", KERNEL=="card[0-9]*", ATTRS{vendor}=="0x10de", DRIVERS=="nvidia", TAG+="master-of-seat"
TAG=="seat", ENV{ID_FOR_SEAT}=="drm-pci-0000_{{ pci_id_parts[0] }}_{{ pci_id_parts[1] }}_0", ENV{ID_SEAT}="seat-1"
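As an aside for readers not using Salt: the Jinja at the top of these files is doing a hex-to-decimal conversion, because lspci prints the PCI bus/device numbers in hex while Xorg's BusId wants them in decimal. A plain-shell equivalent might look like the sketch below (the function name and usage are illustrative, not part of gizahNL's tooling):

```shell
# Convert lspci's hex "bus:device" (e.g. "0a:00", taken from "0a:00.0")
# into the decimal form used for BusId in the Xorg.conf above.
to_xorg_busid() {
    hex_bus=${1%%:*}
    hex_dev=${1##*:}
    printf '%d:%d\n' "0x$hex_bus" "0x$hex_dev"
}

# Example: a card lspci reports at 0a:00.0
to_xorg_busid "0a:00"   # prints "10:0"
```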

lxdm.conf:

[base]
## uncomment and set autologin username to enable autologin
 autologin=playout

## uncomment and set timeout to enable timeout autologin,
## the value should >=5
# timeout=10

## default session or desktop used when no systemwide config
session=/usr/bin/startlxde

## uncomment and set to set numlock on your keyboard
# numlock=0

## set this if you don't want to put xauth file at ~/.Xauthority
# xauth_path=/tmp

# not ask password for users who have empty password
# skip_password=1

## greeter used to welcome the user
greeter=/usr/lib/lxdm/lxdm-greeter-gtk

[server]
## arg used to start xserver, not fully function
# arg=/usr/bin/X -background vt1
# uncomment this if you really want xserver listen to tcp
# tcp_listen=1
arg=/usr/bin/X -seat seat-1 -config /etc/X11/xorg-seat1.conf

[display]
## gtk theme used by greeter
gtk_theme=Clearlooks

## background of the greeter
#bg=/usr/share/backgrounds/default.png
bg=/usr/share/images/desktop-base/login-background.svg

## if show bottom pane
bottom_pane=1

## if show language select control
lang=1

## if show keyboard layout select control
keyboard=0

## the theme of greeter
theme=Industrial

[input]

[userlist]
## if disable the user list control at greeter
disable=0

## whitelist user
white=

## blacklist
black=

LRTNZ commented 4 years ago

Hi @gizahNL

I was wondering if you could explain how to use those files once they have been created? I get how the different parts are configured, but I am wondering how they are executed, since you mentioned you are using Saltstack? Any help would be greatly appreciated.

gizahNL commented 4 years ago

@LRTNZ these are just 'default' Linux configuration files (a udev rule, an Xorg config and an LXDM config), so they are not executed in any way. The magic is that we move our NVIDIA GPU onto seat 1 via the udev rule, leaving the IPMI GPU in seat 0 (which is the seat that has the terminals attached, as well as the keyboard). The LXDM config modifies the Xorg start command to make sure it starts on seat-1, and the Xorg config makes sure it only uses our NVIDIA card.

LRTNZ commented 4 years ago

@gizahNL Hey there! Thanks for the information on how it is done. I ended up coming up with my own way of getting it all to work, but thanks for the pointer to LXDM: that's what I am using to kickstart the entire thing.

What I am doing is using LXDM to automatically log in our Caspar user, then using the PostLogin script to fire off a chain of scripts. The first one checks that the flag file I create is not already present, to avoid starting multiple instances automatically. If the file is not there, it creates it and starts a detached screen, passing a couple of scripts as the bash arguments. The first script fires up the media scanner as a background job, saves its PID to a file, and waits 10 seconds (to allow scanning of most of the library), then starts the server task. When that script finishes, which happens when the Caspar server task ends, the second script passed as an argument runs automatically: it ends the scanner instance using the PID saved to the file, and then deletes the flag file.

gizahNL commented 4 years ago

Why not create (a) systemd service(s) that depends on lxdm being started? That feels a lot simpler to me ;) (that's how we do it ;) )

gizahNL commented 4 years ago

[Unit]
Description=CasparCG Server
After=DesktopVideoHelper.service
After=graphical.target  lxdm.service network-online.target
Requires=lxdm.service

[Service]
User=playout
Type=simple
Restart=always
LimitRTPRIO=99
Environment="DISPLAY=:0"
Environment="LD_LIBRARY_PATH=/opt/casparcg2-3/lib"
ExecStart=/opt/casparcg2-3/bin/casparcg /etc/casparcg/casparcg.config

[Install]
WantedBy=default.target

gizahNL commented 4 years ago

(When starting this way, diag is broken unless you patch Caspar or place the font file in /.)

Julusian commented 4 years ago

@gizahNL Does the font work if you add WorkingDirectory=/opt/casparcg2-3 (might need /bin on the end too) above the ExecStart line?

gizahNL commented 4 years ago

I tried that on a hacked 2.1 we have (we hacked out the bits that wait for terminal input, which make it crash under systemd; 2.2/2.3 'just works'). However, IIRC it then tries to use all configured paths from the config file relative to the working directory, thus breaking our setup.

LRTNZ commented 4 years ago

> Why not create (a) systemd service(s) that depends on lxdm being started? That feels a lot simpler to me ;) (that's how we do it ;) )

Because A) I have never done anything like that and wouldn't have been confident enough to; B) LXDM has the post-login script hook built in: the files sit right next to the LXDM config file, are dead easy to tap into, and this is the way various places seem to recommend running scripts after startup on LXDM; C) by putting Caspar into a screen it is easy to log in remotely to see what it is doing, enter commands and the like; and D) I didn't think of it šŸ˜‚

LRTNZ commented 4 years ago

[Unit]
Description=CasparCG Server
After=DesktopVideoHelper.service
After=graphical.target  lxdm.service network-online.target
Requires=lxdm.service

[Service]
User=playout
Type=simple
Restart=always
LimitRTPRIO=99
Environment="DISPLAY=:0"
Environment="LD_LIBRARY_PATH=/opt/casparcg2-3/lib"
ExecStart=/opt/casparcg2-3/bin/casparcg /etc/casparcg/casparcg.config

[Install]
WantedBy=default.target

This vs:

PostLogin: sh home/caspar/startup/startup-wrapper.sh

Edit: I was going to add the scripts here, but that will have to wait until later today. Why does code-related stuff have to be so horrid to deal with on mobile? Anyway, I will add the bash scripts I created in a comment later today, when I can do it with some sanity on my PC.

I will also add that it was a very long day of trying to start X11 without a desktop environment from scripts, to no avail; getty autologin was causing havoc, etc. So getting it to work the way I will show in the scripts: yeah, it was a case of "if it works, don't touch" šŸ˜.

LRTNZ commented 4 years ago

@gizahNL @Julusian these are the scripts I setup to startup Caspar automatically:

PostLogin - The LXDM script that automatically gets called after login:

#!/bin/sh
cd //
sh home/caspar/startup/startup-wrapper.sh

startup-wrapper - The script that handles checking for the flag file, and starting caspar in the detached screen:

#!/bin/bash -e

logger "Loaded Caspar Startup Wrapper"
echo $HOME
if [ -e "$HOME/.runCaspar" ]
then
    logger "Caspar already running"
    beep -f 200 -l 250
else
    logger "Attempting to load caspar startup"
    beep -f 2500 -l 100
    touch "$HOME/.runCaspar"
    cd //
    cd home/caspar/startup
    screen -d -m -S caspar bash -c './start-caspar.sh; ./end_server.sh; exec bash' 
fi

start-caspar - The first script that the new screen executes when it is opened:

#!/bin/bash -e

cd $HOME
clear >$(tty)
cd caspar-server
echo 'Starting Scanner'
./scanner &
echo "$!" > scan_pid
sleep 10
echo 'Starting Caspar'
beep -f 2500
./run.sh

The sleep of 10 seconds allows the scanner to finish its work on a reasonably sized library before the server process is started.
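Instead of a fixed sleep, one could poll until the scanner starts answering on its TCP port. The sketch below is my own suggestion, not part of LRTNZ's scripts: it assumes bash (for /dev/tcp) and that you know which port your media scanner listens on; 8000 in the commented usage is an assumption, so check your scanner config.

```shell
#!/bin/bash
# Poll a local TCP port until it accepts a connection, or give up
# after a number of one-second tries. Returns 0 on success, 1 on timeout.
wait_for_port() {
    port="$1"
    tries="${2:-30}"
    while [ "$tries" -gt 0 ]; do
        if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
            return 0
        fi
        tries=$((tries - 1))
        sleep 1
    done
    return 1
}

# Possible use in start-caspar, replacing "sleep 10" (port assumed):
# wait_for_port 8000 30 && echo 'Starting Caspar'
```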

end_server - The script that the screen will automatically call after the first script ends, which occurs when the Caspar server ends:

#!/bin/bash
echo "End server Called"
cd ../caspar-server/
echo Killing "$(<scan_pid)"
kill -9 "$(<scan_pid)"
cd ..
rm .runCaspar
echo "Caspar Server Process Tidied Up"

While it may not be as pretty as using a systemd service, the nice thing is that the Caspar process runs in a detached screen, meaning it is quite easy to jump in via SSH (screen -r caspar) to see what it is doing live, enter commands directly, and close it down nicely. It also works with Caspar straight out of the box, so we didn't need to modify it to remove the terminal-input code in order for it to function. It also never had any problems with directories or where things were being called from, as everything is technically started by the caspar user on the server.

I will also add that the beeps are left over from when I was testing the scripts; they remained useful, so I kept them. Since I wanted to see how everything behaved from startup, I needed a way to know what was going on with the scripts firing in the background, and beep was the easiest way to achieve that.

gizahNL commented 4 years ago

We've also played with scripts like that; the biggest turn-off is that if Caspar happens to crash, the scripts won't bring it back up. This has been an issue for us because Caspar 2.1 chokes on VFR files that manage to slip through ;) Our setup is fully automated, though: once Caspar is set up and our command-and-control software installed, no hands touch the installation.