neutrinolabs / xrdp

xrdp: an open source RDP server
http://www.xrdp.org/
Apache License 2.0

[RFE] enforce single graphical session on systemd platforms #2491

Open akarl10 opened 1 year ago

akarl10 commented 1 year ago

Since https://github.com/systemd/systemd/issues/24932 was rejected by @poettering himself, I think the only long-term solution is for every display session manager to implement a safeguard that prevents "launching" multiple graphical sessions and displays a message telling the user to log off the offending session (perhaps also offering an option to force-terminate it).

Multiple graphical sessions per user are simply not possible with systemd. Whether or not this is intentional, the design is broken (graphical-session.target is a user unit, and on top of that DBUS_SESSION_BUS_ADDRESS is assumed to be exclusive to one session, while systemd makes it the same for every session of a user). Either way, it is not something we can fix.

If someone has a better idea of how to treat this issue, please propose it, but I think something has to be done, given the number of bug reports about the display staying black.

some findings:

loginctl show-user $UID

displays the active sessions. Locally created graphical sessions populate the Display property (xrdp does not do that, but maybe that is only possible when the session is local).
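To make the check concrete, here is a minimal sketch of classifying the Type value that `loginctl show-session <id> -p Type --value` reports (`is_graphical` is a hypothetical helper, not part of xrdp or systemd):

```shell
#!/bin/sh
# Sketch: decide whether a logind session Type is graphical.
# is_graphical is a hypothetical helper for illustration only.
is_graphical() {
    case "$1" in
        x11|wayland|mir) return 0 ;;   # graphical session types
        *) return 1 ;;                 # tty, unspecified, ...
    esac
}

is_graphical wayland && echo "graphical"
is_graphical tty || echo "not graphical"
```

A real safeguard would iterate over the IDs from `loginctl show-user "$(id -u)" -p Sessions --value` and refuse to start if any session passes this check.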

A similar thing should be done in sddm/gdm/... (out of scope for this report)

matt335672 commented 1 year ago

I agree. I think your systemd RFE illustrates that this isn't going to change for systemd-based systems.

The difficulty is where to put something like this. We can easily add something to startwm.sh which implements what you suggest above. That handles a console login followed by an xrdp login, but not the other way round.

A script in /etc/X11/xinit/xinitrc.d or /usr/etc/X11/xinit/xinitrc.d might be adequate for many systems. Personally, I'm open to suggestions.

akarl10 commented 1 year ago

Here's a script that should work.

I put it in /etc/X11/Xsession.d/10enforce-single-graphical-session

but of course that is Debian/Ubuntu specific. Also, this only works for X11 sessions: xrdp can detect pre-existing Wayland sessions this way, but the other way around probably does not work.

Good news: it seems all you need to do is stop graphical-session.target if it is running

if systemctl --quiet --user is-active graphical-session.target; then
  xmessage -button "Logout:1,Terminate running graphical session:2" \
    "Only a single graphical session is allowed.
If you terminate the current graphical session, all unsaved data in that session will be lost."
  action=$?
  if [ $action -eq 2 ]; then
    systemctl --user stop graphical-session.target
    sleep 1
  else
    exit 0
  fi
fi

matt335672 commented 1 year ago

That's a great contribution - thanks. I like the use of graphical-session.target.

I've been looking for a hook to get into Wayland session startup, and so far I've had no luck.

There seems to be a lack of documentation on how exactly a Wayland session is started. I found this presentation, this link and there's a bit of info on the gnome-session manpage.

A bit of a hacky workaround (for GDM) is to use this to disable Wayland when xrdp starts:-

/usr/libexec/gdm-runtime-config set daemon WaylandEnable false
akarl10 commented 1 year ago

I think Wayland is started directly by startplasma-wayland or gnome-session. On KDE, kwin_wayland is the real "X11 replacement", while in GNOME, gnome-session itself plays that role (there is no separate mutter process; mutter is, afaik, built in).

in sddm this script is called: https://github.com/sddm/sddm/blob/develop/data/scripts/wayland-session.

The way it is called stays the same: the parameter is the "desktop" - startplasma-wayland for Plasma, probably gnome-session for GNOME.

At least startplasma seems to check whether the current D-Bus session has an org.freedesktop.systemd1 service and whether systemd startup is explicitly disabled. If systemd is used, plasma-workspace-wayland.target is activated, which in turn pulls up graphical-session.target. Adding a check to see if graphical-session.target is already running should be fairly simple for them (the same goes for plasma-workspace-x11.target) and would IMHO make a lot of sense. As far as I found on gitlab.gnome.org, GNOME starts org.gnome.Shell@[wayland|x11]. I think putting a safeguard there would not be too difficult for them either, but I don't use GNOME, so I will not look into that.

akarl10 commented 1 year ago

Good news for those who need multiple concurrent sessions for a single user on a systemd-based system: I found out how to create a separate systemd --user instance per session. This way plasma or gnome can also launch its session with systemd multiple times. Here are two scripts that must be sourced before and after calling Xsession.

prepare-multisession.sh

# workarounds for enabling multiple sessions when using systemd
# mainly the issue is that a desktop session needs a dedicated dbus,
# but systemd creates a single one for every logged on user.
# using systemd-run it is possible to create a separate systemd-user instance
# with dedicated XDG_RUNTIME_DIR and DBUS
# script by mwsys.mine.bz

if [ "$XDG_RUNTIME_DIR" = "/run/user/$(id -u)" ]; then
  SESSION_RUNTIME_DIR=/run/user/$(id -u)/xrdp-$(echo "$DISPLAY" | sed 's/:/display-/g')
  mkdir -p "$SESSION_RUNTIME_DIR"
  chmod 0700 "$SESSION_RUNTIME_DIR"
  # optional: mask specific units
  mkdir -p "$SESSION_RUNTIME_DIR/systemd/user.control/"
  ln -s /dev/null "$SESSION_RUNTIME_DIR/systemd/user.control/pipewire-pulseaudio.service"
  ln -s /dev/null "$SESSION_RUNTIME_DIR/systemd/user.control/pipewire-pulseaudio.socket"
  # start the systemd --user instance. This must be done using systemd-run
  # to get a proper scope. It mimics user@.service.
  systemd-run --user -u "systemd-xrdp-$(echo "$DISPLAY" | sed 's/:/display-/g')" \
    -E "XDG_RUNTIME_DIR=$SESSION_RUNTIME_DIR" \
    -E "DBUS_SESSION_BUS_ADDRESS=unix:path=$SESSION_RUNTIME_DIR/bus" \
    systemd --user
  export DBUS_SESSION_BUS_ADDRESS="unix:path=$SESSION_RUNTIME_DIR/bus"
  export XDG_RUNTIME_DIR=$SESSION_RUNTIME_DIR
  if [ -n "$XDG_SESSION_ID" ]; then
    # used by reconnect.sh
    echo "$XDG_SESSION_ID" > "$SESSION_RUNTIME_DIR/login-session-id"
  fi
  # wait for the session bus; it will be launched by systemd
  while [ ! -e "$XDG_RUNTIME_DIR/bus" ]; do sleep 0.3; done

  trap ". /etc/xrdp/cleanup-multisession.sh" TERM INT

  # explicitly start pulseaudio.socket ("start" rather than "enable",
  # so it isn't enabled user-wide)
  test -e /lib/systemd/user/pulseaudio.socket && systemctl --user start pulseaudio.socket
  SESSION_RUNTIME_DIR=
fi

cleanup-multisession.sh

if [ "$XDG_RUNTIME_DIR" = "/run/user/$(id -u)/xrdp-$(echo "$DISPLAY" | sed 's/:/display-/g')" ]; then
  # stop the per-session systemd --user instance from the context of
  # the default per-user instance
  XDG_RUNTIME_DIR="/run/user/$(id -u)" \
    DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/$(id -u)/bus" \
    systemctl --user stop "systemd-xrdp-$(echo "$DISPLAY" | sed 's/:/display-/g')"
  rm -rf "$XDG_RUNTIME_DIR"
fi
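Both scripts derive the per-session directory name from $DISPLAY by replacing ':' with 'display-'. A quick illustration of that transformation:

```shell
#!/bin/sh
# The runtime-dir suffix used above: ":10.0" becomes "display-10.0",
# so the session directory ends up as e.g. /run/user/1000/xrdp-display-10.0.
DISPLAY=:10.0
suffix=$(echo "$DISPLAY" | sed 's/:/display-/g')
echo "xrdp-$suffix"   # → xrdp-display-10.0
```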
matt335672 commented 1 year ago

Looks good.

When I played with this quite a while ago, I had significant problems with the GNOME lock screen, which is implemented by the display manager gdm3 rather than by a separate screen lock program. Comms for that go over the system D-Bus rather than the session one. There might be other interactions with the system as a whole (hot-plugging USB sticks?) which do interesting things.

I'll have a play with your scripts when I get a moment - they're certainly a step forward. Thanks.

akarl10 commented 1 year ago

systemd-run --user -E XDG_RUNTIME_DIR=$SESSION_RUNTIME_DIR -E DBUS_SESSION_BUS_ADDRESS=unix:path="$SESSION_RUNTIME_DIR"/bus systemd --user was actually my personal breakthrough. With this I have "full" systemd support in the session: logind knows that the session is remote, dbus-update-activation-environment goes to the right session, and snap should work too (not tested yet, since I usually apt purge it).

The only thing I have to check is how sesman terminates sessions "externally". Maybe I need to handle that signal (2 or 4) and clean up the session's systemd instance.

matt335672 commented 1 year ago

Thanks.

The current flow is as follows, but I expect you know most of this already.

sesman currently forks when it runs a session. The fork starts the X server, session manager and chansrv. It then waits for the session manager to finish, kills the X server and chansrv, and exits. sesman picks up the SIGCHLD and cleans up its own data.

The sesman fork needs to run as root so it can call the PAM session cleanup stuff properly.

There are some significant problems with the above, mainly #1684 and #800. For what it's worth, I'm currently working on fixing these by changing the way sesman works (see #1961 and this personal wiki page), so there's scope for more flexibility if we need it.

akarl10 commented 1 year ago

@matt335672 I made some tests with Kubuntu: AppArmor might need some modifications. In /etc/apparmor.d/abstractions/dbus-session-strict, add owner @{run}/user/[0-9]*/xrdp-display-*/bus rw,; similarly, in /etc/apparmor.d/abstractions/audio, add owner @{run}/user/*/xrdp-display-*/pulse/ rw, and owner @{run}/user/*/xrdp-display-*/pulse/{native,pid} rwk,

snap works, but usually the snap packagers (most notably the Ubuntu devs themselves) set PULSE_SERVER without checking whether it was already set, and point it at the user-wide daemon. Also, starting the document portal fails because snap hardcodes /run/$UID/doc, but the session uses $XDG_RUNTIME_DIR/doc.
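A hedged sketch of what a packager could do instead: default PULSE_SERVER only when the session didn't already provide one, so a per-session daemon keeps working (the fallback path shown is an assumption for illustration):

```shell
#!/bin/sh
# Only set PULSE_SERVER if it wasn't already set by the session.
# ${var:=default} keeps an existing value and fills in the default
# otherwise. The fallback path is an assumption, not snap's actual one.
: "${XDG_RUNTIME_DIR:=/run/user/$(id -u)}"
: "${PULSE_SERVER:=unix:$XDG_RUNTIME_DIR/pulse/native}"
export PULSE_SERVER
echo "$PULSE_SERVER"
```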

Screen lock and unlock work, even with loginctl lock-session/unlock-session $XDG_SESSION_ID. Since the GNOME people are usually the first to integrate deeply with systemd, I guess it works there too.

matt335672 commented 1 year ago

I've had a big play today with this.

The good news is it seems to be working with GNOME on Ubuntu 22.04.

I've currently got the following script in /etc/xrdp/start_private_systemd_user.sh. The change I've made to the original is to add a service to the systemd --user instance which waits for the session manager to exit, and then kills the systemd --user instance.

# workarounds for enabling multiple sessions when using systemd
# mainly the issue is that a desktop session needs a dedicated dbus,
# but systemd creates a single one for every logged on user.
# using systemd-run it is possible to create a separate systemd-user instance
# with dedicated XDG_RUNTIME_DIR and DBUS
# script by mwsys.mine.bz

if [ -z "$DISPLAY" ]; then
    echo "** Warning - no DISPLAY. Assuming test mode" >&2
    display_num=test
else
    display_num=${DISPLAY##*:}
    display_num=${display_num%.*}
fi
unit_name=xrdp-systemd-user-$display_num

SESSION_RUNTIME_DIR=/run/user/`id -u`/$unit_name
install -dm 0700 $SESSION_RUNTIME_DIR
rm -rf $SESSION_RUNTIME_DIR/systemd/user.control/
mkdir -p $SESSION_RUNTIME_DIR/systemd/user.control/

# optional: mask specific units
ln -s /dev/null $SESSION_RUNTIME_DIR/systemd/user.control/pipewire-pulseaudio.service
ln -s /dev/null $SESSION_RUNTIME_DIR/systemd/user.control/pipewire-pulseaudio.socket

# Create a unit to wait for this PID to finish
{
    echo "[Unit]"
    echo "Description=Wait for XRDP session to finish"
    echo "Requires=default.target"
    echo
    echo "[Service]"
    echo "Type=simple"
    echo "ExecStart=/bin/sh -c 'while /bin/kill -0 $$; do sleep 10; done'"
    echo "ExecStopPost=/usr/bin/systemctl --user exit"
} >$SESSION_RUNTIME_DIR/systemd/user.control/wait-for-xrdp-session.service

# start systemd service. this must be done using systemd-run to get a
# proper scope. This mimics user@.service
#
# Within the systemd --user process we run wait-for-xrdp-session.service.
# That kills the systemd --user instance when this process finishes
systemd-run --user -u $unit_name \
    -E XDG_RUNTIME_DIR=$SESSION_RUNTIME_DIR \
    -E DBUS_SESSION_BUS_ADDRESS=unix:path="$SESSION_RUNTIME_DIR"/bus \
    systemd --user --unit wait-for-xrdp-session.service

# Switch to the new systemd --user instance
export XDG_RUNTIME_DIR=$SESSION_RUNTIME_DIR
export DBUS_SESSION_BUS_ADDRESS="unix:path="$SESSION_RUNTIME_DIR"/bus"

if [ "$XDG_SESSION_ID" != "" ]; then
    # used by reconnect.sh
    echo $XDG_SESSION_ID > $SESSION_RUNTIME_DIR/login-session-id;
fi
#wait for dbus. this will be launched by systemd
while [ ! -e $XDG_RUNTIME_DIR/bus ] ;do sleep 0.3; done

#explicitly start pulseaudio.socket (not enable to not enable this user wide)
test -e /lib/systemd/user/pulseaudio.socket && systemctl --user start pulseaudio.socket

and in /etc/xrdp/startwm.sh at the top:-

# On systemd system?
if [ -x /usr/bin/systemctl -a \
    "$XDG_RUNTIME_DIR" = "/run/user/"`id -u` ]; then
    . /etc/xrdp/start_private_systemd_user.sh
fi
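For reference, the display-number extraction near the top of the script uses plain POSIX parameter expansion rather than sed:

```shell
#!/bin/sh
# How the script derives the display number from $DISPLAY: drop
# everything up to the last ':', then drop any screen suffix after '.'.
DISPLAY=:10.0
display_num=${DISPLAY##*:}    # "10.0"
display_num=${display_num%.*} # "10"
echo "$display_num"   # → 10
```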

It's not a big change, but it might make system integration a bit easier.

As part of my day, I tried getting the startwm.sh script running as part of the systemd --user instance. It seemed like a good idea at the time, but it didn't go well, and I ended up abandoning it. It was just too complicated and difficult to debug.

I need to do a bit more testing, but that's for another day.

akarl10 commented 1 year ago

@matt335672 nice refactoring 👍

I did not know that kill -0 works to check whether a PID is still around. Usually I do that with test -d /proc/PID, but kill -0 is probably better since it does not have to traverse the proc filesystem.
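For illustration, kill -0 sends no signal at all; it only performs the existence/permission check:

```shell
#!/bin/sh
# kill -0 checks that the target PID exists and that we may signal it,
# without delivering any signal. Our own shell PID is always alive.
if kill -0 $$ 2>/dev/null; then
    echo "alive"
fi
# A PID above the kernel's pid_max (almost certainly) doesn't exist.
kill -0 999999999 2>/dev/null || echo "gone"
```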

I would shorten the polling interval, since a logout/login sequence is doable within 10 seconds. Maybe 2-5 seconds would be better?

sddm does not start the x-session-manager as a systemd unit, so I don't think this is the way to go there. Also, the process would not belong to the correct PAM session (systemd units are not part of PAM sessions), so there might be some issues with logind later.

matt335672 commented 1 year ago

I've been playing with creating a new mount namespace with a private /run/user/<uid> before the PAM session starts. The idea is that pam_systemd will then see an empty directory and create a new systemd --user instance.

Sadly the available system calls don't make it possible to share mountpoints in general but have a private /run/user/<uid>. The problem is that it's not possible to mount and set a private propagation type in one call - the mount needs to be created before it can be marked private. This leads to too many possible race conditions with the system mounting or umounting the global /run/user/<uid> directory.

@akarl10 - can I ask you about these sets of lines:-

# optional: mask specific units
ln -s /dev/null $SESSION_RUNTIME_DIR/systemd/user.control/pipewire-pulseaudio.service
ln -s /dev/null $SESSION_RUNTIME_DIR/systemd/user.control/pipewire-pulseaudio.socket

and

if [ "$XDG_SESSION_ID" != "" ]; then
    # used by reconnect.sh
    echo $XDG_SESSION_ID > $SESSION_RUNTIME_DIR/login-session-id;
fi

and

#explicitly start pulseaudio.socket (not enable to not enable this user wide)
test -e /lib/systemd/user/pulseaudio.socket && systemctl --user start pulseaudio.socket

Are these things you use in your own local setup?

akarl10 commented 1 year ago

I've been playing with creating a new mount namespace with a private /run/user/<uid> before the PAM session starts. The idea is that pam_systemd will then see an empty directory and create a new systemd --user instance.

Sadly the available system calls don't make it possible to share mountpoints in general but have a private /run/user/<uid>. The problem is that it's not possible to mount and set a private propagation type in one call - the mount needs to be created before it can be marked private. This leads to too many possible race conditions with the system mounting or umounting the global /run/user/<uid> directory.

Just out of curiosity: would the private mount be created directly by sesman? And what if just a private /run/user were created (without the uid)? There should be no need to see other users in /run/user.

@akarl10 - can I ask you about these sets of lines:-

# optional: mask specific units
ln -s /dev/null $SESSION_RUNTIME_DIR/systemd/user.control/pipewire-pulseaudio.service
ln -s /dev/null $SESSION_RUNTIME_DIR/systemd/user.control/pipewire-pulseaudio.socket

and

if [ "$XDG_SESSION_ID" != "" ]; then
    # used by reconnect.sh
    echo $XDG_SESSION_ID > $SESSION_RUNTIME_DIR/login-session-id;
fi

and

#explicitly start pulseaudio.socket (not enable to not enable this user wide)
test -e /lib/systemd/user/pulseaudio.socket && systemctl --user start pulseaudio.socket

Are these things you use in your own local setup?

Yes, these are from my own local setup. If you are interested, my reconnect.sh looks like this:

#!/bin/sh
# unlock session when reconnecting. works in conjunction with multisession workarounds

if [ -e "/run/user/$(id -u)/xrdp-$(echo "$DISPLAY" | sed 's/:/display-/g')" ]; then
  export XDG_RUNTIME_DIR="/run/user/$(id -u)/xrdp-$(echo "$DISPLAY" | sed 's/:/display-/g')"
  test -e "$XDG_RUNTIME_DIR/login-session-id" && \
    loginctl unlock-session "$(cat "$XDG_RUNTIME_DIR/login-session-id")"
fi

This ensures that I don't have to enter my credentials twice when connecting to an idle connection and locked screen.

I masked pipewire-pulseaudio because otherwise I would get no remote sound, since there is no PipeWire audio driver for xrdp yet.

matt335672 commented 1 year ago

Yes, I'm doing it in sesman.

The problem with the mount API is that the namespace is by default created shared (which is what we want). You can't then create a private mount within it - you have to create a shared mount and make it private. If we do that on /run/user, we wipe it out and then make our own copy private, which breaks all the user sessions in the global namespace. There's MOVE functionality in mount, but that won't support this way of working.

I remember doing something very similar for reconnects on CentOS 7, back in my sysadmin days. It's a good point that reconnectwm.sh needs to know what has happened to XDG_RUNTIME_DIR, and that can be a challenge with what we're up to here.

I'll rework the above (again) to make it a bit more robust, generic and easy to test, with the intention of supporting your reconnectwm.sh functionality. We can then work out a way to get the other stuff in.

matt335672 commented 1 year ago

Here's the updated file. It's getting bigger, so it's an attachment.

systemd_user_context.sh.txt

It's now an executable script that takes a number of sub-commands.

Calling has changed. Some commands generate environment variable settings, and you apply these with eval, a bit like ssh-agent.
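The pattern works like ssh-agent: the helper prints variable assignments on stdout and the caller applies them with eval. A minimal sketch, where emit_context is a hypothetical stand-in for systemd_user_context.sh:

```shell
#!/bin/sh
# Hypothetical stand-in for `systemd_user_context.sh init`: print
# shell assignments on stdout for the caller to eval, ssh-agent style.
emit_context() {
    echo "XDG_RUNTIME_DIR='/run/user/$(id -u)/xrdp-display-10'; export XDG_RUNTIME_DIR;"
}

eval "$(emit_context)"
echo "$XDG_RUNTIME_DIR"
```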

Top of startwm.sh looks like this, plus a stanza for your reconnectwm.sh

# On systemd system?
#
# If so, start a private "systemd --user" instance
if [ -x /usr/bin/systemctl -a "$XDG_RUNTIME_DIR" = "/run/user/"`id -u` ]
then
    eval "`${0%/*}/systemd_user_context.sh init -p $$`"

    # used by reconnect.sh
    if [ -n "$XDG_SESSION_ID" ]; then
        echo $XDG_SESSION_ID > $XDG_RUNTIME_DIR/login-session-id
    fi
fi

I've not tried this, but from your reconnectwm.sh, you should be able to do this:-

    # Connect to systemd --user instance
    eval "`${0%/*}/systemd_user_context.sh get`"

    if [ -f $XDG_RUNTIME_DIR/login-session-id ]; then
        loginctl unlock-session `cat $XDG_RUNTIME_DIR/login-session-id`
    fi

I've also found a potential problem. If I start an XRDP session with no console session active, systemctl --user show-environment gives me a shorter list than it does when a console session is running. It looks like some environment variables from the console session's systemd --user are bleeding over into the private one when it's started. On my system the extras are:-

If you've got any ideas about preventing this, let me know - I'm done for today.

akarl10 commented 1 year ago

Wow, you made a whole tool out of it. Maybe if this gets adopted for other use cases too, systemd --user nesting may get proper support from others, like snap or default AppArmor rules.

hmm, that's unfortunate..

with

unset_parent_env=$(IFS=$'\n'; for a in $(systemctl --user show-environment); do echo -n "-E ${a%%=*}= "; done)
systemd-run --user -u $unit_name $unset_parent_env \
    -E "XDG_RUNTIME_DIR=$session_runtime_dir" \
    -E "DBUS_SESSION_BUS_ADDRESS=unix:path=$session_runtime_dir/bus" \
    systemd --user --unit wait-for-xrdp-session.service

you could create a parameter list that unsets, or at least empties, every "main" systemd --user environment variable. There seems to be no way to fully clear the environment that systemd --user inherited from its own exec.
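The idea above, sketched with a fixed variable list standing in for the live `systemctl --user show-environment` output:

```shell
#!/bin/sh
# Build "-E NAME=" arguments that blank out each inherited variable.
# A fixed two-entry list replaces the live show-environment output.
parent_env="PATH=/usr/bin
LANG=C.UTF-8"
unset_args=$(IFS='
'; for a in $parent_env; do printf '%s' "-E ${a%%=*}= "; done)
echo "$unset_args"   # → -E PATH= -E LANG=
```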

after changing to the new systemd-user with

(IFS=$'\n'; for a in $(env); do [ "${a%%=*}" != "SYSTEMD_EXEC_PID" ] && systemctl --user set-environment "$a"; done)

you could import the current environment. There might be some issues for services that get started with an "empty" environment. Also, printenv reporting an empty value might be bad:

systemd-run --user -E SYSTEMD_EXEC_PID= --wait /bin/sh -c 'printenv'
sh[18664]: SYSTEMD_EXEC_PID=

matt335672 commented 1 year ago

I think we need something packaged up, so that people can retro-fit this to existing v0.9.x installations. I can see this being very useful indeed for enterprise work, where a developer may have a single account they use both on a machine console and remotely from home.

I like what you've done there - you've certainly got a better grasp of systemd than I have. I'll have a play and see what else I can find out.

matt335672 commented 1 year ago

Tool above added to https://github.com/matt335672/nest-systemd-user.

zhenrong-wang commented 1 year ago

Hi there. Here is my approach to restricting users to one session each. It works well on CentOS Stream 9. Take a look at the thread below: https://github.com/neutrinolabs/xrdp/discussions/2319

By configuring the initialization of XRDP, I successfully avoided creating new (black-screen) sessions when a user changes the size and/or color depth from the RDP client - the previous session always gets updated automatically. Of course, there are drawbacks to my approach:

Anyway, this approach is very easy to implement, and it solves the black screen problem without configuring sesman. No extra script is needed.

Hope it helps. Thanks!

issuefiler commented 1 year ago

@zhenrong-wang

https://github.com/neutrinolabs/xrdp/blob/d71ec3fed0d9a3f4acab14a92935c27e8e836abd/xrdp/xrdp.ini.in#L232-L258

Apparently it’s configuration under the [Xvnc] session type. Did you use Xvnc or Xorg (xorgxrdp) when connecting?

zhenrong-wang commented 1 year ago

I use Xvnc as the backend. Specifically, tigervnc-server. @matt335672