neutrinolabs / xrdp

xrdp: an open source RDP server
http://www.xrdp.org/
Apache License 2.0

Cannot connect xrdp to tigervnc within 3000msec. #1744

Closed: sakaia closed this issue 3 years ago

sakaia commented 3 years ago

I cannot connect xrdp to TigerVNC. Is there any suggestion for debugging the problem? From the code and the behaviour, it fails in trans_connect with a 3000 msec timeout.

From /var/log/xrdp.log

[20201208-09:44:24] [DEBUG] xrdp_wm_log_msg: connecting to sesman ip 127.0.0.1 port 3350
[20201208-09:44:25] [INFO ] xrdp_wm_log_msg: sesman connect ok
[20201208-09:44:25] [DEBUG] xrdp_wm_log_msg: sending login info to session manager, please wait...
[20201208-09:44:25] [DEBUG] return value from xrdp_mm_connect 0
[20201208-09:44:25] [INFO ] xrdp_wm_log_msg: login successful for display 1
[20201208-09:44:25] [DEBUG] xrdp_wm_log_msg: VNC started connecting
[20201208-09:44:25] [DEBUG] xrdp_wm_log_msg: VNC connecting to 127.0.0.1 5901
[20201208-09:44:28] [DEBUG] VNC error 1 after security negotiation
[20201208-09:44:28] [DEBUG] VNC error before sending share flag
[20201208-09:44:28] [DEBUG] VNC error before receiving server init
[20201208-09:44:28] [DEBUG] VNC error before receiving pixel format
[20201208-09:44:28] [DEBUG] VNC error before receiving name length
[20201208-09:44:28] [DEBUG] VNC error before receiving name
[20201208-09:44:28] [DEBUG] xrdp_wm_log_msg: VNC error - problem connecting
[20201208-09:44:28] [DEBUG] Closed socket 17 (AF_INET6 ::ffff:127.0.0.1 port 50704)
[20201208-09:44:28] [DEBUG] xrdp_wm_log_msg: some problem
[20201208-09:44:28] [DEBUG] xrdp_mm_module_cleanup
[20201208-09:44:28] [DEBUG] VNC mod_exit
[20201208-09:44:28] [DEBUG] Closed socket 16 (AF_INET6 ::1 port 57292)

My environment is:

matt335672 commented 3 years ago

Hi @sakaia

Are you trying to connect to a VNC server which is already active? Or are you trying to create a VNC session?

I'm done for the day here, but I'll try to get back to you tomorrow.

sakaia commented 3 years ago

Hello @matt335672. Thank you. I am trying to create a VNC session.

I am also trying this on AWS EC2 with the following instructions (on Ubuntu 18.04/20.04 LTS), but it fails with the same error message: https://medium.com/tecxperiments/setting-up-rdp-in-an-ubuntu-aws-ec2-instance-b11044ca849b

matt335672 commented 3 years ago

Thanks. It's possible the desktop is exiting quickly. When this happens the X server is killed and this can produce similar results to the above.

I'll give you my standard instructions for getting a complete system log. We can then look into that for more information:-

  1. Log in to your machine using ssh.
  2. Make a note of the system time in hours, minutes and seconds (HH:MM:SS) with the date command
  3. Try to connect from your RDP client
  4. After the connection failure, use the command sudo journalctl -S HH:MM:SS (where HH:MM:SS is the time from step 2) to get a system log.
  5. Paste that here and we'll have a look at it.
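
In other words, the whole sequence from the ssh session looks roughly like this (a sketch; HH:MM:SS stands for the time you noted in step 2):

date                          # step 2: note the current time
# step 3: connect from your RDP client and let the failure happen
sudo journalctl -S HH:MM:SS   # step 4: system log from that time onwards
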
sakaia commented 3 years ago

Thank you for your suggestion. I have uploaded the log below. Any comments would be helpful.

P.S. Since my previous comment, xrdp on Amazon Linux 2 works with the following instructions (Ubuntu still does not work): https://qiita.com/98lerr/items/a7678db39a6519248f75

-- Logs begin at Mon 2020-09-14 02:48:05 UTC, end at Wed 2020-12-09 09:12:02 UTC. --
Dec 09 09:09:44 testserver kubelet[2761]: W1209 09:09:44.536664    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/c699bc19-997b-4092-b326-65bd92a01f36/volumes/kubernetes.io~secret/tls-assets and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:09:44 testserver kubelet[2761]: W1209 09:09:44.536925    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/c699bc19-997b-4092-b326-65bd92a01f36/volumes/kubernetes.io~secret/kube-prometheus-stack-1605-prometheus-token-5sb7z and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:09:44 testserver kubelet[2761]: W1209 09:09:44.537016    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/c699bc19-997b-4092-b326-65bd92a01f36/volumes/kubernetes.io~secret/config and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:09:44 testserver kubelet[2761]: W1209 09:09:44.538451    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/c699bc19-997b-4092-b326-65bd92a01f36/volumes/kubernetes.io~configmap/prometheus-kube-prometheus-stack-1605-prometheus-rulefiles-0 and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:09:46 testserver kubelet[2761]: I1209 09:09:46.394174    2761 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 048d949928c65d02eb995d4019588566424fba505aebe289bca00160f8603b51
Dec 09 09:09:46 testserver kubelet[2761]: E1209 09:09:46.394761    2761 pod_workers.go:191] Error syncing pod b8c8fafa-31f7-4866-b547-0bf241c53541 ("nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"), skipping: failed to "StartContainer" for "nvidia-device-plugin-ctr" with CrashLoopBackOff: "back-off 5m0s restarting failed container=nvidia-device-plugin-ctr pod=nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"
Dec 09 09:09:47 testserver kubelet[2761]: W1209 09:09:47.270400    2761 container.go:526] Failed to update stats for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod862e84c0_9554_436e_a37d_6dccdbe160c8.slice/docker-d4b01f6a1376c7cf575b26fedf0ccecd8b092ee384acfefa4fbe9caef8a2a404.scope": error while getting gpu utilization: nvml: Not Supported
Dec 09 09:09:59 testserver kubelet[2761]: I1209 09:09:59.394340    2761 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 048d949928c65d02eb995d4019588566424fba505aebe289bca00160f8603b51
Dec 09 09:09:59 testserver containerd[2555]: time="2020-12-09T09:09:59.570850578Z" level=info msg="shim containerd-shim started" address=/containerd-shim/598235a628d48b91ac35904b4a668c832d597df3017e490db108bcd70399764f.sock debug=false pid=77942
Dec 09 09:09:59 testserver systemd[1]: Started libcontainer container b127310143b2740ba715b594332543453d3fdfe953606ad105d34a27cfd46442.
Dec 09 09:10:00 testserver kubelet[2761]: W1209 09:10:00.016239    2761 container.go:526] Failed to update stats for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8c8fafa_31f7_4866_b547_0bf241c53541.slice/docker-b127310143b2740ba715b594332543453d3fdfe953606ad105d34a27cfd46442.scope": error while getting gpu utilization: nvml: Not Supported
Dec 09 09:10:00 testserver systemd[1]: docker-b127310143b2740ba715b594332543453d3fdfe953606ad105d34a27cfd46442.scope: Consumed 68ms CPU time
Dec 09 09:10:00 testserver containerd[2555]: time="2020-12-09T09:10:00.130563807Z" level=info msg="shim reaped" id=b127310143b2740ba715b594332543453d3fdfe953606ad105d34a27cfd46442
Dec 09 09:10:00 testserver dockerd[3197]: time="2020-12-09T09:10:00.140710040Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 09 09:10:00 testserver kubelet[2761]: I1209 09:10:00.391070    2761 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 048d949928c65d02eb995d4019588566424fba505aebe289bca00160f8603b51
Dec 09 09:10:00 testserver kubelet[2761]: I1209 09:10:00.391577    2761 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b127310143b2740ba715b594332543453d3fdfe953606ad105d34a27cfd46442
Dec 09 09:10:00 testserver kubelet[2761]: E1209 09:10:00.392122    2761 pod_workers.go:191] Error syncing pod b8c8fafa-31f7-4866-b547-0bf241c53541 ("nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"), skipping: failed to "StartContainer" for "nvidia-device-plugin-ctr" with CrashLoopBackOff: "back-off 5m0s restarting failed container=nvidia-device-plugin-ctr pod=nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"
Dec 09 09:10:01 testserver sshd[74672]: Received disconnect from 10.0.58.8 port 54117:11: disconnected by server request
Dec 09 09:10:01 testserver sshd[74672]: Disconnected from user testadmin 10.0.58.8 port 54117
Dec 09 09:10:01 testserver sshd[74257]: pam_unix(sshd:session): session closed for user testadmin
Dec 09 09:10:01 testserver systemd-logind[2253]: Removed session 34.
Dec 09 09:10:01 testserver systemd[1]: Stopping User Manager for UID 1000...
Dec 09 09:10:01 testserver systemd[74458]: Stopped target Default.
Dec 09 09:10:01 testserver systemd[74458]: Stopped target Basic System.
Dec 09 09:10:01 testserver systemd[74458]: Stopped target Paths.
Dec 09 09:10:01 testserver systemd[74458]: Stopped Pending report trigger for Ubuntu Report.
Dec 09 09:10:01 testserver systemd[74458]: Stopped target Timers.
Dec 09 09:10:01 testserver systemd[74458]: Stopped target Sockets.
Dec 09 09:10:01 testserver systemd[74458]: Closed GnuPG network certificate management daemon.
Dec 09 09:10:01 testserver systemd[74458]: Closed GnuPG cryptographic agent and passphrase cache (restricted).
Dec 09 09:10:01 testserver systemd[74458]: Closed REST API socket for snapd user session agent.
Dec 09 09:10:01 testserver systemd[74458]: Closed GnuPG cryptographic agent (ssh-agent emulation).
Dec 09 09:10:01 testserver systemd[74458]: Closed GnuPG cryptographic agent and passphrase cache.
Dec 09 09:10:01 testserver systemd[74458]: Closed D-Bus User Message Bus Socket.
Dec 09 09:10:01 testserver systemd[74458]: Closed GnuPG cryptographic agent and passphrase cache (access for web browsers).
Dec 09 09:10:01 testserver systemd[74458]: Reached target Shutdown.
Dec 09 09:10:01 testserver systemd[74458]: Starting Exit the Session...
Dec 09 09:10:01 testserver systemd[74458]: Received SIGRTMIN+24 from PID 78060 (kill).
Dec 09 09:10:01 testserver systemd[1]: Stopped User Manager for UID 1000.
Dec 09 09:10:01 testserver systemd[1]: Removed slice User Slice of testadmin.
Dec 09 09:10:06 testserver kubelet[2761]: W1209 09:10:06.651211    2761 container.go:526] Failed to update stats for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95cd8cff_ef55_41cb_8e20_462995669fee.slice/docker-aee26e8e0751a16c7a25412b6aa5e066d56dbd29909e46e029d8e1bde6ca5a1a.scope": error while getting gpu utilization: nvml: Not Supported
Dec 09 09:10:09 testserver xrdp[44407]: (44407)(140214425134912)[INFO ] Socket 12: AF_INET6 connection received from ::ffff:10.87.120.42 port 54131
Dec 09 09:10:10 testserver xrdp[44407]: (44407)(140214425134912)[DEBUG] Closed socket 12 (AF_INET6 ::ffff:10.24.78.111 port 3389)
Dec 09 09:10:10 testserver xrdp[78314]: (78314)(140214425134912)[DEBUG] Closed socket 11 (AF_INET6 :: port 3389)
Dec 09 09:10:10 testserver xrdp[78314]: (78314)(140214425134912)[INFO ] Using default X.509 certificate: /etc/xrdp/cert.pem
Dec 09 09:10:10 testserver xrdp[78314]: (78314)(140214425134912)[INFO ] Using default X.509 key file: /etc/xrdp/key.pem
Dec 09 09:10:10 testserver xrdp[78314]: (78314)(140214425134912)[ERROR] Cannot read private key file /etc/xrdp/key.pem: Permission denied
Dec 09 09:10:10 testserver xrdp[78314]: (78314)(140214425134912)[DEBUG] TLSv1.2 enabled
Dec 09 09:10:10 testserver xrdp[78314]: (78314)(140214425134912)[DEBUG] TLSv1.1 enabled
Dec 09 09:10:10 testserver xrdp[78314]: (78314)(140214425134912)[DEBUG] TLSv1 enabled
Dec 09 09:10:10 testserver xrdp[78314]: (78314)(140214425134912)[DEBUG] Security layer: requested 11, selected 0
Dec 09 09:10:10 testserver xrdp[78314]: (78314)(140214425134912)[DEBUG] Closed socket 12 (AF_INET6 ::ffff:10.24.78.111 port 3389)
Dec 09 09:10:11 testserver xrdp[44407]: (44407)(140214425134912)[INFO ] Socket 12: AF_INET6 connection received from ::ffff:10.87.120.42 port 54132
Dec 09 09:10:11 testserver xrdp[44407]: (44407)(140214425134912)[DEBUG] Closed socket 12 (AF_INET6 ::ffff:10.24.78.111 port 3389)
Dec 09 09:10:11 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] Closed socket 11 (AF_INET6 :: port 3389)
Dec 09 09:10:11 testserver xrdp[78315]: (78315)(140214425134912)[INFO ] Using default X.509 certificate: /etc/xrdp/cert.pem
Dec 09 09:10:11 testserver xrdp[78315]: (78315)(140214425134912)[INFO ] Using default X.509 key file: /etc/xrdp/key.pem
Dec 09 09:10:11 testserver xrdp[78315]: (78315)(140214425134912)[ERROR] Cannot read private key file /etc/xrdp/key.pem: Permission denied
Dec 09 09:10:11 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] TLSv1.2 enabled
Dec 09 09:10:11 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] TLSv1.1 enabled
Dec 09 09:10:11 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] TLSv1 enabled
Dec 09 09:10:11 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] Security layer: requested 0, selected 0
Dec 09 09:10:11 testserver xrdp[78315]: (78315)(140214425134912)[INFO ] connected client computer name: VPC-O08P1080113
Dec 09 09:10:11 testserver xrdp[78315]: (78315)(140214425134912)[INFO ] Non-TLS connection established from ::ffff:10.87.120.42 port 54132: encrypted with standard RDP security
Dec 09 09:10:12 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] xrdp_000131eb_wm_login_mode_event_00000001
Dec 09 09:10:12 testserver xrdp[78315]: (78315)(140214425134912)[INFO ] Cannot find keymap file /etc/xrdp/km-e0010411.ini
Dec 09 09:10:12 testserver xrdp[78315]: (78315)(140214425134912)[INFO ] Loading keymap file /etc/xrdp/km-00000411.ini
Dec 09 09:10:12 testserver xrdp[78315]: (78315)(140214425134912)[WARN ] local keymap file for 0xe0010411 found and doesn't match built in keymap, using local keymap file
Dec 09 09:10:13 testserver kubelet[2761]: I1209 09:10:13.394163    2761 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b127310143b2740ba715b594332543453d3fdfe953606ad105d34a27cfd46442
Dec 09 09:10:13 testserver kubelet[2761]: E1209 09:10:13.395078    2761 pod_workers.go:191] Error syncing pod b8c8fafa-31f7-4866-b547-0bf241c53541 ("nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"), skipping: failed to "StartContainer" for "nvidia-device-plugin-ctr" with CrashLoopBackOff: "back-off 5m0s restarting failed container=nvidia-device-plugin-ctr pod=nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"
Dec 09 09:10:24 testserver kubelet[2761]: I1209 09:10:24.394221    2761 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b127310143b2740ba715b594332543453d3fdfe953606ad105d34a27cfd46442
Dec 09 09:10:24 testserver kubelet[2761]: E1209 09:10:24.394808    2761 pod_workers.go:191] Error syncing pod b8c8fafa-31f7-4866-b547-0bf241c53541 ("nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"), skipping: failed to "StartContainer" for "nvidia-device-plugin-ctr" with CrashLoopBackOff: "back-off 5m0s restarting failed container=nvidia-device-plugin-ctr pod=nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"
Dec 09 09:10:26 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] xrdp_wm_log_msg: connecting to sesman ip 127.0.0.1 port 3350
Dec 09 09:10:26 testserver xrdp-sesman[44383]: (44383)(140205018486080)[INFO ] A connection received from ::1 port 60056
Dec 09 09:10:26 testserver kubelet[2761]: W1209 09:10:26.493402    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/1c006eea-2179-4ecb-9aa0-5f1c8b31da7b/volumes/kubernetes.io~secret/kube-prometheus-stack-1605-operator-token-56zht and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:10:26 testserver kubelet[2761]: W1209 09:10:26.493451    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/1c006eea-2179-4ecb-9aa0-5f1c8b31da7b/volumes/kubernetes.io~secret/tls-secret and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:10:26 testserver xrdp[78315]: (78315)(140214425134912)[INFO ] xrdp_wm_log_msg: sesman connect ok
Dec 09 09:10:26 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] xrdp_wm_log_msg: sending login info to session manager, please wait...
Dec 09 09:10:26 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] return value from xrdp_mm_connect 0
Dec 09 09:10:26 testserver xrdp-sesman[44383]: (44383)(140205018486080)[INFO ] ++ created session (access granted): username testadmin, ip ::ffff:10.87.120.42:54132 - socket: 12
Dec 09 09:10:26 testserver xrdp-sesman[44383]: (44383)(140205018486080)[INFO ] starting Xvnc session...
Dec 09 09:10:26 testserver xrdp-sesman[44383]: (44383)(140205018486080)[DEBUG] Closed socket 9 (AF_INET6 :: port 5901)
Dec 09 09:10:26 testserver xrdp-sesman[44383]: (44383)(140205018486080)[DEBUG] Closed socket 9 (AF_INET6 :: port 6001)
Dec 09 09:10:26 testserver xrdp-sesman[44383]: (44383)(140205018486080)[DEBUG] Closed socket 9 (AF_INET6 :: port 6201)
Dec 09 09:10:26 testserver xrdp-sesman[44383]: (44383)(140205018486080)[DEBUG] Closed socket 8 (AF_INET6 ::1 port 3350)
Dec 09 09:10:26 testserver xrdp[78315]: (78315)(140214425134912)[INFO ] xrdp_wm_log_msg: login successful for display 1
Dec 09 09:10:26 testserver xrdp-sesman[78671]: (78671)(140205018486080)[INFO ] calling auth_start_session from pid 78671
Dec 09 09:10:26 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] xrdp_wm_log_msg: VNC started connecting
Dec 09 09:10:26 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] xrdp_wm_log_msg: VNC connecting to 127.0.0.1 5901
Dec 09 09:10:26 testserver xrdp-sesman[78671]: pam_unix(xrdp-sesman:session): session opened for user testadmin by (uid=0)
Dec 09 09:10:26 testserver systemd[1]: Created slice User Slice of testadmin.
Dec 09 09:10:26 testserver systemd[1]: Starting User Manager for UID 1000...
Dec 09 09:10:26 testserver systemd-logind[2253]: New session c10 of user testadmin.
Dec 09 09:10:26 testserver systemd[1]: Started Session c10 of user testadmin.
Dec 09 09:10:26 testserver systemd[78672]: pam_unix(systemd-user:session): session opened for user testadmin by (uid=0)
Dec 09 09:10:26 testserver systemd[78672]: Starting D-Bus User Message Bus Socket.
Dec 09 09:10:26 testserver systemd[78672]: Listening on REST API socket for snapd user session agent.
Dec 09 09:10:26 testserver systemd[78672]: Listening on GnuPG network certificate management daemon.
Dec 09 09:10:26 testserver systemd[78672]: Started Pending report trigger for Ubuntu Report.
Dec 09 09:10:26 testserver systemd[78672]: Reached target Paths.
Dec 09 09:10:26 testserver systemd[78672]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
Dec 09 09:10:26 testserver systemd[78672]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
Dec 09 09:10:26 testserver systemd[78672]: Listening on GnuPG cryptographic agent and passphrase cache.
Dec 09 09:10:26 testserver systemd[78672]: Reached target Timers.
Dec 09 09:10:26 testserver systemd[78672]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browsers).
Dec 09 09:10:26 testserver systemd[78672]: Listening on D-Bus User Message Bus Socket.
Dec 09 09:10:26 testserver systemd[78672]: Reached target Sockets.
Dec 09 09:10:26 testserver systemd[78672]: Reached target Basic System.
Dec 09 09:10:26 testserver systemd[78672]: Reached target Default.
Dec 09 09:10:26 testserver systemd[78672]: Startup finished in 62ms.
Dec 09 09:10:26 testserver systemd[1]: Started User Manager for UID 1000.
Dec 09 09:10:26 testserver xrdp-sesman[78671]: (78671)(140205018486080)[DEBUG] Closed socket 7 (AF_INET6 :: port 3350)
Dec 09 09:10:26 testserver xrdp-sesman[78671]: (78671)(140205018486080)[DEBUG] Closed socket 8 (AF_INET6 ::1 port 3350)
Dec 09 09:10:26 testserver xrdp-sesman[78685]: (78685)(140205018486080)[INFO ] Xvnc :1 -auth .Xauthority -geometry 1920x1080 -depth 32 -rfbauth /home/testadmin/.vnc/sesman_testadmin_passwd:1 -bs -nolisten tcp -localhost -dpi 96
Dec 09 09:10:27 testserver xrdp-sesman[78671]: (78671)(140205018486080)[CORE ] waiting for window manager (pid 78684) to exit
Dec 09 09:10:27 testserver xrdp-sesman[78671]: (78671)(140205018486080)[CORE ] window manager (pid 78684) did exit, cleaning up session
Dec 09 09:10:27 testserver xrdp-sesman[78671]: (78671)(140205018486080)[INFO ] calling auth_stop_session and auth_end from pid 78671
Dec 09 09:10:27 testserver xrdp-sesman[78671]: pam_unix(xrdp-sesman:session): session closed for user testadmin
Dec 09 09:10:27 testserver xrdp-sesman[78671]: (78671)(140205018486080)[DEBUG] cleanup_sockets:
Dec 09 09:10:27 testserver xrdp-sesman[78671]: (78671)(140205018486080)[DEBUG] cleanup_sockets: deleting /var/run/xrdp/sockdir/xrdpapi_1
Dec 09 09:10:27 testserver xrdp-sesman[44383]: (44383)(140205018486080)[INFO ] ++ terminated session:  username testadmin, display :1.0, session_pid 78671, ip ::ffff:10.87.120.42:54132 - socket: 12
Dec 09 09:10:27 testserver systemd-logind[2253]: Removed session c10.
Dec 09 09:10:27 testserver systemd[1]: Stopping User Manager for UID 1000...
Dec 09 09:10:27 testserver systemd[78672]: Stopped target Default.
Dec 09 09:10:27 testserver systemd[78672]: Stopped target Basic System.
Dec 09 09:10:27 testserver systemd[78672]: Stopped target Sockets.
Dec 09 09:10:27 testserver systemd[78672]: Closed REST API socket for snapd user session agent.
Dec 09 09:10:27 testserver systemd[78672]: Closed GnuPG cryptographic agent and passphrase cache (restricted).
Dec 09 09:10:27 testserver systemd[78672]: Closed GnuPG cryptographic agent (ssh-agent emulation).
Dec 09 09:10:27 testserver systemd[78672]: Stopped target Timers.
Dec 09 09:10:27 testserver systemd[78672]: Stopped target Paths.
Dec 09 09:10:27 testserver systemd[78672]: Stopped Pending report trigger for Ubuntu Report.
Dec 09 09:10:27 testserver systemd[78672]: Closed GnuPG cryptographic agent and passphrase cache.
Dec 09 09:10:27 testserver systemd[78672]: Closed D-Bus User Message Bus Socket.
Dec 09 09:10:27 testserver systemd[78672]: Closed GnuPG cryptographic agent and passphrase cache (access for web browsers).
Dec 09 09:10:27 testserver systemd[78672]: Closed GnuPG network certificate management daemon.
Dec 09 09:10:27 testserver systemd[78672]: Reached target Shutdown.
Dec 09 09:10:27 testserver systemd[78672]: Starting Exit the Session...
Dec 09 09:10:27 testserver systemd[78672]: Received SIGRTMIN+24 from PID 78735 (kill).
Dec 09 09:10:27 testserver systemd[1]: Stopped User Manager for UID 1000.
Dec 09 09:10:27 testserver systemd[1]: Removed slice User Slice of testadmin.
Dec 09 09:10:29 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] VNC error 1 after security negotiation
Dec 09 09:10:29 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] VNC error before sending share flag
Dec 09 09:10:29 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] VNC error before receiving server init
Dec 09 09:10:29 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] VNC error before receiving pixel format
Dec 09 09:10:29 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] VNC error before receiving name length
Dec 09 09:10:29 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] VNC error before receiving name
Dec 09 09:10:29 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] xrdp_wm_log_msg: VNC error - problem connecting
Dec 09 09:10:29 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] Closed socket 17 (AF_INET6 ::ffff:127.0.0.1 port 53462)
Dec 09 09:10:29 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] xrdp_wm_log_msg: some problem
Dec 09 09:10:29 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] xrdp_mm_module_cleanup
Dec 09 09:10:29 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] VNC mod_exit
Dec 09 09:10:29 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] Closed socket 16 (AF_INET6 ::1 port 60056)
Dec 09 09:10:34 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] Closed socket 12 (AF_INET6 ::ffff:10.24.78.111 port 3389)
Dec 09 09:10:35 testserver xrdp[78315]: (78315)(140214425134912)[DEBUG] xrdp_mm_module_cleanup
Dec 09 09:10:35 testserver kubelet[2761]: I1209 09:10:35.394203    2761 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b127310143b2740ba715b594332543453d3fdfe953606ad105d34a27cfd46442
Dec 09 09:10:35 testserver kubelet[2761]: E1209 09:10:35.394806    2761 pod_workers.go:191] Error syncing pod b8c8fafa-31f7-4866-b547-0bf241c53541 ("nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"), skipping: failed to "StartContainer" for "nvidia-device-plugin-ctr" with CrashLoopBackOff: "back-off 5m0s restarting failed container=nvidia-device-plugin-ctr pod=nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"
Dec 09 09:10:39 testserver kubelet[2761]: W1209 09:10:39.440765    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/661e66cc-bfc9-4539-a599-4f690d39a43f/volumes/kubernetes.io~secret/kube-prometheus-stack-1605863637-prometheus-node-exporter-qjzc6 and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:10:44 testserver kubelet[2761]: W1209 09:10:44.460618    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/cb7ad5f1-9df8-4d47-9124-3a3bb429f935/volumes/kubernetes.io~secret/kube-prometheus-stack-1605863637-kube-state-metrics-token-nscpx and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:10:46 testserver kubelet[2761]: I1209 09:10:46.394167    2761 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b127310143b2740ba715b594332543453d3fdfe953606ad105d34a27cfd46442
Dec 09 09:10:46 testserver kubelet[2761]: E1209 09:10:46.394728    2761 pod_workers.go:191] Error syncing pod b8c8fafa-31f7-4866-b547-0bf241c53541 ("nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"), skipping: failed to "StartContainer" for "nvidia-device-plugin-ctr" with CrashLoopBackOff: "back-off 5m0s restarting failed container=nvidia-device-plugin-ctr pod=nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"
Dec 09 09:10:51 testserver sshd[79118]: Accepted password for testadmin from 10.0.58.8 port 53539 ssh2
Dec 09 09:10:51 testserver sshd[79118]: pam_unix(sshd:session): session opened for user testadmin by (uid=0)
Dec 09 09:10:51 testserver systemd[1]: Created slice User Slice of testadmin.
Dec 09 09:10:51 testserver systemd[1]: Starting User Manager for UID 1000...
Dec 09 09:10:51 testserver systemd-logind[2253]: New session 37 of user testadmin.
Dec 09 09:10:51 testserver systemd[1]: Started Session 37 of user testadmin.
Dec 09 09:10:51 testserver systemd[79288]: pam_unix(systemd-user:session): session opened for user testadmin by (uid=0)
Dec 09 09:10:51 testserver systemd[79288]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
Dec 09 09:10:51 testserver systemd[79288]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browsers).
Dec 09 09:10:51 testserver systemd[79288]: Started Pending report trigger for Ubuntu Report.
Dec 09 09:10:51 testserver systemd[79288]: Reached target Paths.
Dec 09 09:10:51 testserver systemd[79288]: Listening on GnuPG cryptographic agent and passphrase cache.
Dec 09 09:10:51 testserver systemd[79288]: Starting D-Bus User Message Bus Socket.
Dec 09 09:10:51 testserver systemd[79288]: Listening on REST API socket for snapd user session agent.
Dec 09 09:10:51 testserver systemd[79288]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
Dec 09 09:10:51 testserver systemd[79288]: Listening on GnuPG network certificate management daemon.
Dec 09 09:10:51 testserver systemd[79288]: Reached target Timers.
Dec 09 09:10:51 testserver systemd[79288]: Listening on D-Bus User Message Bus Socket.
Dec 09 09:10:51 testserver systemd[79288]: Reached target Sockets.
Dec 09 09:10:51 testserver systemd[79288]: Reached target Basic System.
Dec 09 09:10:51 testserver systemd[79288]: Reached target Default.
Dec 09 09:10:51 testserver systemd[79288]: Startup finished in 64ms.
Dec 09 09:10:51 testserver systemd[1]: Started User Manager for UID 1000.
Dec 09 09:10:53 testserver kubelet[2761]: W1209 09:10:53.495256    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/062ebc1d-1c99-4136-ad27-b7f5c7e38bfc/volumes/kubernetes.io~secret/tls-assets and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:10:53 testserver kubelet[2761]: W1209 09:10:53.495495    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/062ebc1d-1c99-4136-ad27-b7f5c7e38bfc/volumes/kubernetes.io~secret/config-volume and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:10:53 testserver kubelet[2761]: W1209 09:10:53.495593    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/062ebc1d-1c99-4136-ad27-b7f5c7e38bfc/volumes/kubernetes.io~secret/kube-prometheus-stack-1605-alertmanager-token-9hsx5 and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:10:59 testserver kubelet[2761]: I1209 09:10:59.394385    2761 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b127310143b2740ba715b594332543453d3fdfe953606ad105d34a27cfd46442
Dec 09 09:10:59 testserver kubelet[2761]: E1209 09:10:59.395009    2761 pod_workers.go:191] Error syncing pod b8c8fafa-31f7-4866-b547-0bf241c53541 ("nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"), skipping: failed to "StartContainer" for "nvidia-device-plugin-ctr" with CrashLoopBackOff: "back-off 5m0s restarting failed container=nvidia-device-plugin-ctr pod=nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"
Dec 09 09:11:00 testserver kubelet[2761]: W1209 09:11:00.523068    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/c699bc19-997b-4092-b326-65bd92a01f36/volumes/kubernetes.io~secret/tls-assets and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:11:00 testserver kubelet[2761]: W1209 09:11:00.523377    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/c699bc19-997b-4092-b326-65bd92a01f36/volumes/kubernetes.io~secret/config and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:11:00 testserver kubelet[2761]: W1209 09:11:00.523544    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/c699bc19-997b-4092-b326-65bd92a01f36/volumes/kubernetes.io~secret/kube-prometheus-stack-1605-prometheus-token-5sb7z and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:11:00 testserver kubelet[2761]: W1209 09:11:00.524418    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/c699bc19-997b-4092-b326-65bd92a01f36/volumes/kubernetes.io~configmap/prometheus-kube-prometheus-stack-1605-prometheus-rulefiles-0 and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:11:01 testserver kubelet[2761]: W1209 09:11:01.527674    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/540abda3-a420-412f-b72b-2de428b46b82/volumes/kubernetes.io~configmap/sc-dashboard-provider and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:11:01 testserver kubelet[2761]: W1209 09:11:01.527674    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/540abda3-a420-412f-b72b-2de428b46b82/volumes/kubernetes.io~secret/kube-prometheus-stack-1605863637-grafana-token-nl2xc and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:11:01 testserver kubelet[2761]: W1209 09:11:01.527851    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/540abda3-a420-412f-b72b-2de428b46b82/volumes/kubernetes.io~configmap/config and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:11:03 testserver kubelet[2761]: W1209 09:11:03.694858    2761 container.go:526] Failed to update stats for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod862e84c0_9554_436e_a37d_6dccdbe160c8.slice/docker-d4b01f6a1376c7cf575b26fedf0ccecd8b092ee384acfefa4fbe9caef8a2a404.scope": error while getting gpu utilization: nvml: Not Supported
Dec 09 09:11:13 testserver kubelet[2761]: I1209 09:11:13.394378    2761 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b127310143b2740ba715b594332543453d3fdfe953606ad105d34a27cfd46442
Dec 09 09:11:13 testserver kubelet[2761]: E1209 09:11:13.394950    2761 pod_workers.go:191] Error syncing pod b8c8fafa-31f7-4866-b547-0bf241c53541 ("nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"), skipping: failed to "StartContainer" for "nvidia-device-plugin-ctr" with CrashLoopBackOff: "back-off 5m0s restarting failed container=nvidia-device-plugin-ctr pod=nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"
Dec 09 09:11:13 testserver sudo[79798]: testadmin : TTY=pts/0 ; PWD=/home/testadmin ; USER=root ; COMMAND=/bin/journalctl -S 09:09:42
Dec 09 09:11:13 testserver sudo[79798]: pam_unix(sudo:session): session opened for user root by testadmin(uid=0)
Dec 09 09:11:13 testserver kubelet[2761]: W1209 09:11:13.512376    2761 container.go:526] Failed to update stats for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95cd8cff_ef55_41cb_8e20_462995669fee.slice/docker-aee26e8e0751a16c7a25412b6aa5e066d56dbd29909e46e029d8e1bde6ca5a1a.scope": error while getting gpu utilization: nvml: Not Supported
Dec 09 09:11:25 testserver kubelet[2761]: I1209 09:11:25.394534    2761 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b127310143b2740ba715b594332543453d3fdfe953606ad105d34a27cfd46442
Dec 09 09:11:25 testserver kubelet[2761]: E1209 09:11:25.395224    2761 pod_workers.go:191] Error syncing pod b8c8fafa-31f7-4866-b547-0bf241c53541 ("nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"), skipping: failed to "StartContainer" for "nvidia-device-plugin-ctr" with CrashLoopBackOff: "back-off 5m0s restarting failed container=nvidia-device-plugin-ctr pod=nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"
Dec 09 09:11:34 testserver systemd[1]: Starting Cleanup of Temporary Directories...
Dec 09 09:11:34 testserver systemd[1]: Started Cleanup of Temporary Directories.
Dec 09 09:11:40 testserver kubelet[2761]: I1209 09:11:40.394137    2761 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b127310143b2740ba715b594332543453d3fdfe953606ad105d34a27cfd46442
Dec 09 09:11:40 testserver kubelet[2761]: E1209 09:11:40.394747    2761 pod_workers.go:191] Error syncing pod b8c8fafa-31f7-4866-b547-0bf241c53541 ("nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"), skipping: failed to "StartContainer" for "nvidia-device-plugin-ctr" with CrashLoopBackOff: "back-off 5m0s restarting failed container=nvidia-device-plugin-ctr pod=nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"
Dec 09 09:11:51 testserver kubelet[2761]: W1209 09:11:51.517960    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/661e66cc-bfc9-4539-a599-4f690d39a43f/volumes/kubernetes.io~secret/kube-prometheus-stack-1605863637-prometheus-node-exporter-qjzc6 and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:11:52 testserver kubelet[2761]: W1209 09:11:52.521996    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/1c006eea-2179-4ecb-9aa0-5f1c8b31da7b/volumes/kubernetes.io~secret/tls-secret and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:11:52 testserver kubelet[2761]: W1209 09:11:52.522146    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/1c006eea-2179-4ecb-9aa0-5f1c8b31da7b/volumes/kubernetes.io~secret/kube-prometheus-stack-1605-operator-token-56zht and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:11:55 testserver kubelet[2761]: I1209 09:11:55.394519    2761 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b127310143b2740ba715b594332543453d3fdfe953606ad105d34a27cfd46442
Dec 09 09:11:55 testserver kubelet[2761]: E1209 09:11:55.395136    2761 pod_workers.go:191] Error syncing pod b8c8fafa-31f7-4866-b547-0bf241c53541 ("nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"), skipping: failed to "StartContainer" for "nvidia-device-plugin-ctr" with CrashLoopBackOff: "back-off 5m0s restarting failed container=nvidia-device-plugin-ctr pod=nvidia-device-plugin-1605865183-glk5x_kube-system(b8c8fafa-31f7-4866-b547-0bf241c53541)"
Dec 09 09:11:57 testserver sudo[79798]: pam_unix(sudo:session): session closed for user root
Dec 09 09:12:01 testserver kubelet[2761]: W1209 09:12:01.455759    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/062ebc1d-1c99-4136-ad27-b7f5c7e38bfc/volumes/kubernetes.io~secret/config-volume and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:12:01 testserver kubelet[2761]: W1209 09:12:01.455951    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/062ebc1d-1c99-4136-ad27-b7f5c7e38bfc/volumes/kubernetes.io~secret/tls-assets and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:12:01 testserver kubelet[2761]: W1209 09:12:01.456117    2761 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/062ebc1d-1c99-4136-ad27-b7f5c7e38bfc/volumes/kubernetes.io~secret/kube-prometheus-stack-1605-alertmanager-token-9hsx5 and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Dec 09 09:12:02 testserver sudo[81056]: testadmin : TTY=pts/0 ; PWD=/home/testadmin ; USER=root ; COMMAND=/bin/journalctl -S 09:09:42
Dec 09 09:12:02 testserver sudo[81056]: pam_unix(sudo:session): session opened for user root by testadmin(uid=0)
matt335672 commented 3 years ago

There does indeed appear to be a problem with your session starting. These two lines are telling me that:-

Dec 09 09:10:27 testserver xrdp-sesman[78671]: (78671)(140205018486080)[CORE ] waiting for window manager (pid 78684) to exit
Dec 09 09:10:27 testserver xrdp-sesman[78671]: (78671)(140205018486080)[CORE ] window manager (pid 78684) did exit, cleaning up session

This could be down to the X server rejecting connections, among other possible causes.

Can you tell me the following:-

  1. Has the file /home/testadmin/.vnc/sesman_testadmin_passwd:1 been created by the login attempt?
  2. Is there a file /home/testadmin/.xsession-errors? If so, what is in it?
  3. What is the setting of DefaultWindowManager in /etc/xrdp/sesman.ini?
  4. If the setting above is /etc/xrdp/startwm.sh, what is the first line of /etc/xrdp/startwm.sh on your system?
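
All of these can be gathered from an ssh session with something like the following (a sketch using the paths from this report):

ls -l '/home/testadmin/.vnc/sesman_testadmin_passwd:1'   # 1: was the password file created?
cat /home/testadmin/.xsession-errors                     # 2: session error log, if present
grep DefaultWindowManager /etc/xrdp/sesman.ini           # 3: configured window manager script
head -n 1 /etc/xrdp/startwm.sh                           # 4: first line of the start script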

Thanks.

sakaia commented 3 years ago

Ubuntu 20.04 on EC2 still has this problem, so I am posting the Ubuntu 20.04 logs. After changing the instructions, the VNC connection now seems to succeed, but the window manager still shuts down shortly afterwards.

  1. Has the file /home/testadmin/.vnc/sesman_testadmin_passwd:1 been created by the login attempt?

Yes

  2. Is there a file /home/testadmin/.xsession-errors? If so, what is in it?

The following errors appeared (the ~/.xsession-errors content is shown later in this comment).

  3. What is the setting of DefaultWindowManager in /etc/xrdp/sesman.ini?

It is set to startwm.sh (the [Globals] section of sesman.ini is shown below).

  4. If the setting above is /etc/xrdp/startwm.sh, what is the first line of /etc/xrdp/startwm.sh on your system?

The header lines are as follows:

;; See `man 5 sesman.ini` for details

[Globals]
ListenAddress=127.0.0.1
ListenPort=3350
EnableUserWindowManager=true
; Give in relative path to user's home directory
UserWindowManager=startwm.sh
; Give in full path or relative path to /etc/xrdp
DefaultWindowManager=startwm.sh
; Give in full path or relative path to /etc/xrdp
ReconnectScript=reconnectwm.sh

Thanks. After changing the instructions on Ubuntu 20.04, the "VNC security level" and "VNC password ok" messages now appear, but the display still does not appear (it closes within a few seconds).

I will show you:

1) the changed instructions, 2) /var/log/xrdp.log, 3) /var/log/xrdp-sesman.log, 4) ~/.xsession-errors on Ubuntu 20.04, and 5) /etc/xrdp/startwm.sh.

The current instructions on Ubuntu 20.04 are as follows. (I also tried Ubuntu 18.04: the first attempt shows the same behaviour, but from the second attempt onwards it shows "VNC error 1 after security negotiation".) The output difference is in .xsession-errors.

sudo apt update
sudo apt -y install xrdp tigervnc-standalone-server
sudo systemctl enable xrdp
sudo adduser testuser

xrdp.log is as follows:

[20201209-15:09:04] [DEBUG] xrdp_wm_log_msg: connecting to sesman ip 127.0.0.1 port 3350
[20201209-15:09:04] [INFO ] xrdp_wm_log_msg: sesman connect ok
[20201209-15:09:04] [DEBUG] xrdp_wm_log_msg: sending login info to session manager, please wait...
[20201209-15:09:04] [DEBUG] return value from xrdp_mm_connect 0
[20201209-15:09:04] [INFO ] xrdp_wm_log_msg: login successful for display 10
[20201209-15:09:04] [DEBUG] xrdp_wm_log_msg: VNC started connecting
[20201209-15:09:04] [DEBUG] xrdp_wm_log_msg: VNC connecting to 127.0.0.1 5910
[20201209-15:09:04] [DEBUG] xrdp_wm_log_msg: VNC tcp connected
[20201209-15:09:05] [DEBUG] xrdp_wm_log_msg: VNC security level is 2 (1 = none, 2 = standard)
[20201209-15:09:05] [DEBUG] xrdp_wm_log_msg: VNC password ok
[20201209-15:09:05] [DEBUG] xrdp_wm_log_msg: VNC sending share flag
[20201209-15:09:05] [DEBUG] xrdp_wm_log_msg: VNC receiving server init
[20201209-15:09:05] [DEBUG] xrdp_wm_log_msg: VNC receiving pixel format
[20201209-15:09:05] [DEBUG] xrdp_wm_log_msg: VNC receiving name length
[20201209-15:09:05] [DEBUG] xrdp_wm_log_msg: VNC receiving name
[20201209-15:09:05] [DEBUG] xrdp_wm_log_msg: VNC sending pixel format
[20201209-15:09:05] [DEBUG] xrdp_wm_log_msg: VNC sending encodings
[20201209-15:09:05] [DEBUG] xrdp_wm_log_msg: VNC sending framebuffer update request
[20201209-15:09:05] [DEBUG] xrdp_wm_log_msg: VNC sending cursor
[20201209-15:09:05] [DEBUG] xrdp_wm_log_msg: VNC connection complete, connected ok
[20201209-15:09:05] [DEBUG] xrdp_wm_log_msg: connected ok
[20201209-15:09:05] [DEBUG] xrdp_mm_connect_chansrv: chansrv connect successful
[20201209-15:09:05] [DEBUG] Closed socket 16 (AF_INET6 ::1 port 58256)
[20201209-15:09:05] [DEBUG] VNC got clip data
[20201209-15:09:07] [DEBUG] Closed socket 12 (AF_INET6 ::ffff:172.31.39.121 port 3389)
[20201209-15:09:07] [DEBUG] xrdp_mm_module_cleanup
[20201209-15:09:07] [DEBUG] VNC mod_exit
[20201209-15:09:07] [DEBUG] Closed socket 17 (AF_INET6 ::ffff:127.0.0.1 port 54976)
[20201209-15:09:07] [DEBUG] Closed socket 18 (AF_UNIX)

xrdp-sesman.log

[20201209-15:09:04] [INFO ] A connection received from ::1 port 58256
[20201209-15:09:04] [INFO ] ++ created session (access granted): username sakaia, ip ::ffff:180.34.134.48:49887 - socket: 12
[20201209-15:09:04] [INFO ] starting Xvnc session...
[20201209-15:09:04] [DEBUG] Closed socket 9 (AF_INET6 :: port 5910)
[20201209-15:09:04] [DEBUG] Closed socket 9 (AF_INET6 :: port 6010)
[20201209-15:09:04] [DEBUG] Closed socket 9 (AF_INET6 :: port 6210)
[20201209-15:09:04] [INFO ] calling auth_start_session from pid 1996
[20201209-15:09:04] [DEBUG] Closed socket 8 (AF_INET6 ::1 port 3350)
[20201209-15:09:04] [DEBUG] Closed socket 7 (AF_INET6 ::1 port 3350)
[20201209-15:09:04] [DEBUG] Closed socket 8 (AF_INET6 ::1 port 3350)
[20201209-15:09:04] [INFO ] Xvnc :10 -auth .Xauthority -geometry 1920x1080 -depth 32 -rfbauth /home/sakaia/.vnc/sesman_passwd-sakaia@ip-172-31-39-121:10 -bs -nolisten tcp -localhost -dpi 96
[20201209-15:09:04] [CORE ] waiting for window manager (pid 2016) to exit
[20201209-15:09:07] [CORE ] window manager (pid 2016) did exit, cleaning up session
[20201209-15:09:07] [INFO ] calling auth_stop_session and auth_end from pid 1996
[20201209-15:09:07] [DEBUG] cleanup_sockets:
[20201209-15:09:07] [DEBUG] cleanup_sockets: deleting /run/xrdp/sockdir/xrdp_chansrv_audio_out_socket_10
[20201209-15:09:07] [DEBUG] cleanup_sockets: deleting /run/xrdp/sockdir/xrdp_chansrv_audio_in_socket_10
[20201209-15:09:07] [DEBUG] cleanup_sockets: deleting /run/xrdp/sockdir/xrdpapi_10
[20201209-15:09:07] [INFO ] ++ terminated session:  username sakaia, display :10.0, session_pid 1996, ip ::ffff:180.34.134.48:49887 - socket: 12

~/.xsession-errors

Xsession: X session started for sakaia at Wed Dec  9 15:09:04 UTC 2020
dbus-update-activation-environment: setting DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1001/bus
dbus-update-activation-environment: setting DISPLAY=:10.0
localuser:sakaia being added to access control list
dbus-update-activation-environment: setting QT_ACCESSIBILITY=1
dbus-update-activation-environment: setting USER=sakaia
dbus-update-activation-environment: setting XDG_SESSION_TYPE=x11
dbus-update-activation-environment: setting HOME=/home/sakaia
dbus-update-activation-environment: setting XRDP_PULSE_SINK_SOCKET=xrdp_chansrv_audio_out_socket_10
dbus-update-activation-environment: setting XRDP_PULSE_SOURCE_SOCKET=xrdp_chansrv_audio_in_socket_10
dbus-update-activation-environment: setting DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1001/bus
dbus-update-activation-environment: setting PULSE_SCRIPT=/etc/xrdp/pulse/default.pa
dbus-update-activation-environment: setting IM_CONFIG_PHASE=1
dbus-update-activation-environment: setting LOGNAME=sakaia
dbus-update-activation-environment: setting XDG_SESSION_CLASS=user
dbus-update-activation-environment: setting PATH=/sbin:/bin:/usr/bin:/usr/local/bin:/snap/bin
dbus-update-activation-environment: setting XRDP_SOCKET_PATH=/run/xrdp/sockdir
dbus-update-activation-environment: setting XDG_RUNTIME_DIR=/run/user/1001
dbus-update-activation-environment: setting DISPLAY=:10.0
dbus-update-activation-environment: setting LANG=C.UTF-8
dbus-update-activation-environment: setting UID=1001
dbus-update-activation-environment: setting SHELL=/bin/bash
dbus-update-activation-environment: setting QT_ACCESSIBILITY=1
dbus-update-activation-environment: setting XRDP_SESSION=1
dbus-update-activation-environment: setting GPG_AGENT_INFO=/run/user/1001/gnupg/S.gpg-agent:0:1
dbus-update-activation-environment: setting PWD=/home/sakaia
dbus-update-activation-environment: setting XDG_DATA_DIRS=/usr/share/gnome:/usr/local/share:/usr/share:/var/lib/snapd/desktop

startwm.sh

#!/bin/sh
# xrdp X session start script (c) 2015, 2017 mirabilos
# published under The MirOS Licence

if test -r /etc/profile; then
        . /etc/profile
fi

if test -r /etc/default/locale; then
        . /etc/default/locale
        test -z "${LANG+x}" || export LANG
        test -z "${LANGUAGE+x}" || export LANGUAGE
        test -z "${LC_ADDRESS+x}" || export LC_ADDRESS
        test -z "${LC_ALL+x}" || export LC_ALL
        test -z "${LC_COLLATE+x}" || export LC_COLLATE
        test -z "${LC_CTYPE+x}" || export LC_CTYPE
        test -z "${LC_IDENTIFICATION+x}" || export LC_IDENTIFICATION
        test -z "${LC_MEASUREMENT+x}" || export LC_MEASUREMENT
        test -z "${LC_MESSAGES+x}" || export LC_MESSAGES
        test -z "${LC_MONETARY+x}" || export LC_MONETARY
        test -z "${LC_NAME+x}" || export LC_NAME
        test -z "${LC_NUMERIC+x}" || export LC_NUMERIC
        test -z "${LC_PAPER+x}" || export LC_PAPER
        test -z "${LC_TELEPHONE+x}" || export LC_TELEPHONE
        test -z "${LC_TIME+x}" || export LC_TIME
        test -z "${LOCPATH+x}" || export LOCPATH
fi

if test -r /etc/profile; then
        . /etc/profile
fi

test -x /etc/X11/Xsession && exec /etc/X11/Xsession
exec /bin/sh /etc/X11/Xsession
matt335672 commented 3 years ago

Thanks for the detailed update.

Can you edit /etc/xrdp/startwm.sh and replace this line:-

exec /bin/sh /etc/X11/Xsession

with

exec /bin/sh -x /etc/X11/Xsession

The '-x' will add more detailed tracing info to ~/.xsession-errors.
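
For reference, the end of /etc/xrdp/startwm.sh will then look like this (only the final line changes from the file posted above):

test -x /etc/X11/Xsession && exec /etc/X11/Xsession
exec /bin/sh -x /etc/X11/Xsession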

Then try again and repost two files:-

1) The contents of ~/.xsession-errors, and 2) the output of the command sudo journalctl -S -5m, which is the system log for the last 5 minutes.

sakaia commented 3 years ago

I attach the files. From the logs, it seems to be a GNOME problem.

Dec 11 14:50:15 ip-172-31-30-164 systemd[13999]: Starting GNOME Session Manager (session: gnome)...
Dec 11 14:50:15 ip-172-31-30-164 gnome-session[14136]: gnome-session-binary[14136]: CRITICAL: We failed, but the fail whale is dead. Sorry....
Dec 11 14:50:15 ip-172-31-30-164 gnome-session-binary[14136]: CRITICAL: We failed, but the fail whale is dead. Sorry....
Dec 11 14:50:15 ip-172-31-30-164 systemd[13999]: gnome-session-manager@gnome.service: Main process exited, code=exited, status=1/FAILURE
Dec 11 14:50:15 ip-172-31-30-164 systemd[13999]: gnome-session-manager@gnome.service: Failed with result 'exit-code'.
Dec 11 14:50:15 ip-172-31-30-164 systemd[13999]: Stopped GNOME Session Manager (session: gnome).

I am still using Ubuntu 20.04 with the following commands (unchanged from before):

sudo apt update
sudo apt install xrdp tigervnc-standalone-server
sudo systemctl enable xrdp
sudo adduser sakaia
sudo gpasswd -a sakaia sudo

Attached: xsession-errors.txt, journalctl-S-3m.txt

matt335672 commented 3 years ago

Thanks for that.

Significant errors are:-

Dec 11 14:50:15 ip-172-31-30-164 gnome-session[14088]: gnome-session-check-accelerated: GL Helper exited with code 512
Dec 11 14:50:15 ip-172-31-30-164 gnome-session[14088]: gnome-session-check-accelerated: GLES Helper exited with code 512

Your GNOME session is unable to find the 3D extensions it needs to run.

I can see two possibilities here:-

  1. Your EC2 instance has an NVidia GPU driver installed. This can mess up the 3D acceleration used by TigerVNC, which is required by GNOME (see #1519). From your description above this seems unlikely; if this is the case, however, you will need to use a different desktop.
  2. Your software installation is incomplete and something is missing that allows the VNC server to provide the required extensions.
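
Two quick checks from an ssh session may help distinguish these (a sketch; nvidia-smi only exists if the NVidia driver stack is installed):

nvidia-smi                         # succeeds only if an NVidia driver is installed (possibility 1)
dpkg -l | grep -Ei 'libgl1|mesa'   # confirm the Mesa GL libraries are present (possibility 2)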

Can you get to the console of your EC2 VM? Do you have a graphical login working on that?

sakaia commented 3 years ago

Thanks. I avoided this problem by using Xorg (xorgxrdp) instead of Xvnc.

The instructions are as follows: https://www.yamamanx.com/ec2-ubuntu-desktop-rdp/
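
In short (see the linked page for the full steps), this means installing the xorgxrdp backend and selecting the "Xorg" session type on the xrdp login screen, roughly:

sudo apt -y install xorgxrdp
sudo systemctl restart xrdp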

sakaia commented 3 years ago

From your description above this seems unlikely. If this is the case however you will need to use a different desktop.

Thanks for your comments. I updated the packages and it now works successfully. I added the following command:

sudo apt upgrade
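
Putting the earlier commands and this upgrade together, the full sequence that works for me on Ubuntu 20.04 is roughly:

sudo apt update
sudo apt upgrade
sudo apt -y install xrdp tigervnc-standalone-server
sudo systemctl enable xrdp
sudo adduser sakaia
sudo gpasswd -a sakaia sudo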