uyuni-project / uyuni

Source code for Uyuni
https://www.uyuni-project.org/
GNU General Public License v2.0

After migration from "legacy" 2024.05 Uyuni server to Podman container on openSUSE Leap Micro, Uyuni/Spacewalk not functional on new containerized instance #8946

Open gabjef opened 1 week ago

gabjef commented 1 week ago

Problem description

After migration from the "legacy" 2024.05 Uyuni server to a Podman container on openSUSE Leap Micro, Uyuni/Spacewalk is not functional on the new containerized uyuni-server instance.

Steps to reproduce

  1. Run mgradm migrate podman uyuni-lab-linux-mgmt.lab.tierpoint.com
  2. Check container health/status on new containerized Uyuni instance
  3. Test Uyuni functionality of new containerized instance
  4. Status checks and tests show a number of runtime issues
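For reference, the checks in steps 2-4 boil down to the commands below (full output is in the "Useful logs" section):

uyuni-lab-micro-suse-migration:~ # mgradm status
uyuni-lab-micro-suse-migration:~ # mgradm inspect
uyuni-lab-micro-suse-migration:~ # podman ps --no-trunc
uyuni-lab-micro-suse-migration:~ # mgrctl term    # then spacecmd, ss and curl inside the container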

Uyuni version

--- System Info & Versions ---

Source Legacy System / openSUSE Leap 15.5 - uyuni-lab-linux-mgmt.lab.tierpoint.com

uyunisvc@uyuni-lab-linux-mgmt:~> hostname -f
uyuni-lab-linux-mgmt.lab.tierpoint.com

uyunisvc@uyuni-lab-linux-mgmt:~> distro-3.6
Name: openSUSE Leap 15.5
Version: 15.5

uyunisvc@uyuni-lab-linux-mgmt:~> uname -r
5.14.21-150500.55.62-default

uyunisvc@uyuni-lab-linux-mgmt:~> zypper info Uyuni-Server-release | grep Version
Version        : 2024.05-230900.217.1.uyuni3

Target Container System / openSUSE Leap Micro 5.5 - uyuni-lab-micro-suse-migration.lab.tierpoint.com

uyuni-lab-micro-suse-migration:~ # hostname
uyuni-lab-micro-suse-migration.lab.tierpoint.com

uyuni-lab-micro-suse-migration:~ # grep PRETTY_NAME /etc/os-release
PRETTY_NAME="openSUSE Leap Micro 5.5"

uyuni-lab-micro-suse-migration:~ # uname -r
5.14.21-150500.55.65-default

uyuni-lab-micro-suse-migration:~ # mgradm -v
mgradm version 0.1.9 (Master 612f54e)

uyuni-lab-micro-suse-migration:~ # mgrctl -v
mgrctl version 0.1.9 (Master 612f54e)

uyuni-lab-micro-suse-migration:~ # podman --version
podman version 4.8.3

Uyuni proxy version (if used)

N/A

Useful logs

--- Perform migration ---

uyuni-lab-micro-suse-migration:~ # mgradm migrate podman uyuni-lab-linux-mgmt.lab.tierpoint.com  2>&1 | tee migrate-log.out
9:36PM INF Welcome to mgradm
9:36PM INF Executing command: podman
9:36PM INF Ensure image registry.opensuse.org/uyuni/server:latest is available
9:36PM INF Cannot find RPM image for registry.opensuse.org/uyuni/server:latest
9:36PM INF Running podman pull registry.opensuse.org/uyuni/server:latest
Trying to pull registry.opensuse.org/uyuni/server:latest...
Getting image source signatures
Copying blob sha256:e96f257c1066f2df10c946eb56249b7e7a04dd40132f9bea140def5c922bf8cb
Copying blob sha256:aff043c4adfc72cae220e95d7885039db9370e4498456c62ff813b4d78947af4
Copying blob sha256:960c4c21cbe595956e003b7684141d2ee38ffcb8a69082a5c8120b3abe04158f
Copying config sha256:7ed13b33dc203a5bc706e695cfd8f8efe77f0f3453335e423933ed83ad6e0272
Writing manifest to image destination
7ed13b33dc203a5bc706e695cfd8f8efe77f0f3453335e423933ed83ad6e0272
9:45PM INF Migrating server
Stopping spacewalk service...
Shutting down spacewalk services...
Done.
Stopping posgresql service...
   .
   .
 <rsync output...truncated for readability>
   .
   .
Migrating auto-installable distributions...
Extracting time zone...
Extracting postgresql versions...
Altering configuration for domain resolution...
Altering configuration for container environment...
DONE
2:06AM INF Previous PostgreSQL is 14, new one is 16. Performing a DB version upgrade...
2:06AM INF Ensure image registry.opensuse.org/uyuni/server-migration-14-16:latest is available
2:06AM INF Cannot find RPM image for registry.opensuse.org/uyuni/server-migration-14-16:latest
2:06AM INF Running podman pull registry.opensuse.org/uyuni/server-migration-14-16:latest
Trying to pull registry.opensuse.org/uyuni/server-migration-14-16:latest...
Getting image source signatures
Copying blob sha256:aff043c4adfc72cae220e95d7885039db9370e4498456c62ff813b4d78947af4
Copying blob sha256:1d7dda58a23452ee32dbb6e12351a9cf7d5ca5fba565e49a30e4aaa586d4c127
Copying blob sha256:960c4c21cbe595956e003b7684141d2ee38ffcb8a69082a5c8120b3abe04158f
Copying config sha256:aaeb7268fb86a13bb0c41006ca26b6609a2a7e5d32916a0215be3ef75778f563
Writing manifest to image destination
aaeb7268fb86a13bb0c41006ca26b6609a2a7e5d32916a0215be3ef75778f563
2:08AM INF Using migration image registry.opensuse.org/uyuni/server-migration-14-16:latest
PostgreSQL version upgrade
Testing presence of postgresql16...
Testing presence of postgresql14...
Create a backup at /var/lib/pgsql/data-pg14...
Create new database directory...
Enforce key permission
Initialize new postgresql 16 database...
Running initdb using postgres user
Any suggested command from the console should be run using postgres user
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/pgsql/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

initdb: warning: enabling "trust" authentication for local connections
initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /var/lib/pgsql/data -l logfile start

Successfully initialized new postgresql 16 database.
Performing Consistency Checks
-----------------------------
Checking cluster versions                                     ok
Checking database user is the install user                    ok
Checking database connection settings                         ok
Checking for prepared transactions                            ok
Checking for system-defined composite types in user tables    ok
Checking for reg* data types in user tables                   ok
Checking for contrib/isn with bigint-passing mismatch         ok
Checking for incompatible "aclitem" data type in user tables  ok
Creating dump of global objects                               ok
Creating dump of database schemas                             ok
Checking for presence of required libraries                   ok
Checking database user is the install user                    ok
Checking for prepared transactions                            ok
Checking for new cluster tablespace directories               ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Setting locale and encoding for new cluster                   ok
Analyzing all rows in the new cluster                         ok
Freezing all rows in the new cluster                          ok
Deleting files from new pg_xact                               ok
Copying old pg_xact to new server                             ok
Setting oldest XID for new cluster                            ok
Setting next transaction ID and epoch for new cluster         ok
Deleting files from new pg_multixact/offsets                  ok
Copying old pg_multixact/offsets to new server                ok
Deleting files from new pg_multixact/members                  ok
Copying old pg_multixact/members to new server                ok
Setting next multixact ID and offset for new cluster          ok
Resetting WAL archives                                        ok
Setting frozenxid and minmxid counters in new cluster         ok
Restoring global objects in the new cluster                   ok
Restoring database schemas in the new cluster                 ok
Adding ".old" suffix to old global/pg_control                 ok

If you want to start the old cluster, you will need to remove
the ".old" suffix from /var/lib/pgsql/data-pg14/global/pg_control.old.
Because "link" mode was used, the old cluster cannot be safely
started once the new cluster has been started.
Linking user relation files                                   ok
Setting next OID for new cluster                              ok
Sync data directory to disk                                   ok
Creating script to delete old cluster                         ok
Checking for extension updates                                ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade.
Once you start the new server, consider running:
    /usr/lib/postgresql16/bin/vacuumdb --all --analyze-in-stages
Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh
DONE
Running smdba system-check autotuning...
INFO: max_connections should be at least 200
INFO: Database configuration has been changed.
INFO: Wrote new general configuration. Backup as /var/lib/pgsql/data/postgresql.2024-06-13-02-09-14.conf
INFO: Wrote new client auth configuration. Backup as /var/lib/pgsql/data/pg_hba.2024-06-13-02-09-14.conf
INFO: Configuration has been changed, but your database is right now offline.
Database is offline
System check finished
Starting Postgresql...
===============================================================================
!
! This shell operates within a container environment, meaning that not all
! modifications will be permanently saved in volumes.
!
! Please exercise caution when making changes, as some alterations may not
! persist beyond the current session.
!
===============================================================================
2024-06-13 02:09:15.124 UTC   [58]LOG:  redirecting log output to logging collector process
2024-06-13 02:09:15.124 UTC   [58]HINT:  Future log output will appear in directory "log".
Reindexing database. This may take a while, please do not cancel it!
REINDEX
Schema update...
report_db_host = localhost
Your database schema already matches the schema package version [susemanager-schema-5.0.7-230900.1.2.uyuni3].
Schema upgrade: [susemanager-schema-5.0.7-230900.1.2.uyuni3] -> [susemanager-schema-5.0.7-230900.1.2.uyuni3]
Your database schema already matches the schema package version [uyuni-reportdb-schema-5.0.5-230900.1.2.uyuni3].
Schema upgrade: [uyuni-reportdb-schema-5.0.5-230900.1.2.uyuni3] -> [uyuni-reportdb-schema-5.0.5-230900.1.2.uyuni3]
Updating auto-installable distributions...
SELECT 0
UPDATE 0
DROP TABLE
Schedule a system list update task...
INSERT 0 0
Stopping Postgresql...
===============================================================================
!
! This shell operates within a container environment, meaning that not all
! modifications will be permanently saved in volumes.
!
! Please exercise caution when making changes, as some alterations may not
! persist beyond the current session.
!
===============================================================================
DONE
2:23AM INF Setting up uyuni network
2:23AM INF Enabling system service
2:23AM INF Server migrated

--- Checking status with mgradm status ---

First, note the failed status check for uyuni-server-attestation.service:

uyuni-lab-micro-suse-migration:~ # mgradm status > /dev/null
Unit uyuni-server-attestation.service could not be found.
Error: failed to get status of the server service: exit status 4
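As far as we can tell, mgradm is looking for a host-side systemd unit here; if it helps, the uyuni-server units that do exist on the host can be listed with something like:

uyuni-lab-micro-suse-migration:~ # systemctl list-unit-files 'uyuni-server*'
uyuni-lab-micro-suse-migration:~ # systemctl status uyuni-server.service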

The rest of the services seem to be there:

uyuni-lab-micro-suse-migration:~ # mgradm status 2> /dev/null
11:06PM INF Welcome to mgradm
11:06PM INF Executing command: status
11:06PM INF ● mgr-check-payg.service - Check and install payg billing service.
     Loaded: loaded (/usr/lib/systemd/system/mgr-check-payg.service; static)
     Active: active (exited) since Thu 2024-06-13 23:32:21 EDT; 19h ago
    Process: 42 ExecStart=/usr/sbin/spacewalk-startup-helper check-billing-service (code=exited, status=0/SUCCESS)
   Main PID: 42 (code=exited, status=0/SUCCESS)

● uyuni-update-config.service - Uyuni update config
     Loaded: loaded (/usr/lib/systemd/system/uyuni-update-config.service; static)
     Active: active (exited) since Thu 2024-06-13 23:32:56 EDT; 19h ago
    Process: 373 ExecStart=/usr/sbin/uyuni-update-config (code=exited, status=0/SUCCESS)
   Main PID: 373 (code=exited, status=0/SUCCESS)

● uyuni-check-database.service - Uyuni check database
     Loaded: loaded (/usr/lib/systemd/system/uyuni-check-database.service; static)
     Active: active (exited) since Thu 2024-06-13 23:33:00 EDT; 19h ago
    Process: 394 ExecStart=/usr/sbin/spacewalk-startup-helper check-database (code=exited, status=0/SUCCESS)
   Main PID: 394 (code=exited, status=0/SUCCESS)

× tomcat.service - Apache Tomcat Web Application Container
     Loaded: loaded (/usr/lib/systemd/system/tomcat.service; enabled; vendor preset: disabled)
    Drop-In: /usr/lib/systemd/system/tomcat.service.d
             └─jmx.conf, override.conf
     Active: failed (Result: exit-code) since Thu 2024-06-13 23:33:00 EDT; 19h ago
    Process: 802 ExecStart=/usr/lib/tomcat/server start (code=exited, status=1/FAILURE)
   Main PID: 802 (code=exited, status=1/FAILURE)

● spacewalk-wait-for-tomcat.service - Spacewalk wait for tomcat
     Loaded: loaded (/usr/lib/systemd/system/spacewalk-wait-for-tomcat.service; static)
     Active: active (exited) since Thu 2024-06-13 23:33:34 EDT; 19h ago
    Process: 803 ExecStart=/usr/sbin/spacewalk-startup-helper wait-for-tomcat (code=exited, status=0/SUCCESS)
   Main PID: 803 (code=exited, status=0/SUCCESS)

● salt-master.service - The Salt Master Server
     Loaded: loaded (/usr/lib/systemd/system/salt-master.service; enabled; vendor preset: disabled)
    Drop-In: /usr/lib/systemd/system/salt-master.service.d
             └─override.conf
     Active: active (running) since Thu 2024-06-13 23:33:00 EDT; 19h ago
       Docs: man:salt-master(1)
             file:///usr/share/doc/salt/html/contents.html
             https://docs.saltproject.io/en/latest/contents.html
   Main PID: 785 (salt-master)
     CGroup: /machine.slice/libpod-967b82da9b6ae93e3fa104171f6294cb6555d2fbf7fc63ea6feaab68c3844938.scope/system.slice/salt-master.service
             ├─   785 /usr/bin/python3 /usr/bin/salt-master
             ├─  1101 /usr/bin/python3 /usr/bin/salt-master
             ├─  1102 /usr/bin/python3 /usr/bin/salt-master
             ├─  1106 /usr/bin/python3 /usr/bin/salt-master
             ├─  1111 /usr/bin/python3 /usr/bin/salt-master
             ├─  1113 /usr/bin/python3 /usr/bin/salt-master
             ├─  1114 /usr/bin/python3 /usr/bin/salt-master
             ├─  1115 /usr/bin/python3 /usr/bin/salt-master
             ├─  1116 /usr/bin/python3 /usr/bin/salt-master
             ├─  1117 /usr/bin/python3 /usr/bin/salt-master
             ├─  1123 /usr/bin/python3 /usr/bin/salt-master
             ├─  1128 /usr/bin/python3 /usr/bin/salt-master
             ├─  1129 /usr/bin/python3 /usr/bin/salt-master
             ├─  1130 /usr/bin/python3 /usr/bin/salt-master
             ├─  1132 /usr/bin/python3 /usr/bin/salt-master
             ├─ 23025 /usr/bin/python3 /usr/bin/salt-master
             └─ 24284 /usr/bin/python3 /usr/bin/salt-master

● salt-api.service - The Salt API
     Loaded: loaded (/usr/lib/systemd/system/salt-api.service; enabled; vendor preset: disabled)
    Drop-In: /usr/lib/systemd/system/salt-api.service.d
             └─override.conf
     Active: active (running) since Thu 2024-06-13 23:33:00 EDT; 19h ago
       Docs: man:salt-api(1)
             file:///usr/share/doc/salt/html/contents.html
             https://docs.saltproject.io/en/latest/contents.html
   Main PID: 784 (salt-api)
     CGroup: /machine.slice/libpod-967b82da9b6ae93e3fa104171f6294cb6555d2fbf7fc63ea6feaab68c3844938.scope/system.slice/salt-api.service
             ├─ 784 /usr/bin/python3 /usr/bin/salt-api
             └─ 877 /usr/bin/python3 /usr/bin/salt-api

● spacewalk-wait-for-salt.service - Make sure that salt is started before httpd
     Loaded: loaded (/usr/lib/systemd/system/spacewalk-wait-for-salt.service; static)
     Active: active (exited) since Thu 2024-06-13 23:33:00 EDT; 19h ago
    Process: 787 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
   Main PID: 787 (code=exited, status=0/SUCCESS)

● apache2.service - The Apache Webserver
     Loaded: loaded (/usr/lib/systemd/system/apache2.service; enabled; vendor preset: disabled)
    Drop-In: /usr/lib/systemd/system/apache2.service.d
             └─override.conf
     Active: active (running) since Thu 2024-06-13 23:33:34 EDT; 19h ago
    Process: 6336 ExecReload=/usr/sbin/start_apache2 -DSYSTEMD -DFOREGROUND -k graceful (code=exited, status=0/SUCCESS)
   Main PID: 2242 (httpd-prefork)
     Status: "Configuration loaded."
     CGroup: /machine.slice/libpod-967b82da9b6ae93e3fa104171f6294cb6555d2fbf7fc63ea6feaab68c3844938.scope/system.slice/apache2.service
             ├─ 2242 /usr/sbin/httpd-prefork -DSYSCONFIG -DSSL -DISSUSE -C "PidFile /run/httpd.pid" -C "Include /etc/apache2/sysconfig.d//loadmodule.conf" -C "Include /etc/apache2/sysconfig.d//global.conf" -f /etc/apache2/httpd.conf -c "Include /etc/apache2/sysconfig.d//include.conf" -DSYSTEMD -DFOREGROUND -k start
             ├─ 6304 /usr/sbin/httpd-prefork -DSYSCONFIG -DSSL -DISSUSE -C "PidFile /run/httpd.pid" -C "Include /etc/apache2/sysconfig.d//loadmodule.conf" -C "Include /etc/apache2/sysconfig.d//global.conf" -f /etc/apache2/httpd.conf -c "Include /etc/apache2/sysconfig.d//include.conf" -DSYSTEMD -DFOREGROUND -k start
             ├─ 6356 /usr/sbin/httpd-prefork -DSYSCONFIG -DSSL -DISSUSE -C "PidFile /run/httpd.pid" -C "Include /etc/apache2/sysconfig.d//loadmodule.conf" -C "Include /etc/apache2/sysconfig.d//global.conf" -f /etc/apache2/httpd.conf -c "Include /etc/apache2/sysconfig.d//include.conf" -DSYSTEMD -DFOREGROUND -k start
             ├─ 6357 /usr/sbin/httpd-prefork -DSYSCONFIG -DSSL -DISSUSE -C "PidFile /run/httpd.pid" -C "Include /etc/apache2/sysconfig.d//loadmodule.conf" -C "Include /etc/apache2/sysconfig.d//global.conf" -f /etc/apache2/httpd.conf -c "Include /etc/apache2/sysconfig.d//include.conf" -DSYSTEMD -DFOREGROUND -k start
             ├─ 6358 /usr/sbin/httpd-prefork -DSYSCONFIG -DSSL -DISSUSE -C "PidFile /run/httpd.pid" -C "Include /etc/apache2/sysconfig.d//loadmodule.conf" -C "Include /etc/apache2/sysconfig.d//global.conf" -f /etc/apache2/httpd.conf -c "Include /etc/apache2/sysconfig.d//include.conf" -DSYSTEMD -DFOREGROUND -k start
             ├─ 6359 /usr/sbin/httpd-prefork -DSYSCONFIG -DSSL -DISSUSE -C "PidFile /run/httpd.pid" -C "Include /etc/apache2/sysconfig.d//loadmodule.conf" -C "Include /etc/apache2/sysconfig.d//global.conf" -f /etc/apache2/httpd.conf -c "Include /etc/apache2/sysconfig.d//include.conf" -DSYSTEMD -DFOREGROUND -k start
             ├─ 6360 /usr/sbin/httpd-prefork -DSYSCONFIG -DSSL -DISSUSE -C "PidFile /run/httpd.pid" -C "Include /etc/apache2/sysconfig.d//loadmodule.conf" -C "Include /etc/apache2/sysconfig.d//global.conf" -f /etc/apache2/httpd.conf -c "Include /etc/apache2/sysconfig.d//include.conf" -DSYSTEMD -DFOREGROUND -k start
             └─ 6380 /usr/sbin/httpd-prefork -DSYSCONFIG -DSSL -DISSUSE -C "PidFile /run/httpd.pid" -C "Include /etc/apache2/sysconfig.d//loadmodule.conf" -C "Include /etc/apache2/sysconfig.d//global.conf" -f /etc/apache2/httpd.conf -c "Include /etc/apache2/sysconfig.d//include.conf" -DSYSTEMD -DFOREGROUND -k start

● rhn-search.service - Spacewalk search engine
     Loaded: loaded (/usr/lib/systemd/system/rhn-search.service; enabled; vendor preset: disabled)
    Drop-In: /usr/lib/systemd/system/rhn-search.service.d
             └─override.conf
     Active: active (running) since Thu 2024-06-13 23:33:00 EDT; 19h ago
   Main PID: 783 (rhn-search)
     CGroup: /machine.slice/libpod-967b82da9b6ae93e3fa104171f6294cb6555d2fbf7fc63ea6feaab68c3844938.scope/system.slice/rhn-search.service
             ├─ 783 /bin/sh /usr/sbin/rhn-search
             └─ 805 /usr/bin/java -Djava.library.path=/usr/lib:/usr/lib64:/usr/lib/gcj/postgresql-jdbc:/usr/lib64/gcj/postgresql-jdbc -classpath "/usr/share/rhn/search/lib/*:/usr/share/rhn/classes:/usr/share/rhn/lib/spacewalk-asm.jar:/usr/share/rhn/lib/rhn.jar:/usr/share/rhn/lib/java-branding.jar" -Dfile.encoding=UTF-8 -Xms32m -Xmx512m -Dlog4j2.configurationFile=/usr/share/rhn/search/classes/log4j2.xml com.redhat.satellite.search.Main

● cobblerd.service - Cobbler Helper Daemon
     Loaded: loaded (/usr/lib/systemd/system/cobblerd.service; enabled; vendor preset: disabled)
     Active: active (running) since Thu 2024-06-13 23:33:02 EDT; 19h ago
   Main PID: 781 (cobblerd)
     CGroup: /machine.slice/libpod-967b82da9b6ae93e3fa104171f6294cb6555d2fbf7fc63ea6feaab68c3844938.scope/system.slice/cobblerd.service
             └─ 781 /usr/bin/python3 -s /usr/bin/cobblerd -F

● taskomatic.service - Taskomatic
     Loaded: loaded (/usr/lib/systemd/system/taskomatic.service; enabled; vendor preset: disabled)
    Drop-In: /usr/lib/systemd/system/taskomatic.service.d
             └─jmx.conf, override.conf
     Active: active (running) since Thu 2024-06-13 23:33:34 EDT; 19h ago
   Main PID: 2248 (taskomatic)
     CGroup: /machine.slice/libpod-967b82da9b6ae93e3fa104171f6294cb6555d2fbf7fc63ea6feaab68c3844938.scope/system.slice/taskomatic.service
             ├─ 2248 /bin/sh /usr/sbin/taskomatic
             └─ 2260 /usr/bin/java -Djava.library.path=/usr/lib:/usr/lib64 -classpath "/usr/share/rhn/classes:/usr/share/rhn/lib/spacewalk-asm.jar:/usr/share/rhn/lib/rhn.jar:/usr/share/rhn/lib/java-branding.jar:/usr/share/spacewalk/taskomatic/*" -Dibm.dst.compatibility=true -Dfile.encoding=UTF-8 -Xms256m -Xmx4096m --add-exports java.annotation/javax.annotation.security=ALL-UNNAMED --add-opens java.annotation/javax.annotation.security=ALL-UNNAMED -javaagent:/usr/share/java/jmx_prometheus_javaagent.jar=5557:/etc/prometheus-jmx_exporter/taskomatic/java_agent.yml com.redhat.rhn.taskomatic.core.TaskomaticDaemon

● spacewalk-wait-for-taskomatic.service - Spacewalk wait for taskomatic
     Loaded: loaded (/usr/lib/systemd/system/spacewalk-wait-for-taskomatic.service; static)
     Active: active (exited) since Thu 2024-06-13 23:33:40 EDT; 19h ago
    Process: 2250 ExecStart=/usr/sbin/spacewalk-startup-helper wait-for-taskomatic (code=exited, status=0/SUCCESS)
   Main PID: 2250 (code=exited, status=0/SUCCESS)

● salt-secrets-config.service - Configures secrets between salt-master and other services
     Loaded: loaded (/usr/lib/systemd/system/salt-secrets-config.service; static)
    Drop-In: /usr/lib/systemd/system/salt-secrets-config.service.d
             └─override.conf
     Active: active (exited) since Thu 2024-06-13 23:32:22 EDT; 19h ago
    Process: 43 ExecStart=/usr/bin/salt-secrets-config.py (code=exited, status=0/SUCCESS)
   Main PID: 43 (code=exited, status=0/SUCCESS)

● mgr-websockify.service - TCP to WebSocket proxy
     Loaded: loaded (/usr/lib/systemd/system/mgr-websockify.service; static)
     Active: active (running) since Thu 2024-06-13 23:33:00 EDT; 19h ago
    Process: 782 ExecStartPre=/usr/bin/sh -c grep secret_key /etc/rhn/rhn.conf | tr -d ' ' | cut -f2 -d '=' | perl -ne 's/([0-9a-f]{2})/print chr hex $1/gie' > /etc/rhn/websockify.key (code=exited, status=0/SUCCESS)
   Main PID: 804 (websockify)
     CGroup: /machine.slice/libpod-967b82da9b6ae93e3fa104171f6294cb6555d2fbf7fc63ea6feaab68c3844938.scope/system.slice/mgr-websockify.service
             └─ 804 /usr/bin/python3 /usr/bin/websockify --token-plugin JWTTokenApi --token-source /etc/rhn/websockify.key localhost:8050

● cobbler-refresh-mkloaders.service - Refresh Cobbler bootloaders
     Loaded: loaded (/usr/lib/systemd/system/cobbler-refresh-mkloaders.service; static)
    Drop-In: /usr/lib/systemd/system/cobbler-refresh-mkloaders.service.d
             └─override.conf
     Active: active (exited) since Thu 2024-06-13 23:33:37 EDT; 19h ago
    Process: 2249 ExecStart=/usr/bin/cobbler mkloaders (code=exited, status=0/SUCCESS)
   Main PID: 2249 (code=exited, status=0/SUCCESS)
11:06PM INF ● spacewalk.target - Spacewalk
     Loaded: loaded (/usr/lib/systemd/system/spacewalk.target; enabled; vendor preset: disabled)
     Active: active since Thu 2024-06-13 23:33:40 EDT; 19h ago

--- mgradm inspect fails ---

uyuni-lab-micro-suse-migration:~ # mgradm inspect
8:07PM INF Welcome to mgradm
8:07PM INF Executing command: inspect
8:08PM INF Ensure image registry.opensuse.org/uyuni/server:latest is available
Error: inspect command failed: cannot inspect data. cannot read config: While parsing config: line `uyuni-lab-linux-mgmt.lab.tierpoint.com` doesn't match format
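We have not tracked down which config file the parser is reading; a crude way to locate files containing the bare source FQDN line (the paths below are guesses on our part, not something the docs prescribe):

uyuni-lab-micro-suse-migration:~ # grep -rl 'uyuni-lab-linux-mgmt.lab.tierpoint.com' /etc/uyuni 2>/dev/null
uyuni-lab-micro-suse-migration:~ # mgrctl exec -- grep -rl 'uyuni-lab-linux-mgmt.lab.tierpoint.com' /etc/rhn /etc/sysconfig 2>/dev/null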

--- podman ps shows container as unhealthy ---

uyuni-lab-micro-suse-migration:~ # podman ps --no-trunc
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
967b82da9b6ae93e3fa104171f6294cb6555d2fbf7fc63ea6feaab68c3844938  registry.opensuse.org/uyuni/server:latest  /usr/lib/systemd/systemd  20 hours ago  Up 20 hours (unhealthy)  0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:4505-4506->4505-4506/tcp, 0.0.0.0:5432->5432/tcp, 0.0.0.0:5556-5557->5556-5557/tcp, 0.0.0.0:9100->9100/tcp, 0.0.0.0:9187->9187/tcp, 0.0.0.0:9800->9800/tcp, 0.0.0.0:25151->25151/tcp, 0.0.0.0:69->69/udp  uyuni-server
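To get more detail than the aggregate (unhealthy) flag, the container healthcheck can also be run by hand, e.g.:

uyuni-lab-micro-suse-migration:~ # podman healthcheck run uyuni-server; echo $?
uyuni-lab-micro-suse-migration:~ # podman inspect uyuni-server | grep -i -A5 health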

--- Shell into the container and check for Spacewalk & port 8009 there ---

uyuni-lab-micro-suse-migration:~ # mgrctl term
===============================================================================
!
! This shell operates within a container environment, meaning that not all
! modifications will be permanently saved in volumes.
!
! Please exercise caution when making changes, as some alterations may not
! persist beyond the current session.
!
===============================================================================
uyuni-server:/ # hostname -f
uyuni-server.mgr.internal

Try spacecmd first

uyuni-server:/ # spacecmd -d
DEBUG: command=, return_value=False
DEBUG: Read configuration from /root/.spacecmd/config
DEBUG: Loading configuration section [spacecmd]
DEBUG: Current Configuration: {'server': 'localhost', 'nossl': True, 'username': 'uyunisvc', 'password': '************'}
Welcome to spacecmd, a command-line interface to Spacewalk.

Type: 'help' for a list of commands
      'help <cmd>' for command-specific help
      'quit' to quit

DEBUG: Configuration section [localhost] does not exist
DEBUG: Connecting to http://localhost/rpc/api
ERROR: <class 'xmlrpc.client.ProtocolError'>
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/spacecmd/misc.py", line 295, in do_login
    self.api_version = self.client.api.getVersion()
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1112, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1452, in __request
    verbose=self.__verbose
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1154, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1187, in single_request
    dict(resp.getheaders())
xmlrpc.client.ProtocolError: <ProtocolError for localhost/rpc/api: 503 Service Unavailable>
ERROR: Failed to connect to http://localhost/rpc/api
DEBUG: Error while connecting to the server http://localhost/rpc/api: <ProtocolError for localhost/rpc/api: 503 Service Unavailable>

Nothing is listening on port 8009 (where Apache expects to reach Tomcat), which matches the 503 above:

uyuni-server:/ # ss -antp|grep 8009
uyuni-server:/ #
uyuni-server:/ # curl -skS https://localhost:8009
curl: (7) Failed to connect to localhost port 8009 after 0 ms: Couldn't connect to server

--- Database status (seems okay) ---

uyuni-server:/ # journalctl -f -u uyuni-check-database.service
Jun 13 23:32:56 uyuni-server.mgr.internal systemd[1]: Starting Uyuni check database...
Jun 13 23:32:57 uyuni-server.mgr.internal spacewalk-startup-helper[488]: report_db_host = localhost
Jun 13 23:32:58 uyuni-server.mgr.internal spacewalk-startup-helper[540]: Your database schema already matches the schema package version [susemanager-schema-5.0.7-230900.1.2.uyuni3].
Jun 13 23:32:58 uyuni-server.mgr.internal spacewalk-startup-helper[540]: Schema upgrade: [susemanager-schema-5.0.7-230900.1.2.uyuni3] -> [susemanager-schema-5.0.7-230900.1.2.uyuni3]
Jun 13 23:32:59 uyuni-server.mgr.internal spacewalk-startup-helper[596]: Your database schema already matches the schema package version [uyuni-reportdb-schema-5.0.5-230900.1.2.uyuni3].
Jun 13 23:32:59 uyuni-server.mgr.internal spacewalk-startup-helper[596]: Schema upgrade: [uyuni-reportdb-schema-5.0.5-230900.1.2.uyuni3] -> [uyuni-reportdb-schema-5.0.5-230900.1.2.uyuni3]
Jun 13 23:33:00 uyuni-server.mgr.internal systemd[1]: Finished Uyuni check database.
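If it helps, the migrated database can also be poked directly from the host with something like:

uyuni-lab-micro-suse-migration:~ # mgrctl exec -- su - postgres -c "psql -c 'SELECT version();'"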

Additional information

TBD

gabjef commented 1 week ago

Please let me know what else you might need to help diagnose this issue. I can provide the entire migration log if needed, but it's over 1 million lines.

rjmateus commented 1 week ago

The tomcat service looks to have failed to start. You should enter the container terminal (mgrctl term), then check the Tomcat logs the same way as on the RPM-based version (/var/log/tomcat/), and also the service logs.
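In practice that is something like the following inside the container (log paths as on the RPM-based install):

uyuni-lab-micro-suse-migration:~ # mgrctl term
uyuni-server:/ # journalctl -u tomcat.service --no-pager | tail -n 50
uyuni-server:/ # ls -l /var/log/tomcat/
uyuni-server:/ # tail -n 50 /var/log/tomcat/catalina.*.log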

gabjef commented 1 week ago

@rjmateus Thanks for the follow-up. Yes, we figured out that Tomcat is not starting. The issue is that JAVA_OPTS, found in both /etc/tomcat/conf.d/remote_debug.conf and /etc/tomcat/tomcat.conf, uses options not supported by the JDK version in the container:

uyuni-server:~ # systemctl status tomcat.service
× tomcat.service - Apache Tomcat Web Application Container
     Loaded: loaded (/usr/lib/systemd/system/tomcat.service; enabled; vendor preset: disabled)
    Drop-In: /usr/lib/systemd/system/tomcat.service.d
             └─jmx.conf, override.conf
     Active: failed (Result: exit-code) since Thu 2024-06-20 23:32:59 EDT; 3h 36min ago
    Process: 810 ExecStart=/usr/lib/tomcat/server start (code=exited, status=1/FAILURE)
   Main PID: 810 (code=exited, status=1/FAILURE)

Jun 20 23:32:59 uyuni-server.mgr.internal server[810]: classpath used: /usr/share/tomcat/bin/bootstrap.jar:/usr/share/tomcat/bin/tomcat-juli.jar:/usr/lib64/java/commons->
Jun 20 23:32:59 uyuni-server.mgr.internal server[810]: main class used: org.apache.catalina.startup.Bootstrap
Jun 20 23:32:59 uyuni-server.mgr.internal server[810]: flags used: -ea -Xms256m -Xmx1G -Djava.awt.headless=true -Dorg.xml.sax.driver=com.redhat.rhn.frontend.xmlrpc.util.>
Jun 20 23:32:59 uyuni-server.mgr.internal server[810]: options used: -Dcatalina.base=/usr/share/tomcat -Dcatalina.home=/usr/share/tomcat -Djava.endorsed.dirs= -Djava.io.>
Jun 20 23:32:59 uyuni-server.mgr.internal server[810]: arguments used: start
Jun 20 23:32:59 uyuni-server.mgr.internal server[810]: Unrecognized VM option 'UseConcMarkSweepGC'
Jun 20 23:32:59 uyuni-server.mgr.internal server[810]: Error: Could not create the Java Virtual Machine.
Jun 20 23:32:59 uyuni-server.mgr.internal server[810]: Error: A fatal exception has occurred. Program will exit.
Jun 20 23:32:59 uyuni-server.mgr.internal systemd[1]: tomcat.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 23:32:59 uyuni-server.mgr.internal systemd[1]: tomcat.service: Failed with result 'exit-code'.

UseConcMarkSweepGC is not supported by the JDK 17 used in the container (17.0.11); the CMS collector was removed in JDK 14. Note that the JDK version on the source Uyuni server is 11.0.23, which still accepts the option.
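The failure is easy to reproduce in isolation against the container's JDK:

uyuni-server:/ # java -version
uyuni-server:/ # java -XX:-UseConcMarkSweepGC -version

The second command fails with the same "Unrecognized VM option 'UseConcMarkSweepGC'" error that tomcat logs above.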

Actually, we had to remove two additional options before Tomcat would finally start:

  1. -XX:MaxNewSize=256
  2. --add-modules java.annotation,com.sun.xml.bind

Here is the 'default' JAVA_OPTS causing Tomcat to fail:

JAVA_OPTS="-ea -Xms256m -Xmx1G -Djava.awt.headless=true -Dorg.xml.sax.driver=com.redhat.rhn.frontend.xmlrpc.util.RhnSAXParser -Dorg.apache.tomcat.util.http.Parameters.MAX_COUNT=1024 -XX:MaxNewSize=256 -XX:-UseConcMarkSweepGC -Dnet.sf.ehcache.skipUpdateCheck=true --add-exports java.annotation/javax.annotation.security=ALL-UNNAMED --add-opens java.annotation/javax.annotation.security=ALL-UNNAMED  --add-modules java.annotation,com.sun.xml.bind"

This trimmed-down JAVA_OPTS worked for us:

JAVA_OPTS="-ea -Xms256m -Xmx1G -Djava.awt.headless=true -Dorg.xml.sax.driver=com.redhat.rhn.frontend.xmlrpc.util.RhnSAXParser -Dorg.apache.tomcat.util.http.Parameters.MAX_COUNT=1024 -Dnet.sf.ehcache.skipUpdateCheck=true --add-exports java.annotation/javax.annotation.security=ALL-UNNAMED --add-opens java.annotation/javax.annotation.security=ALL-UNNAMED"

So we were able to get Spacewalk/Uyuni functionality up and running using the trimmed-down JAVA_OPTS.
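Concretely, the edit amounts to something like the following (illustrative sed expressions, not exactly what we typed; editing the two files by hand works just as well, and we are assuming /etc/tomcat is backed by a persistent volume so the change survives container restarts):

uyuni-lab-micro-suse-migration:~ # mgrctl term
uyuni-server:/ # cp /etc/tomcat/tomcat.conf /etc/tomcat/tomcat.conf.orig
uyuni-server:/ # sed -i -e 's/ -XX:MaxNewSize=256//' \
                        -e 's/ -XX:-UseConcMarkSweepGC//' \
                        -e 's/ --add-modules java.annotation,com.sun.xml.bind//' \
                        /etc/tomcat/tomcat.conf /etc/tomcat/conf.d/remote_debug.conf
uyuni-server:/ # systemctl restart tomcat.service
uyuni-server:/ # systemctl status tomcat.service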

Note that the following issues still exist:

  1. uyuni-server-attestation.service is still not found by mgradm status
  2. mgradm inspect is still failing