containers / qm

QM is a containerized environment for running functional safety QM (Quality Management) software.
https://github.com/containers/qm
GNU General Public License v2.0

set-ffi-env-e2e aborts when executing tests manually with tmt #521

Closed: pengshanyu closed this issue 1 week ago

pengshanyu commented 2 weeks ago

When running the FFI tests against a local VM (provisioned via the c9s cloud image), the set-ffi-env-e2e script runs into an error:

 + '[' -z 'Starting setup' ']'
        + BLUE='\033[94m'
        + ENDCOLOR='\033[0m'
        + echo -e '[ \033[94mINFO\033[0m  ] Starting setup'
        + info_message ==============================
        + '[' -z ============================== ']'
        + BLUE='\033[94m'
        + ENDCOLOR='\033[0m'
        + echo -e '[ \033[94mINFO\033[0m  ] =============================='
        + '[' 0 -ne 0 ']'
        + echo
        + info_message 'Checking if QM already installed'
        + '[' -z 'Checking if QM already installed' ']'
        + BLUE='\033[94m'
        + ENDCOLOR='\033[0m'
        + echo -e '[ \033[94mINFO\033[0m  ] Checking if QM already installed'
        + info_message ==============================
        + '[' -z ============================== ']'
        + BLUE='\033[94m'
        + ENDCOLOR='\033[0m'
        + echo -e '[ \033[94mINFO\033[0m  ] =============================='
        ++ systemctl is-enabled qm
        + QM_STATUS='Failed to get unit file state for qm.service: No such file or directory' 
Yarboa commented 2 weeks ago

Thanks @pengshanyu

We need to add an rpm -q qm verification before the systemd checks.

dougsland commented 2 weeks ago

@Yarboa we can work with @pengshanyu on other issues which involve more complex scenarios, as she is already onboard the project. Let's keep onboarding nsednev. @nsednev, could you please investigate?

Yarboa commented 2 weeks ago

@nsednev please check this fix; if you want to take it, assign yourself to the issue.

@@ -276,8 +276,9 @@ fi
 echo
 info_message "Checking if QM already installed"
 info_message "=============================="
+QM_INST="$(rpm -qa qm)"
 QM_STATUS="$(systemctl is-enabled qm 2>&1)"
-if [ "$QM_STATUS" == "generated" ]; then
+if [[ -n "$QM_INST" && "$QM_STATUS" == "generated" ]]; then
    if [ "$(systemctl is-active qm)" == "active" ]; then
        info_message "QM Enabled and Active"
        info_message "=============================="
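One detail worth double-checking in a fix along these lines: rpm -q prints "package <name> is not installed" for an absent package, so its stdout is non-empty either way, while rpm -qa <name> prints nothing when the package is absent; the exit status of rpm -q is the most robust signal of all. The snippet below is an illustrative sketch using a mock rpm (so it runs anywhere), not the project's code:

```shell
# Mock rpm mimicking the real tool's behavior for a MISSING package
# (hypothetical stand-in so this snippet runs without rpm installed):
#   rpm -q <name>   -> prints "package <name> is not installed", exits 1
#   rpm -qa <name>  -> prints nothing, exits 0
rpm() {
    case "$1" in
        -q)  echo "package $2 is not installed"; return 1 ;;
        -qa) return 0 ;;
    esac
}

QM_INST_Q="$(rpm -q qm)"     # non-empty even though qm is missing!
QM_INST_QA="$(rpm -qa qm)"   # empty, as the -n test expects

[ -n "$QM_INST_Q" ]  && echo "-q output is non-empty for a missing package"
[ -z "$QM_INST_QA" ] && echo "-qa output is empty for a missing package"

# Exit-status check, independent of output formatting:
if ! rpm -q qm > /dev/null 2>&1; then
    echo "qm is not installed"
fi
```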
dougsland commented 2 weeks ago

@nsednev, here is another suggestion (never tested); however, the less complicated the code, the better for us to maintain and the easier for others to join us.

Why? It's easier to maintain when the logic is kept in a single place, and easier to keep our brains "safe".

@Yarboa Is this what you shared yesterday?

#!/bin/bash

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Function to check QM package and service status
check_qm_status() {
    # Check if the 'qm' package is installed via RPM
    if ! rpm -q qm > /dev/null 2>&1; then
        info_message "QM package is not installed."
        info_message "=============================="
        return 1
    fi

    # Check the status of the 'qm' service
    local qm_status
    qm_status="$(systemctl is-enabled qm 2>&1)"
    if [ "$qm_status" == "generated" ]; then
        if [ "$(systemctl is-active qm)" == "active" ]; then
            info_message "QM Enabled and Active"
            info_message "=============================="
            return 0
        fi
        if [ -d /var/qm ] && [ -d /etc/qm ]; then
            info_message "QM Enabled and not Active"
            info_message "=============================="
            return 1
        fi
    fi

    # Check if the system is booted with OSTree
    if stat /run/ostree-booted > /dev/null 2>&1; then
        info_message "Warning: script cannot run on ostree image"
        info_message "=============================="
        return 0
    fi

    # If none of the above conditions were met
    info_message "QM service status is unclear."
    info_message "=============================="
    return 1
}

Calling it in the set-ffi-env-e2e:

check_qm_status
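For reference, the guard-clause contract above (return 0 = QM ready, return 1 = setup still needed) can be exercised stand-alone with stubs. Everything suffixed _stub below is a hypothetical stand-in, since the real info_message comes from tests/e2e/lib/utils:

```shell
# Stub info_message matching the script's output format, so the snippet
# is self-contained (the real one is sourced from tests/e2e/lib/utils).
info_message() { echo "[ INFO  ] $1"; }

# Stand-in for "rpm -q qm"; pretend the package is missing.
rpm_q_qm_stub() { return 1; }

# Stub exercising just the first guard clause of check_qm_status:
# 0 = QM installed and active, 1 = setup work still needed.
check_qm_status_stub() {
    if ! rpm_q_qm_stub; then
        info_message "QM package is not installed."
        return 1
    fi
    return 0
}

if check_qm_status_stub; then
    RESULT="skip setup"
else
    RESULT="run setup"
fi
echo "$RESULT"
```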
Yarboa commented 2 weeks ago

@dougsland sure. Once it is working it will be rearranged as you suggest :+1:

nsednev commented 2 weeks ago

This might be related to this issue ("QM_STATUS='Failed to get unit file state for qm.service: No such file or directory'"): we're now receiving some errors from the TC:

    script:
        cd tests/e2e
        ./set-ffi-env-e2e "${FFI_SETUP_OPTIONS}"
    fail: Command '/var/ARTIFACTS/work-ffiiaeheny1/plans/e2e/ffi/tree/tmt-prepare-wrapper.sh-Set-QM-env-default-0' returned 1.
finish

        summary: 0 tasks completed

plan failed

The exception was caused by 1 earlier exceptions

Cause number 1:

prepare step failed

The exception was caused by 1 earlier exceptions

Cause number 1:

    Command '/var/ARTIFACTS/work-ffiiaeheny1/plans/e2e/ffi/tree/tmt-prepare-wrapper.sh-Set-QM-env-default-0' returned 1.

    stdout (5 lines)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    [ INFO  ] Starting setup
    [ INFO  ] ==============================

    [ INFO  ] Checking if QM already installed
    [ INFO  ] ==============================
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    stderr (77 lines)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

. . .

Yarboa commented 2 weeks ago

@nsednev as mentioned in Slack: for the manual c9s run (not in Packit) there is no qm installed at all; we just need to add this check. See the +/- signs relative to main:

https://github.com/containers/qm/issues/521#issuecomment-2315238908

nsednev commented 1 week ago

@Yarboa

@nsednev please check this fix, if you want to take it, assign your self to issue

@@ -276,8 +276,9 @@ fi
 echo
 info_message "Checking if QM already installed"
 info_message "=============================="
+QM_INST="$(rpm -qa qm)"
 QM_STATUS="$(systemctl is-enabled qm 2>&1)"
-if [ "$QM_STATUS" == "generated" ]; then
+if [[ -n "$QM_INST" && "$QM_STATUS" == "generated" ]]; then
    if [ "$(systemctl is-active qm)" == "active" ]; then
        info_message "QM Enabled and Active"
        info_message "=============================="

After your suggested changes I see the info messages and that's it:

    info_message "QM Enabled and Active"
    info_message "=============================="

But in https://github.com/containers/qm/blob/3000572a0f2499f426338b35ed025dd8930b29c1/tests/e2e/set-ffi-env-e2e#L266 I don't see the same code; I see:

Restart QM after mount /var on separate partition

   if grep -qi "${QC_SOC}" "${SOC_DISTRO_FILE}"; then
      systemctl restart qm
   fi

And only after that does the info message appear.

So my question is about that "Restart QM after mount /var on separate partition" part: in your #521 (comment) it is missing. Don't we want to preserve it?

nsednev commented 1 week ago
    [ INFO  ] Starting setup
    [ INFO  ] ==============================

    [ INFO  ] Check if qm requires additional partition
    [ INFO  ] ==============================

    [ INFO  ] Checking if QM already installed
    [ INFO  ] ==============================
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    stderr (103 lines)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ++ date +%s
    + START_TIME=1725291227
    +++ dirname -- ./set-ffi-env-e2e
    ++ cd -- .
    ++ pwd
    + SCRIPT_DIR=/var/ARTIFACTS/work-tier-0o98m9ja7/plans/e2e/tier-0/tree/tests/e2e
    + source /var/ARTIFACTS/work-tier-0o98m9ja7/plans/e2e/tier-0/tree/tests/e2e/lib/utils
    + source /var/ARTIFACTS/work-tier-0o98m9ja7/plans/e2e/tier-0/tree/tests/e2e/lib/container
    + source /var/ARTIFACTS/work-tier-0o98m9ja7/plans/e2e/tier-0/tree/tests/e2e/lib/systemd
    + source /var/ARTIFACTS/work-tier-0o98m9ja7/plans/e2e/tier-0/tree/tests/e2e/lib/tests
    ++ NODES_FOR_TESTING_ARR='control qm-node1'
    ++ readarray -d ' ' -t NODES_FOR_TESTING
    ++ CONTROL_CONTAINER_NAME=control
    ++ WAIT_BLUECHI_AGENT_CONNECT=5
    + source /var/ARTIFACTS/work-tier-0o98m9ja7/plans/e2e/tier-0/tree/tests/e2e/lib/diskutils
    + export CONFIG_NODE_AGENT_PATH=/etc/bluechi/agent.conf.d/agent.conf
    + CONFIG_NODE_AGENT_PATH=/etc/bluechi/agent.conf.d/agent.conf
    + export REGISTRY_UBI8_MINIMAL=registry.access.redhat.com/ubi8/ubi-minimal
    + REGISTRY_UBI8_MINIMAL=registry.access.redhat.com/ubi8/ubi-minimal
    + export WAIT_BLUECHI_SERVER_BE_READY_IN_SEC=5
    + WAIT_BLUECHI_SERVER_BE_READY_IN_SEC=5
    + export CONTROL_CONTAINER_NAME=control
    + CONTROL_CONTAINER_NAME=control
    + NODES_FOR_TESTING=('control' 'node1')
    + export NODES_FOR_TESTING
    + export IP_CONTROL_MACHINE=
    + IP_CONTROL_MACHINE=
    + export CONTAINER_CAP_ADD=
    + CONTAINER_CAP_ADD=
    + export ARCH=
    + ARCH=
    + export DISK=
    + DISK=
    + export PART_ID=
    + PART_ID=
    + export QC_SOC=SA8775P
    + QC_SOC=SA8775P
    + export SOC_DISTRO_FILE=/sys/devices/soc0/machine
    + SOC_DISTRO_FILE=/sys/devices/soc0/machine
    + export QC_SOC_DISK=sde
    + QC_SOC_DISK=sde
    + export BUILD_BLUECHI_FROM_GH_URL=
    + BUILD_BLUECHI_FROM_GH_URL=
    + export QM_GH_URL=
    + QM_GH_URL=
    + export BRANCH_QM=
    + BRANCH_QM=
    + export SET_QM_PART=
    + SET_QM_PART=
    + export USE_QM_COPR=packit/containers-qm-532
    + USE_QM_COPR=packit/containers-qm-532
    + RED='\033[91m'
    + GRN='\033[92m'
    + CLR='\033[0m'
    + ARGUMENT_LIST=("qm-setup-from-gh-url" "branch-qm" "set-qm-disk-part" "use-qm-copr")
    +++ printf help,%s:, qm-setup-from-gh-url branch-qm set-qm-disk-part use-qm-copr
    +++ basename ./set-ffi-env-e2e
    ++ getopt --longoptions help,qm-setup-from-gh-url:,help,branch-qm:,help,set-qm-disk-part:,help,use-qm-copr:, --name set-ffi-env-e2e --options '' -- none
    + opts=' -- '\''none'\'''
    + eval set '-- -- '\''none'\'''
    ++ set -- -- none
    + '[' 2 -gt 0 ']'
    + case "$1" in
    + break
    + info_message 'Starting setup'
    + '[' -z 'Starting setup' ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] Starting setup'
    + info_message ==============================
    + '[' -z ============================== ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] =============================='
    + '[' 0 -ne 0 ']'
    + stat /run/ostree-booted
    + echo
    + info_message 'Check if qm requires additional partition'
    + '[' -z 'Check if qm requires additional partition' ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] Check if qm requires additional partition'
    + info_message ==============================
    + '[' -z ============================== ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] =============================='
    + '[' -n '' ']'
    + echo
    + info_message 'Checking if QM already installed'
    + '[' -z 'Checking if QM already installed' ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] Checking if QM already installed'
    + info_message ==============================
    + '[' -z ============================== ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] =============================='
    ++ rpm -qa qm
    + QM_INST=qm-0.6.5-1.20240902150444282009.pr532.68.gf27cba2.el9.noarch
    + [[ -n qm-0.6.5-1.20240902150444282009.pr532.68.gf27cba2.el9.noarch ]]
    ./set-ffi-env-e2e: line 267: QM_STATUS: unbound variable
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
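The "unbound variable" abort means the script runs with nounset (set -u) in effect: when QM_STATUS is only assigned on some paths, any later read on another path kills the script. Below is a minimal sketch of the mechanism and the usual ${var:-default} guard; it is an illustration, not the project's code:

```shell
#!/usr/bin/env bash
unset QM_STATUS 2>/dev/null   # make sure the variable starts unset
set -u                        # nounset: reading an unset variable is fatal

PROBE="direct read ok"
# Probe in a subshell: under set -u, expanding the unset QM_STATUS
# aborts that subshell with "QM_STATUS: unbound variable".
(echo "$QM_STATUS") 2>/dev/null || PROBE="direct read aborts"

# The ${var:-default} expansion is safe on every path:
QM_STATUS_SAFE="${QM_STATUS:-unknown}"
echo "$PROBE, QM_STATUS_SAFE=$QM_STATUS_SAFE"
```

The project-side fix would be to ensure QM_STATUS is assigned before every reference (or expanded with a default as above).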


Yarboa commented 1 week ago

@nsednev How did you run tmt? Can you share it?

nsednev commented 1 week ago

I received this when I didn't wipe out the QM restart part from the code. It's taken from the testing-farm:centos-stream-9-x86_64:e2e-ffi checkup from https://github.com/containers/qm/pull/532. Now that I have finalized the code in the PR, I see these:

    WARN[0010] StopSignal SIGTERM failed to stop container ffi-qm in 10 seconds, resorting to SIGKILL
    Deleted: e794e15abb35fd558e3afa25fc8c751d7e3418a016b8b0fbee4ebdc4cf2e6a57
    Trying to pull quay.io/centos-sig-automotive/ffi-tools:latest...
    Getting image source signatures
    Copying blob sha256:80c27f0a59c1ae0fb0437fc08ff5721fe093c488d6f5c5059745b3e57991f775
    Copying blob sha256:ea58703461dace3304e355ec5c1ab4d72976ed546f2b896b8546c737ddc4c5b0
    Copying config sha256:e794e15abb35fd558e3afa25fc8c751d7e3418a016b8b0fbee4ebdc4cf2e6a57
    Writing manifest to image destination
    e794e15abb35fd558e3afa25fc8c751d7e3418a016b8b0fbee4ebdc4cf2e6a57
    Getting image source signatures
    Copying blob sha256:d4cf3585b76c558f542b352c16e3df670a7ac4c4d655a7d618171a1e07a4e399
    Copying blob sha256:c8fb351d6683cb7200fd6db901d7a33a67a1e4a52c7ed5b54135ab330bc24c90
    Copying config sha256:e794e15abb35fd558e3afa25fc8c751d7e3418a016b8b0fbee4ebdc4cf2e6a57
    Writing manifest to image destination
    Untagged: quay.io/centos-sig-automotive/ffi-tools:latest
    Deleted: e794e15abb35fd558e3afa25fc8c751d7e3418a016b8b0fbee4ebdc4cf2e6a57
    Getting image source signatures
    Writing manifest to image destination
    Error: OCI runtime error: crun: the requested cgroup controller pids is not available
    [ INFO ] PASS: qm.container oom_score_adj value == 500
    ./test.sh: line 39: [: cat: /proc/0/oom_score_adj: No such file or directory: integer expression expected
    [ INFO ] FAIL: qm containers oom_score_adj != 750. Current value is cat: /proc/0/oom_score_adj: No such file or directory
    Shared connection to 3.15.160.149 closed.

It's available here: https://artifacts.dev.testing-farm.io/38432c5c-1e7d-470e-a737-02109ab30c6e/

dougsland commented 1 week ago

Should be fixed soon via: https://github.com/containers/qm/pull/531

dougsland commented 1 week ago

(quoting the testing-farm log and artifacts link from the previous comment)

solved.

nsednev commented 1 week ago

I still see these while running against the testing-farm checkup tool:

    WARN[0010] StopSignal SIGTERM failed to stop container ffi-qm in 10 seconds, resorting to SIGKILL
    Deleted: 2477d71ac8f1ce834221178bbe0a3526ef93dbc7d89518d6a6ce757cb8e2ca39
    Trying to pull quay.io/centos-sig-automotive/ffi-tools:latest...
    Getting image source signatures
    Copying blob sha256:364b7f4a78417c35ed3d5f4785cbe2b34f4f1f552a7e01655c1c9c5f7b6e5f61
    Copying blob sha256:4ca947be8ae2828258086eb666acaac2516cdbca60a8107cb6badb276a65e981
    Copying config sha256:2477d71ac8f1ce834221178bbe0a3526ef93dbc7d89518d6a6ce757cb8e2ca39
    Writing manifest to image destination
    2477d71ac8f1ce834221178bbe0a3526ef93dbc7d89518d6a6ce757cb8e2ca39
    Getting image source signatures
    Copying blob sha256:7555554ffea12f2e51f0dcf41e89523f58f790697442872907f9c2b6955e9ea2
    Copying blob sha256:313b79904146885ddf6ce5104fc71cc7e081bfec070a48e3618fac00b6671127
    Copying config sha256:2477d71ac8f1ce834221178bbe0a3526ef93dbc7d89518d6a6ce757cb8e2ca39
    Writing manifest to image destination
    Untagged: quay.io/centos-sig-automotive/ffi-tools:latest
    Deleted: 2477d71ac8f1ce834221178bbe0a3526ef93dbc7d89518d6a6ce757cb8e2ca39
    Getting image source signatures
    Writing manifest to image destination
    Error: OCI runtime error: crun: the requested cgroup controller pids is not available
    Retrieved QM_PID: 26154
    Retrieved QM_FFI_PID: 0
    Retrieved QM_OOM_SCORE_ADJ: '500'
    Retrieved QM_FFI_OOM_SCORE_ADJ: '/bin/bash: line 1: /proc/0/oom_score_adj: No such file or directory'
    PASS: qm.container oom_score_adj value == 500
    ./test.sh: line 91: [[: /bin/bash: line 1: /proc/0/oom_score_adj: No such file or directory: syntax error: operand expected (error token is "/bin/bash: line 1: /proc/0/oom_score_adj: No such file or directory ")
    FAIL: qm containers oom_score_adj != 750. Current value is '/bin/bash: line 1: /proc/0/oom_score_adj: No such file or directory'
    Shared connection to 18.117.235.104 closed.

nsednev commented 1 week ago

I tested the code against a local VM running CentOS Stream release 9:

    Linux ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com 5.14.0-503.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Aug 22 17:03:23 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

The image was taken from CentOS-Stream-GenericCloud-9-latest.x86_64.qcow2.

I checked that podman was not installed on the VM before testing, then ran tmt like so:

    tmt -c distro=centos-stream-9 run -a provision --how connect -u root -p ${PASSWORD} -P ${PORT} -g localhost plans -n /plans/e2e/tier-0

The result was:

    stdout (8/8 lines)

        [ INFO  ] Starting setup
        [ INFO  ] ==============================

        [ INFO  ] Check if qm requires additional partition
        [ INFO  ] ==============================

        [ INFO  ] Checking if QM already installed
        [ INFO  ] ==============================
    stderr (100/103 lines)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ++ cd -- .
    ++ pwd
    + SCRIPT_DIR=/var/tmp/tmt/run-001/plans/e2e/tier-0/tree/tests/e2e
    + source /var/tmp/tmt/run-001/plans/e2e/tier-0/tree/tests/e2e/lib/utils
    + source /var/tmp/tmt/run-001/plans/e2e/tier-0/tree/tests/e2e/lib/container
    + source /var/tmp/tmt/run-001/plans/e2e/tier-0/tree/tests/e2e/lib/systemd
    + source /var/tmp/tmt/run-001/plans/e2e/tier-0/tree/tests/e2e/lib/tests
    ++ NODES_FOR_TESTING_ARR='control qm-node1'
    ++ readarray -d ' ' -t NODES_FOR_TESTING
    ++ CONTROL_CONTAINER_NAME=control
    ++ WAIT_BLUECHI_AGENT_CONNECT=5
    + source /var/tmp/tmt/run-001/plans/e2e/tier-0/tree/tests/e2e/lib/diskutils
    + export CONFIG_NODE_AGENT_PATH=/etc/bluechi/agent.conf.d/agent.conf
    + CONFIG_NODE_AGENT_PATH=/etc/bluechi/agent.conf.d/agent.conf
    + export REGISTRY_UBI8_MINIMAL=registry.access.redhat.com/ubi8/ubi-minimal
    + REGISTRY_UBI8_MINIMAL=registry.access.redhat.com/ubi8/ubi-minimal
    + export WAIT_BLUECHI_SERVER_BE_READY_IN_SEC=5
    + WAIT_BLUECHI_SERVER_BE_READY_IN_SEC=5
    + export CONTROL_CONTAINER_NAME=control
    + CONTROL_CONTAINER_NAME=control
    + NODES_FOR_TESTING=('control' 'node1')
    + export NODES_FOR_TESTING
    + export IP_CONTROL_MACHINE=
    + IP_CONTROL_MACHINE=
    + export CONTAINER_CAP_ADD=
    + CONTAINER_CAP_ADD=
    + export ARCH=
    + ARCH=
    + export DISK=
    + DISK=
    + export PART_ID=
    + PART_ID=
    + export QC_SOC=SA8775P
    + QC_SOC=SA8775P
    + export SOC_DISTRO_FILE=/sys/devices/soc0/machine
    + SOC_DISTRO_FILE=/sys/devices/soc0/machine
    + export QC_SOC_DISK=sde
    + QC_SOC_DISK=sde
    + export BUILD_BLUECHI_FROM_GH_URL=
    + BUILD_BLUECHI_FROM_GH_URL=
    + export QM_GH_URL=
    + QM_GH_URL=
    + export BRANCH_QM=
    + BRANCH_QM=
    + export SET_QM_PART=
    + SET_QM_PART=
    + export USE_QM_COPR=rhcontainerbot/qm
    + USE_QM_COPR=rhcontainerbot/qm
    + RED='\033[91m'
    + GRN='\033[92m'
    + CLR='\033[0m'
    + ARGUMENT_LIST=("qm-setup-from-gh-url" "branch-qm" "set-qm-disk-part" "use-qm-copr")
    +++ printf help,%s:, qm-setup-from-gh-url branch-qm set-qm-disk-part use-qm-copr
    +++ basename ./set-ffi-env-e2e
    ++ getopt --longoptions help,qm-setup-from-gh-url:,help,branch-qm:,help,set-qm-disk-part:,help,use-qm-copr:, --name set-ffi-env-e2e --options '' -- none
    + opts=' -- '\''none'\'''
    + eval set '-- -- '\''none'\'''
    ++ set -- -- none
    + '[' 2 -gt 0 ']'
    + case "$1" in
    + break
    + info_message 'Starting setup'
    + '[' -z 'Starting setup' ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] Starting setup'
    + info_message ==============================
    + '[' -z ============================== ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] =============================='
    + '[' 0 -ne 0 ']'
    + stat /run/ostree-booted
    + echo
    + info_message 'Check if qm requires additional partition'
    + '[' -z 'Check if qm requires additional partition' ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] Check if qm requires additional partition'
    + info_message ==============================
    + '[' -z ============================== ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] =============================='
    + '[' -n '' ']'
    + echo
    + info_message 'Checking if QM already installed'
    + '[' -z 'Checking if QM already installed' ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] Checking if QM already installed'
    + info_message ==============================
    + '[' -z ============================== ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] =============================='
    ++ rpm -qa qm
    + QM_INST=
    ++ systemctl is-enabled qm
    + QM_STATUS='Failed to get unit file state for qm.service: No such file or directory'
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

On the VM I checked that it installed podman-5.2.2-1.el9.x86_64 and had not run QM at all:

    [root@ibm-p8-kvm-03-guest-02 ~]# podman ps -a
    CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
    [root@ibm-p8-kvm-03-guest-02 ~]#

nsednev commented 1 week ago

Tested on distro CentOS Stream 9 (image: CentOS-Stream-GenericCloud-9-latest.x86_64.qcow2):

    Linux ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com 5.14.0-503.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Aug 22 17:03:23 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

No podman or qm installed on the clean, fresh OS:

    [root@ibm-p8-kvm-03-guest-02 ~]# rpm -qa podman
    [root@ibm-p8-kvm-03-guest-02 ~]# rpm -qa qm
    [root@ibm-p8-kvm-03-guest-02 ~]#

Running tmt tier-0 against VM:

    multihost name: default-0
    arch: x86_64
    distro: CentOS Stream 9

    summary: 1 guest provisioned
prepare
    queued push task #1: push to default-0

    push task #1: push to default-0

    queued prepare task #1: Install podman on default-0
    queued prepare task #2: Set QM environment on default-0
    queued prepare task #3: requires on default-0

    prepare task #1: Install podman on default-0
    how: install
    name: Install podman
    package: podman

    prepare task #2: Set QM environment on default-0
    how: shell
    name: Set QM environment
    overview: 1 script found

    prepare task #3: requires on default-0
    how: install
    summary: Install required packages
    name: requires
    where: default-0
    package: /usr/bin/flock

    queued pull task #1: pull from default-0

    pull task #1: pull from default-0

    summary: 3 preparations applied
execute
    queued execute task #1: default-0 on default-0

    execute task #1: default-0 on default-0
    how: tmt
    progress:                                                           

    summary: 6 tests executed
report
    how: junit
    output: /var/tmp/tmt/run-010/plans/e2e/tier-0/report/default-0/junit.xml
    summary: 6 tests passed
finish

    summary: 0 tasks completed

total: 6 tests passed

On the VM I see:

    [root@ibm-p8-kvm-03-guest-02 ~]# rpm -qa podman
    podman-5.2.2-1.el9.x86_64
    [root@ibm-p8-kvm-03-guest-02 ~]# rpm -qa qm
    qm-0.6.5-1.20240903182315916484.main.77.g64fc09a.el9.noarch

    [root@ibm-p8-kvm-03-guest-02 ~]# podman ps
    CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
    717d6c2cfa7c  /sbin/init  6 minutes ago  Up 6 minutes  qm