southalc / podman

Puppet module for podman
Apache License 2.0
13 stars 30 forks

Bug with Podman Container removal? #14

Closed Francommit closed 3 years ago

Francommit commented 3 years ago

Hey mate - me again,

I've been testing and testing and testing different combinations with this module. I did get it fully functional yesterday, and I cannot, for the life of me, figure out what I did (a bunch of uncommitted things).

I've tried to clean everything up, and now I'm having problems where the code runs and tries to remove the systemd service. If I take a clean Red Hat 8 box and spin it up with the following hiera config:

---
podman::containers:
  primary-solace:
    image: 'solace/solace-pubsub-standard'
    flags:
      publish:
        - '8080:8080'
        - '50000:50000'
        - '8080:8080'
        - '55555:55555'
        - '55443:55443'
        - '55556:55556'
        - '55003:55003'
        - '2222:2222'
        - '8300:8300'
        - '8301:8301'
        - '8302:8302'
        - '8741:8741'
        - '8303:8303'
      env:
       - 'username_admin_globalaccesslevel="admin"'
       - 'username_admin_password="admin"'
      shm-size:
       - '1g'
    service_flags:
      timeout: '60'

It's spitting out:

Notice: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[verify_container_flags_primary-solace]/returns: executed successfully
Notice: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[verify_container_image_primary-solace]/returns: executed successfully
Notice: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[podman_remove_container_primary-solace]/returns: Failed to stop podman-primary-solace.service: Unit podman-primary-solace.service not loaded.
Notice: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[podman_remove_container_primary-solace]/returns: Error: no container with name or ID primary-solace found: no such container
Notice: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[podman_remove_container_primary-solace]/returns: Error: failed to evict container: "": failed to find container "primary-solace": no container with name or ID primary-solace found: no such container
Error: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[podman_remove_container_primary-solace]: Failed to call refresh: 'systemctl  stop podman-primary-solace || podman container stop --time 60 primary-solace
podman container rm --force primary-solace
' returned 1 instead of one of [0]
Error: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[podman_remove_container_primary-solace]: 'systemctl  stop podman-primary-solace || podman container stop --time 60 primary-solace
podman container rm --force primary-solace
' returned 1 instead of one of [0]
Notice: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[podman_remove_image_primary-solace]: Dependency Exec[podman_remove_container_primary-solace] has failures: true
Warning: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[podman_remove_image_primary-solace]: Skipping because of failed dependencies
Warning: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[podman_create_primary-solace]: Skipping because of failed dependencies
Warning: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[podman_generate_service_primary-solace]: Skipping because of failed dependencies
Warning: /Stage[main]/Podman/Podman::Container[primary-solace]/Service[podman-primary-solace]: Skipping because of failed dependencies

If I butcher the container.pp file, I can get it to create the service, but then it obviously won't re-create it:

# @summary manage podman container and register as a systemd service
#
# @param image
#   Container registry source of the image being deployed.  Required when
#   `ensure` is `present` but optional when `ensure` is set to `absent`.
#
# @param user
#   Optional user for running rootless containers.  For rootless containers,
#   the user must also be defined as a puppet resource that includes at least
#   'uid', 'gid', and 'home' attributes.
#
# @param flags
#   All flags for the 'podman container create' command are supported via the
#   'flags' hash parameter, using only the long form of the flag name.  The
#   container name will be set as the resource name (namevar) unless the 'name'
#   flag is included in the flags hash.  If the flags for a container resource
#   are modified the container will be destroyed and re-deployed during the
#   next puppet run.  This is achieved by storing the complete set of flags as
#   a base64 encoded string in a container label named `puppet_resource_flags`
#   so it can be compared with the assigned resource state.
#   Flags that can be used more than once should be expressed as an array.  For
#   flags which take no arguments, set the hash value to be undef. In the
#   YAML representation you can use `~` or `null` as the value.
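#   For example (illustrative only), a repeatable flag and a flag that takes
#   no argument could be declared as:
#     flags => {
#       publish => ['8080:8080', '50000:50000'],
#       tty     => undef,
#     }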
#
# @param service_flags
#   When a container is created, a systemd unit file for the container service
#   is generated using the 'podman generate systemd' command.  All flags for the
#   command are supported using the 'service_flags' hash parameter, again using
#   only the long form of the flag names.
#
# @param command
#   Optional command to be used as the container entry point.
#
# @param ensure
#   Valid values are 'present' or 'absent'
#
# @param enable
#   Status of the automatically generated systemd service for the container.
#   Valid values are 'running' or 'stopped'.
#
# @param update
#   When `true`, the container will be redeployed when a new container image is
#   detected in the container registry.  This is done by comparing the digest
#   value of the running container image with the digest of the registry image.
#   When `false`, the container will only be redeployed when the declared state
#   of the puppet resource is changed.
#
# @example
#   podman::container { 'jenkins':
#     image         => 'docker.io/jenkins/jenkins',
#     user          => 'jenkins',
#     flags         => {
#                      publish => [
#                                 '8080:8080',
#                                 '50000:50000',
#                                 ],
#                      volume  => 'jenkins:/var/jenkins_home',
#                      },
#     service_flags => { timeout => '60' },
#   }
#
define podman::container (
  String $image       = '',
  String $user        = '',
  Hash $flags         = {},
  Hash $service_flags = {},
  String $command     = '',
  String $ensure      = 'present',
  Boolean $enable     = true,
  Boolean $update     = true,
){
  #require podman::install

  # Add a label of base64 encoded flags defined for the container resource
  # This will be used to determine when the resource state is changed
  $flags_base64 = base64('encode', inline_template('<%= @flags.to_s %>')).chomp()

  # Add the default name and a custom label using the base64 encoded flags
  if has_key($flags, 'label') {
    $label = [] + $flags['label'] + "puppet_resource_flags=${flags_base64}"
    $no_label = $flags.delete('label')
  } else {
    $label = "puppet_resource_flags=${flags_base64}"
    $no_label = $flags
  }

  # If a container name is not set, use the Puppet resource name
  $merged_flags = merge({ name => $title, label => $label}, $no_label )
  $container_name = $merged_flags['name']

  # A rootless container will run as the defined user
  if $user != '' {
    ensure_resource('podman::rootless', $user, {})
    $systemctl = 'systemctl --user '

    # The handle is used to ensure resources have unique names
    $handle = "${user}-${container_name}"

    # Set default execution environment for the rootless user
    $exec_defaults = {
      path        => '/sbin:/usr/sbin:/bin:/usr/bin',
      environment => [
        "HOME=${User[$user]['home']}",
        "XDG_RUNTIME_DIR=/run/user/${User[$user]['uid']}",
      ],
      cwd         => User[$user]['home'],
      user        => $user,
    }
    $requires = [
      Podman::Rootless[$user],
      Service['systemd-logind'],
    ]
    $service_unit_file = "${User[$user]['home']}/.config/systemd/user/podman-${container_name}.service"

    # Reload systemd when service files are updated
    ensure_resource('Exec', "podman_systemd_${user}_reload", {
        path        => '/sbin:/usr/sbin:/bin:/usr/bin',
        command     => "${systemctl} daemon-reload",
        refreshonly => true,
        environment => [
          "HOME=${User[$user]['home']}",
          "XDG_RUNTIME_DIR=/run/user/${User[$user]['uid']}",
        ],
        cwd         => User[$user]['home'],
        provider    => 'shell',
        user        => $user,
      }
    )
    $_podman_systemd_reload = Exec["podman_systemd_${user}_reload"]
  } else {
    $systemctl = 'systemctl '
    $handle = $container_name
    $service_unit_file = "/etc/systemd/system/podman-${container_name}.service"
    $exec_defaults = {
      path        => '/sbin:/usr/sbin:/bin:/usr/bin:/usr/local/bin',
    }

    # Reload systemd when service files are updated
    ensure_resource('Exec', 'podman_systemd_reload', {
        path        => '/sbin:/usr/sbin:/bin:/usr/bin',
        command     => "${systemctl} daemon-reload",
        refreshonly => true,
      }
    )
    $requires = []
    $_podman_systemd_reload = Exec['podman_systemd_reload']
  }

  case $ensure {
    'present': {
      if $image == '' { fail('A source image is required') }

      # Detect changes to the defined podman flags and re-deploy if needed
      Exec { "verify_container_flags_${handle}":
        command  => 'true',
        provider => 'shell',
        unless   => @("END"/$L),
                   if podman container exists ${container_name}
                     then
                     saved_resource_flags="\$(podman container inspect ${container_name} \
                       --format '{{.Config.Labels.puppet_resource_flags}}' | tr -d '\n')"
                     current_resource_flags="\$(echo '${flags_base64}' | tr -d '\n')"
                     test "\${saved_resource_flags}" = "\${current_resource_flags}"
                   fi
                   |END
        # notify   => Exec["podman_remove_container_${handle}"],
        require  => $requires,
        *        => $exec_defaults,
      }

      # Re-deploy when $update is true and the container image has been updated
      if $update {
        Exec { "verify_container_image_${handle}":
          command  => 'true',
          provider => 'shell',
          unless   => @("END"/$L),
            if podman container exists ${container_name}
              then
              image_name=\$(podman container inspect ${container_name} --format '{{.ImageName}}')
              running_digest=\$(podman image inspect \${image_name} --format '{{.Digest}}')
              latest_digest=\$(skopeo inspect docker://\${image_name} | \
                /opt/puppetlabs/puppet/bin/ruby -rjson -e 'puts (JSON.parse(STDIN.read))["Digest"]')
              [[ $? -ne 0 ]] && latest_digest=\$(skopeo inspect --no-creds docker://\${image_name} | \
                /opt/puppetlabs/puppet/bin/ruby -rjson -e 'puts (JSON.parse(STDIN.read))["Digest"]')
              test -z "\${latest_digest}" && exit 0     # Do not update if unable to get latest digest
              test "\${running_digest}" = "\${latest_digest}"
            fi
            |END
          # notify   => [
          #   Exec["podman_remove_image_${handle}"],
          #   Exec["podman_remove_container_${handle}"],
          # ],
          require  => $requires,
          *        => $exec_defaults,
        }
      } else {
        # Re-deploy when $update is false but the resource image has changed
        Exec { "verify_container_image_${handle}":
          command  => 'true',
          provider => 'shell',
          unless   => @("END"/$L),
            if podman container exists ${container_name}
              then
              running=\$(podman container inspect ${container_name} --format '{{.ImageName}}' | awk -F/ '{print \$NF}')
              declared=\$(echo "${image}" | awk -F/ '{print \$NF}')
              available=\$(skopeo inspect docker://${image} | \
                /opt/puppetlabs/puppet/bin/ruby -rjson -e 'puts (JSON.parse(STDIN.read))["Name"]')
              test -z "\${available}" && exit 0     # Do not update if unable to get the new image
              test "\${running}" = "\${declared}"
            fi
            |END
          notify   => [
            Exec["podman_remove_image_${handle}"],
            Exec["podman_remove_container_${handle}"],
          ],
          require  => $requires,
          *        => $exec_defaults,
        }
      }

      # Exec { "podman_remove_image_${handle}":
      #   # Try to remove the image, but exit with success regardless
      #   provider    => 'shell',
      #   command     => "podman rmi ${image} || exit 0",
      #   refreshonly => true,
      #   notify      => Exec["podman_create_${handle}"],
      #   require     => [ $requires, Exec["podman_remove_container_${handle}"]],
      #   *           => $exec_defaults,
      # }

      # Exec { "podman_remove_container_${handle}":
      #   # Try nicely to stop the container, but then insist
      #   provider    => 'shell',
      #   command     => @("END"/L),
      #                  ${systemctl} stop podman-${container_name} || podman container stop --time 60 ${container_name}
      #                  podman container rm --force ${container_name}
      #                  |END
      #   refreshonly => true,
      #   notify      => Exec["podman_create_${handle}"],
      #   require     => $requires,
      #   *           => $exec_defaults,
      # }

      # Convert $merged_flags hash to usable command arguments
      $_flags = $merged_flags.reduce('') |$mem, $flag| {
        if $flag[1] =~ String {
          "${mem} --${flag[0]} '${flag[1]}'"
        } elsif $flag[1] =~ Undef {
          "${mem} --${flag[0]}"
        } else {
          $dup = $flag[1].reduce('') |$mem2, $value| {
            "${mem2} --${flag[0]} '${value}'"
          }
          "${mem} ${dup}"
        }
      }
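      # For example (illustrative values only), a hash like
      #   { name => 'jenkins', publish => ['8080:8080', '50000:50000'] }
      # renders as " --name 'jenkins'  --publish '8080:8080' --publish '50000:50000'"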

      # Convert $service_flags hash to command arguments
      $_service_flags = $service_flags.reduce('') |$mem, $flag| {
        "${mem} --${flag[0]} '${flag[1]}'"
      }

      Exec { "podman_create_${handle}":
        command => "podman container create ${_flags} ${image} ${command}",
        unless  => "podman container exists ${container_name}",
        notify  => Exec["podman_generate_service_${handle}"],
        require => $requires,
        *       => $exec_defaults,
      }

      if $user != '' {
        Exec { "podman_generate_service_${handle}":
          command     => "podman generate systemd ${_service_flags} ${container_name} > ${service_unit_file}",
          refreshonly => true,
          notify      => Exec["service_podman_${handle}"],
          require     => $requires,
          *           => $exec_defaults,
        }

        # Work-around for managing user systemd services
        if $enable { $action = 'start'; $startup = 'enable' }
        else { $action = 'stop'; $startup = 'disable' }
        Exec { "service_podman_${handle}":
          command => @("END"/L),
                     ${systemctl} ${startup} podman-${container_name}.service
                     ${systemctl} ${action} podman-${container_name}.service
                     |END
          unless  => @("END"/L),
                     ${systemctl} is-active podman-${container_name}.service && \
                       ${systemctl} is-enabled podman-${container_name}.service
                     |END
          require => $requires,
          *       => $exec_defaults,
        }
      }
      else {
        Exec { "podman_generate_service_${handle}":
          path        => '/sbin:/usr/sbin:/bin:/usr/bin',
          command     => "podman generate systemd ${_service_flags} ${container_name} > ${service_unit_file}",
          refreshonly => true,
          notify      => Service["podman-${handle}"],
        }

        # Configure the container service per parameters
        if $enable { $state = 'running'; $startup = 'true' }
        else { $state = 'stopped'; $startup = 'false' }
        Service { "podman-${handle}":
          ensure => $state,
          enable => $startup,
        }
      }
    }

    'absent': {
      Exec { "service_podman_${handle}":
        command => @("END"/L),
                   ${systemctl} stop podman-${container_name}
                   ${systemctl} disable podman-${container_name}
                   |END
        onlyif  => @("END"/$L),
                   test "\$(${systemctl} is-active podman-${container_name} 2>&1)" = "active" -o \
                     "\$(${systemctl} is-enabled podman-${container_name} 2>&1)" = "enabled"
                   |END
        notify  => Exec["podman_remove_container_${handle}"],
        require => $requires,
        *       => $exec_defaults,
      }

      Exec { "podman_remove_container_${handle}":
        # Try nicely to stop the container, but then insist
        command => "podman container rm --force ${container_name}",
        unless  => "podman container exists ${container_name}; test $? -eq 1",
        notify  => Exec["podman_remove_image_${handle}"],
        require => $requires,
        *       => $exec_defaults,
      }

      Exec { "podman_remove_image_${handle}":
        # Try to remove the image, but exit with success regardless
        provider    => 'shell',
        command     => "podman rmi ${image} || exit 0",
        refreshonly => true,
        require     => [ $requires, Exec["podman_remove_container_${handle}"]],
        *           => $exec_defaults,
      }

      File { $service_unit_file:
        ensure  => absent,
        require => [
          $requires,
          Exec["service_podman_${handle}"],
        ],
        notify  => $_podman_systemd_reload,
      }
    }

    default: {
      fail('"ensure" must be "present" or "absent"')
    }
  }
}

I'm going to continue to play with it, but surely this is something you've come across? It's doing my head in. Thanks for your hard work!

Francommit commented 3 years ago

OK, it's the two verify statements; I think they're blowing up for some reason. Without the notifies it partially runs:

 if $image == '' { fail('A source image is required') }

      # Detect changes to the defined podman flags and re-deploy if needed
      Exec { "verify_container_flags_${handle}":
        command  => 'true',
        provider => 'shell',
        unless   => @("END"/$L),
                   if podman container exists ${container_name}
                     then
                     saved_resource_flags="\$(podman container inspect ${container_name} \
                       --format '{{.Config.Labels.puppet_resource_flags}}' | tr -d '\n')"
                     current_resource_flags="\$(echo '${flags_base64}' | tr -d '\n')"
                     test "\${saved_resource_flags}" = "\${current_resource_flags}"
                   fi
                   |END
        # notify   => Exec["podman_remove_container_${handle}"],
        require  => $requires,
        *        => $exec_defaults,
      }

      # Re-deploy when $update is true and the container image has been updated
      if $update {
        Exec { "verify_container_image_${handle}":
          command  => 'true',
          provider => 'shell',
          unless   => @("END"/$L),
            if podman container exists ${container_name}
              then
              image_name=\$(podman container inspect ${container_name} --format '{{.ImageName}}')
              running_digest=\$(podman image inspect \${image_name} --format '{{.Digest}}')
              latest_digest=\$(skopeo inspect docker://\${image_name} | \
                /opt/puppetlabs/puppet/bin/ruby -rjson -e 'puts (JSON.parse(STDIN.read))["Digest"]')
              [[ $? -ne 0 ]] && latest_digest=\$(skopeo inspect --no-creds docker://\${image_name} | \
                /opt/puppetlabs/puppet/bin/ruby -rjson -e 'puts (JSON.parse(STDIN.read))["Digest"]')
              test -z "\${latest_digest}" && exit 0     # Do not update if unable to get latest digest
              test "\${running_digest}" = "\${latest_digest}"
            fi
            |END
          # notify   => [
          #   Exec["podman_remove_image_${handle}"],
          #   Exec["podman_remove_container_${handle}"],
          # ],
          require  => $requires,
          *        => $exec_defaults,
        }
      }

Francommit commented 3 years ago

OK, remove image works:

Exec["podman_remove_image_${handle}"],

I think it's

Exec["podman_remove_container_${handle}"],

That's blowing up.

Francommit commented 3 years ago

Going through the debug logs I see the following:

Debug: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[verify_container_flags_primary-solace]/unless: /bin/sh: -c: line 24: syntax error: unexpected end of file

Which is from:

Debug: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[verify_container_flags_-primary-solace]/unless: /bin/sh: -c: line 24: syntax error: unexpected end of file
Debug: Exec[verify_container_flags_primary-solace](provider=shell): Executing '["/bin/sh", "-c", "true"]'
Debug: Executing: '/bin/sh -c true'

I'm just not super sure where it's coming from.
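
For context, the "unless" for that exec should expand to roughly this shell (a sketch with primary-solace substituted in and the base64 flag string shown as a placeholder, not the exact rendered command):

if podman container exists primary-solace
  then
  saved_resource_flags="$(podman container inspect primary-solace \
    --format '{{.Config.Labels.puppet_resource_flags}}' | tr -d '\n')"
  current_resource_flags="$(echo '<base64-encoded flags>' | tr -d '\n')"
  test "${saved_resource_flags}" = "${current_resource_flags}"
fi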

Francommit commented 3 years ago

@southalc, if I use 0.24 I have no problems at all, so the bug's been introduced between then and now.

southalc commented 3 years ago

The error looks like it's failing on the "unless" execution statement from here: https://github.com/southalc/podman/blob/ad1519e078bfe5064d906b3f7e18508cd889944c/manifests/container.pp#L157

What it's doing is converting the container flags from the defined Puppet resource to a base64 encoded string, then checking that string against the running container's label named "puppet_resource_flags" that was set when the container was created. I tested with a simple change of the container resource flags and observed my test container get re-deployed successfully.
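
If it helps, the same comparison can be reproduced by hand. A minimal sketch, assuming the container is named primary-solace and GNU base64 is available:

podman container inspect primary-solace \
  --format '{{.Config.Labels.puppet_resource_flags}}' | base64 -d
# should print the flag hash Puppet computed from the resource and stored in
# the label at create time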

At this point I spun up a new RHEL8 VM to test the configuration you submitted, but I am unable to reproduce the issue, as my container deploys correctly and re-deploys if I change the resource flags. The "unexpected end of file" error makes me wonder whether the heredoc is being parsed correctly. Unfortunately, even debug output is not returning the actual command being executed by the "unless". What version of the Puppet agent are you using?

BTW, you should be able to clean up a container deployment with something like this from hiera:

podman::containers:
  primary-solace:
    ensure: absent

Francommit commented 3 years ago

Hey, thanks for the reply. I've been doing other work, but I revisited this as I found I needed to run podman as a user and not as root. I've gotten the process a little further, but it's still falling over at a different spot now when I'm specifying a user.

So, the following logs are from a fresh Red Hat Puppet run.

Notice: /Stage[main]/Types/Types::Type[group]/Group[solace]/ensure: created
Notice: /Stage[main]/Types/Types::Type[user]/User[solace]/ensure: created
Notice: /Stage[main]/Role::Solace_monitor/File[solace_home_directory]/ensure: created
Notice: /Stage[main]/Podman::Install/Concat[/etc/subuid]/File[/etc/subuid]/content: content changed '{md5}2075cf9f804d83c3ad908c95202455d7' to '{md5}6d789a6665985785c5e045a2ad91ed59'
Notice: /Stage[main]/Podman::Install/Concat[/etc/subgid]/File[/etc/subgid]/content: content changed '{md5}2075cf9f804d83c3ad908c95202455d7' to '{md5}6d789a6665985785c5e045a2ad91ed59'
Notice: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Podman::Rootless[solace]/Exec[loginctl_linger_solace]/returns: executed successfully
Notice: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Podman::Rootless[solace]/File[/home/solace/.config]/ensure: created
Notice: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Podman::Rootless[solace]/File[/home/solace/.config/systemd]/ensure: created
Notice: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Podman::Rootless[solace]/File[/home/solace/.config/systemd/user]/ensure: created
Notice: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[verify_container_flags_solace-st-monitor-solace]/returns: executed successfully
Notice: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[verify_container_image_solace-st-monitor-solace]/returns: executed successfully
Notice: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[podman_remove_container_solace-st-monitor-solace]/returns: Failed to stop podman-st-monitor-solace.service: Unit podman-st-monitor-solace.service not loaded.
Notice: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[podman_remove_container_solace-st-monitor-solace]/returns: Error: no container with name or ID st-monitor-solace found: no such container
Notice: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[podman_remove_container_solace-st-monitor-solace]/returns: Error: failed to evict container: "": failed to find container "st-monitor-solace": no container with name or ID st-monitor-solace found: no such container
Error: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[podman_remove_container_solace-st-monitor-solace]: Failed to call refresh: 'systemctl --user  stop podman-st-monitor-solace || podman container stop --time 60 st-monitor-solace
podman container rm --force st-monitor-solace
' returned 1 instead of one of [0]
Error: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[podman_remove_container_solace-st-monitor-solace]: 'systemctl --user  stop podman-st-monitor-solace || podman container stop --time 60 st-monitor-solace
podman container rm --force st-monitor-solace
' returned 1 instead of one of [0]
Notice: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[podman_remove_image_solace-st-monitor-solace]: Dependency Exec[podman_remove_container_solace-st-monitor-solace] has failures: true
Warning: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[podman_remove_image_solace-st-monitor-solace]: Skipping because of failed dependencies
Warning: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[podman_create_solace-st-monitor-solace]: Skipping because of failed dependencies
Warning: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[podman_generate_service_solace-st-monitor-solace]: Skipping because of failed dependencies
Warning: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[service_podman_solace-st-monitor-solace]: Skipping because of failed dependencies
Notice: Applied catalog in 289.99 seconds

My config is as follows:

I've got this in a profile, as I couldn't get the types module to create the home directory:

   file { 'solace_home_directory':
    ensure  => directory,
    path    => '/home/solace',
    owner   => 'solace',
    group   => 'solace',
    mode    => '0644',
    purge   => false
  }

  include types
  include podman

For the types module, I assume you're using your own, which is listed on the Forge. For my YAML, I've set up the following:

---
types::user:
  solace:
    ensure: present
    forcelocal: true
    uid:  222001
    gid:  222001
    password: 'WIK@LH$#I#$#IUH'
    home: /home/solace

types::group:
  solace:
    ensure: present
    forcelocal: true
    gid:  222001

podman::manage_subuid: true
podman::subid:
  '222001':
    subuid: 12300000
    count: 65535

podman::containers:
  solace-host:
    user: solace
    image: 'solace/solace-pubsub-standard'
    flags:
      publish:
        - '8080:8080'
        - '50000:50000'
      env:
       - 'username_admin_globalaccesslevel="admin"'
       - 'username_admin_password="admin"'
      shm-size:
       - '1g'
    service_flags:
      timeout: '960'

I cannot get it to work at all on the latest version; it doesn't even generate the files in /home/user/.config/systemd/user/.

With the previous version of the module (https://forge.puppet.com/modules/southalc/podman/changelog#release-023) I can get further: I can see it's downloading the Docker image (it's 1 GB, so it takes a while, and I can see it in the user's podman image store).

This is where I get to when I use the previous version with 0.24:

Notice: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[podman_create_solace-st-monitor-solace]/returns: executed successfully
Notice: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[podman_generate_service_solace-st-monitor-solace]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[service_podman_solace-st-monitor-solace]/returns: Created symlink /home/solace/.config/systemd/user/multi-user.target.wants/podman-st-monitor-solace.service → /home/solace/.config/systemd/user/podman-st-monitor-solace.service.
Notice: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[service_podman_solace-st-monitor-solace]/returns: Created symlink /home/solace/.config/systemd/user/default.target.wants/podman-st-monitor-solace.service → /home/solace/.config/systemd/user/podman-st-monitor-solace.service.
Notice: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[service_podman_solace-st-monitor-solace]/returns: Job for podman-st-monitor-solace.service failed because the control process exited with error code.
Notice: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[service_podman_solace-st-monitor-solace]/returns: See "systemctl --user status podman-st-monitor-solace.service" and "journalctl --user -xe" for details.
Error: 'systemctl --user  enable podman-st-monitor-solace.service
systemctl --user  start podman-st-monitor-solace.service
' returned 1 instead of one of [0]
Error: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[service_podman_solace-st-monitor-solace]/returns: change from 'notrun' to ['0'] failed: 'systemctl --user  enable podman-st-monitor-solace.service
systemctl --user  start podman-st-monitor-solace.service
' returned 1 instead of one of [0]
Notice: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[service_podman_solace-st-monitor-solace]/returns: Job for podman-st-monitor-solace.service failed because the control process exited with error code.
Notice: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[service_podman_solace-st-monitor-solace]/returns: See "systemctl --user status podman-st-monitor-solace.service" and "journalctl --user -xe" for details.
Error: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[service_podman_solace-st-monitor-solace]: Failed to call refresh: 'systemctl --user  enable podman-st-monitor-solace.service
systemctl --user  start podman-st-monitor-solace.service
' returned 1 instead of one of [0]
Error: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[service_podman_solace-st-monitor-solace]: 'systemctl --user  enable podman-st-monitor-solace.service
systemctl --user  start podman-st-monitor-solace.service
' returned 1 instead of one of [0]
Notice: Applied catalog in 5.59 seconds

I just don't know what's causing this:

Notice: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[service_podman_solace-st-monitor-solace]/returns: Job for podman-st-monitor-solace.service failed because the control process exited with error code.
Notice: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[service_podman_solace-st-monitor-solace]/returns: See "systemctl --user status podman-st-monitor-solace.service" and "journalctl --user -xe" for details.
Error: 'systemctl --user  enable podman-st-monitor-solace.service
systemctl --user  start podman-st-monitor-solace.service
' returned 1 instead of one of [0]
Error: /Stage[main]/Podman/Podman::Container[st-monitor-solace]/Exec[service_podman_solace-st-monitor-solace]/returns: change from 'notrun' to ['0'] failed: 'systemctl --user  enable podman-st-monitor-solace.service
systemctl --user  start podman-st-monitor-solace.service
' returned 1 instead of one of [0]
Francommit commented 3 years ago

After deploying on our actual Red Hat servers with PE running (not my hacked localhost version), it's all working as intended. Thanks again for the great work.