TJM opened this issue 6 years ago (status: Open)
I ran into something similar today, and the cause was a missing /etc/yum/vars/awsregion file.
Ah, it does have something to do with AWS? These settings are AWS defaults for RHEL. I know this may sound crazy, but I have RHEL7 VMs that are not in AWS :p
I know this may sound crazy, but RHEL-on-AWS should detect as a different OS from RHEL-not-on-AWS.
(winky-grin)
Agreed! I am a bit curious... wouldn't a variable need to be formatted like $awsregion (perhaps with curly braces)? I am certain that, with all the available AWS facts, we ought to be able to figure something out. Unfortunately, I am currently working with a customer that cannot use AWS, so I will have to bow out. For now, I just set yum::manage_os_default_repos: false for os->RedHat, and we'll have to create the entries manually.
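In Hiera data, that workaround might look something like the sketch below. It assumes the module's yum::repos / yum::managed_repos parameters (check the module reference for your version); the repo name, URLs, and GPG key path are placeholders:

---
# Stop the module from laying down the AWS-flavoured OS default repos.
yum::manage_os_default_repos: false

# Manually declared replacement repo (name and URLs are placeholders).
yum::managed_repos:
  - 'internal-rhel-7-base'

yum::repos:
  internal-rhel-7-base:
    descr: 'Internal RHEL 7 base mirror'
    baseurl: 'https://repo.example.com/rhel7/base/$basearch'
    enabled: true
    gpgcheck: true
    gpgkey: 'file:///etc/pki/rpm-gpg/RPM-GPG-KEY-example'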
~tommy
I agree with @TJM, this is a little odd. I'm going to do some digging into it to see why it's happening.
The module hiera config file just uses os.family and os.release* to determine which default repos to include. Obviously trying to figure out if a node is in ec2 in hiera isn't simple, but assuming it is and laying down repos which don't work isn't the right way to go either.
Obviously, trying to manage the default repos on RHEL won't work unless the node is registered through RHSM.
@pillarsdotnet could you share any background on the changes made in #75 and #76?
For me, this was resolved by installing the rh-amazon-rhui-client rpm. Because dnf (yum) itself was broken, I could not install it with "dnf install rh-amazon-rhui-client". I had to go to a working server and run "dnf download rh-amazon-rhui-client", which downloaded the file rh-amazon-rhui-client-4.0.14-1.el8.noarch.rpm. I copied that file to the broken server and ran "rpm -Uvh rh-amazon-rhui-client-4.0.14-1.el8.noarch.rpm" to install it. Then, when I ran "dnf update", it reconfigured itself and began working.
@albatrossflavour
Sorry; at the time I wrote those patches, I was working AWS-only.
But "trying to figure out if a node is in ec2 in hiera" is pretty simple. All you need is a fact that resolves one way for ec2 and a different way for non-ec2. I've written such a fact in a proprietary setting. Here's what the top-level looks like:
# Custom fact distinguishing cloud providers.
# sanity_check_our_own_onprem_servers is an internal helper, not shown here.
Facter.add(:cloud_provider) do
  setcode do
    if Facter.value(:az_metadata)
      'azure'
    elsif Facter.value(:ec2_metadata)
      'aws'
    elsif Facter.value(:gce)
      'gcp'
    elsif sanity_check_our_own_onprem_servers()
      'onprem'
    else
      Facter.warn('Could not determine cloud_provider fact.')
      nil
    end
  end
end
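As a follow-on, a hypothetical fragment of the module's hiera.yaml shows how such a fact could feed a cloud-specific layer ahead of the existing OS-based layers (layer names and paths here are illustrative, not the module's actual hierarchy):

---
version: 5
defaults:
  datadir: 'data'
  data_hash: 'yaml_data'
hierarchy:
  # Cloud-specific defaults, e.g. data/aws-RedHat-7.yaml for EC2 nodes.
  - name: 'Cloud provider, OS family and release'
    path: '%{facts.cloud_provider}-%{facts.os.family}-%{facts.os.release.major}.yaml'
  # Existing OS-based layers keep working for everything else.
  - name: 'OS family and release'
    path: '%{facts.os.family}-%{facts.os.release.major}.yaml'
  - name: 'OS family'
    path: '%{facts.os.family}.yaml'
  - name: 'common'
    path: 'common.yaml'

A node where cloud_provider resolves to 'aws' would then pick up data/aws-RedHat-7.yaml, while every other node falls through to the existing OS-based files.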
Affected Puppet, Ruby, OS and module versions/distributions
How to reproduce (e.g Puppet code you use)
What are you seeing
Errors during yum operations:
What behaviour did you expect instead
I think the defaults provided through module data should work as a general rule. Perhaps, if they are "Amazon"-specific defaults, we should add a layer to the hiera.yaml that detects the Amazon-ness and includes these defaults only then? Or come up with a generic default, and only override the "mirrorlist".
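For the "generic default, override only the mirrorlist" option, here is a hedged sketch of what the data could look like, assuming the module's yum::repos hash is deep-merged across Hiera layers (repo name and URLs are placeholders; $awsregion and $basearch are yum variables expanded on the node, e.g. from /etc/yum/vars/awsregion):

# common.yaml -- generic default for every RHEL 7 node
---
lookup_options:
  yum::repos:
    merge: deep

yum::repos:
  rhel-7-server-rpms:
    descr: 'Red Hat Enterprise Linux 7 Server (RPMs)'
    mirrorlist: 'https://mirrors.example.com/rhel7/server/$basearch/mirrorlist'
    enabled: true
    gpgcheck: true

# aws.yaml -- selected only for EC2 nodes; overrides just the mirrorlist
---
yum::repos:
  rhel-7-server-rpms:
    mirrorlist: 'https://rhui.$awsregion.example.com/pulp/mirror/rhel7/server/$basearch/os'

Which nodes get the aws.yaml layer could then hinge on something like an EC2/cloud-provider fact, rather than baking RHUI URLs into the defaults for everyone.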
Output log
Relevant information included above.
Any additional information you'd like to impart
This relates to PR #75 and Issue #76 ... Perhaps @pillarsdotnet can shed some light on how it is supposed to work.