Open ghost opened 5 years ago
Hi @steven-cuthill-otm sorry for the delay, I am traveling and have limited internet connectivity.
Are you able to run the playbooks with verbose (-vv or -vvv) and check the task execution log? First thing I would check is which task ran last before giving you the error message.
Hello, I had the same issue with an Amazon ECS-optimized Amazon Linux 2 AMI. Did you find anything? My exclusions are: cis_level_1_exclusions:
Couldn't solve the issue, so I've just moved back to Amazon Linux v1 for now until there is better support for the OS in these playbooks.
On Sat, 16 Feb 2019 at 09:46, OlivierGaillard notifications@github.com wrote:
Hello, I had the same issue with an Amazon ECS-optimized Amazon Linux 2 AMI. Did you find something? My exlusions are: cis_level_1_exclusions:
- 5.4.4
- 3.4.2
- 3.4.3
- 6.2.13
- 1.1.18
I think I hit a very similar issue today. While testing locally with Vagrant, the vagrant account got locked. The culprit seems to be 5.4.1.4 (Ensure inactive password lock is 30 days or less), which locks any account whose password expired more than 30 days before its last required change date.
Example
[vagrant@localhost ~]$ sudo cut -f 1 -d: /etc/passwd | xargs -n 1 -I {} bash -c " echo -e '\n{}' ; sudo chage -l {}"
root
Last password change : Apr 05, 2017
Password expires : Jul 04, 2017
Password inactive : Aug 03, 2017
Account expires : never
Minimum number of days between password change : 7
Maximum number of days between password change : 90
Number of days of warning before password expires : 7
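To make the mechanism concrete, here is a small sketch of my own (not from the role) of how the INACTIVE setting interacts with the last password change under shadow(5) semantics: the account becomes unusable once the password has been expired for more than INACTIVE days, which is what 5.4.1.4's 30-day limit triggers on long-lived accounts like the root entry above.

```shell
#!/bin/sh
# Sketch: decide whether an account would be considered "inactive"
# (and therefore locked), given the days since the last password
# change, PASS_MAX_DAYS, and the INACTIVE value that 5.4.1.4 caps
# at 30. Simplified: ignores the min/warn fields.
is_inactive_locked() {
  days_since_change=$1; max_days=$2; inactive=$3
  # Password expires after max_days; the account is disabled once it
  # has been expired for more than $inactive further days.
  if [ "$days_since_change" -gt $((max_days + inactive)) ]; then
    echo locked
  else
    echo active
  fi
}

is_inactive_locked 700 90 30   # root above: years since last change -> locked
is_inactive_locked 10 90 30    # freshly changed password -> active
```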
@steven-cuthill-otm, @OlivierGaillard could you please test with 5.4.1.1 through 5.4.1.4 excluded and let us know if the issue still persists.
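For reference, that exclusion could be expressed through the same cis_level_1_exclusions variable already used earlier in the thread; a sketch (the exact rule IDs here are taken from the suggestion above, not verified against the role's task list):

```yaml
# Hypothetical vars file: skip the password-expiration rules while testing.
cis_level_1_exclusions:
  - 5.4.1.1
  - 5.4.1.2
  - 5.4.1.3
  - 5.4.1.4
```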
@chandanchowdhury I found the problem to be with 1.1.11, 1.1.12, or 1.1.13
any update on this?
@nebffa I can't find any reason why creating a separate partition would lock the root account; maybe I am missing something. It would be great if you could provide some explanation.
I am facing this issue whenever I add the filesystem entry to /etc/fstab to make the mount permanent.
Is there any more news on this?
I'm using "EC2 Image Builder" to build an image. I've enabled the stig-build-linux-high/2.8.0 component (which basically does the CIS hardening on the image) as well as some of my own.
I do not have any individual partitions, nor do I have any account lockout setup.
$ sudo cut -f 1 -d: /etc/passwd | xargs -n 1 -I {} bash -c " echo -e '\n{}' ; sudo chage -l {}" | egrep 'Password expires|Account expires' | sort | uniq
Account expires : never
Password expires : never
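To also check which accounts have an inactivity period set at all, the 7th field of /etc/shadow can be inspected directly. A sketch against sample shadow-format lines (so it runs without root; on a real system you would feed it /etc/shadow):

```shell
# Sketch: print accounts whose INACTIVE field (7th colon-separated
# field of /etc/shadow) is non-empty. The sample lines below mimic
# shadow(5) format and are illustrative only.
printf '%s\n' \
  'root:*:17261:7:90:7:30::' \
  'ec2-user:!!:18000:0:99999:7:::' \
| awk -F: '$7 != "" {print $1 " inactive=" $7}'
# -> root inactive=30
```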
The squashfs and cramfs filesystems are disabled, though; not sure if those are used in AL2.
Looking at the log, I see (i-031475ad9b57ccb8b.log):
[ 3.498635] systemd-fstab-generator[1370]: Checking was requested for "fs-78e9acb2.efs.eu-west-1.amazonaws.com:/ctc/CAASQA", but it is not a device.
[ 3.505792] systemd-fstab-generator[1370]: Checking was requested for "fs-7be9acb1.efs.eu-west-1.amazonaws.com:/CAASQA", but it is not a device.
[ OK ] Reached target Local File Systems (Pre).
Mounting /var/tmp...
Mounting /mnt/CAASQA-dev...
Mounting /ctc...
[ 5.416268] XFS (nvme1n1): Mounting V5 Filesystem
Mounting /mnt/downloads...
[ OK ] Mounted /var/tmp.
[FAILED] Failed to mount /mnt/downloads.
See 'systemctl status mnt-downloads.mount' for details.
[DEPEND] Dependency failed for Local File Systems.
[DEPEND] Dependency failed for Mark the need to relabel after reboot.
[DEPEND] Dependency failed for Migrate local... structure to the new structure.
[DEPEND] Dependency failed for Relabel all filesystems, if necessary.
[ 7.085096] kauditd_printk_skb: 57 callbacks suppressed
[ 7.085097] audit: type=1130 audit(1606351139.792:66): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=emergency comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[ 7.085098] audit: audit_lost=2 audit_rate_limit=0 audit_backlog_limit=64
[ 7.085099] audit: kauditd hold queue overflow
[FAILED] Failed to mount /mnt/CAASQA-dev.
See 'systemctl status "mnt-CAASQA\\x2ddev.mount"' for details.
(the /mnt mounts are EFS filesystems).
Also:
[ 6.842673] hibinit-agent[2103]: Traceback (most recent call last):
[ 6.843649] hibinit-agent[2103]: File "/usr/bin/hibinit-agent", line 496, in <module>
[ 6.844861] hibinit-agent[2103]: main()
[ 6.846246] hibinit-agent[2103]: File "/usr/bin/hibinit-agent", line 435, in main
[ 6.847129] hibinit-agent[2103]: if not hibernation_enabled(config.state_dir):
[ 6.850515] hibinit-agent[2103]: File "/usr/bin/hibinit-agent", line 390, in hibernation_enabled
[ 6.851999] hibinit-agent[2103]: imds_token = get_imds_token()
[ 6.852788] hibinit-agent[2103]: File "/usr/bin/hibinit-agent", line 365, in get_imds_token
[ 6.854183] hibinit-agent[2103]: response = requests.put(token_url, headers=request_header)
[ 6.856457] hibinit-agent[2103]: File "/usr/lib/python2.7/site-packages/requests/api.py", line 121, in put
[ 6.861368] hibinit-agent[2103]: return request('put', url, data=data, **kwargs)
[ 6.862811] hibinit-agent[2103]: File "/usr/lib/python2.7/site-packages/requests/api.py", line 50, in request
[ 6.865533] hibinit-agent[2103]: response = session.request(method=method, url=url, **kwargs)
[ 6.867243] hibinit-agent[2103]: File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 486, in request
[ 6.870419] hibinit-agent[2103]: resp = self.send(prep, **send_kwargs)
[ 6.871850] hibinit-agent[2103]: File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 598, in send
[ 6.872071] hibinit-agent[2103]: r = adapter.send(request, **kwargs)
[FAILED] Failed to start Initial hibernation setup job.
See 'systemctl status hibinit-agent.service' for details.
[ 6.873398] hibinit-agent[2103]: File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 419, in send
[ 6.875001] hibinit-agent[2103]: raise ConnectTimeout(e, request=request)
[ 6.878100] hibinit-agent[2103]: requests.exceptions.ConnectTimeout: HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fabe4b6f150>: Failed to establish a new connection: [Errno 101] Network is unreachable',))
but the most concerning thing is:
[ 8.169688] cloud-init[2109]: Cloud-init v. 19.3-3.amzn2 running 'init' at Thu, 26 Nov 2020 00:39:01 +0000. Up 8.11 seconds.
[ 8.215431] cloud-init[2109]: ci-info: +++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++
[ 8.215696] cloud-init[2109]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+
[ 8.219957] cloud-init[2109]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address |
[ 8.221401] cloud-init[2109]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+
[ 8.226866] cloud-init[2109]: ci-info: | eth0 | False | . | . | . | 0a:17:aa:20:f6:43 |
[ 8.228287] cloud-init[2109]: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . |
[ 8.231517] cloud-init[2109]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+
[ 8.232958] cloud-init[2109]: ci-info:
For some reason, it can't get an IP address! Not sure how that's possible; it works the first time it boots but not after a reboot, so it shouldn't be a problem with the iptables hardening I've done.
Logfiles from the first (successful) and the second (failed) boot:
Just an update before I take the weekend: it's SOMETHING (!!) to do with the mounts. Not sure which one yet, but after commenting out all but the root fs, it rebooted just fine. It also rebooted fine with an additional disk mounted.
I'm thinking it's something with the EFS filesystem:
UUID=15c7809d-e6e3-4062-a5eb-afeb1939fc6e / xfs defaults,noatime 1 1
tmpfs /dev/shm tmpfs defaults,noexec,nodev,nosuid 0 0
tmpfs /var/tmp tmpfs defaults,nodev,nosuid 0 0
/dev/nvme1n1 /ctc xfs relatime,nofail 0 0
fs-78e9acb2.efs.eu-west-1.amazonaws.com:/ctc/CAASQA /mnt/CAASQA-dev efs defaults,vers=4.1,tls 0 2
fs-7be9acb1.efs.eu-west-1.amazonaws.com:/CAASQA /mnt/downloads efs defaults,vers=4.1,tls 0 2
I'm just going to test without EFS but with one of the tmpfs mounts enabled at a time, to triple-check. And probably without TLS for EFS.
It seems the Amazon Linux 2 AMI I use as a base mounts EFS/NFS filesystems too early in the boot process, messing up the rest of it. So the solution was simple: make sure network mounts are mounted later by adding the _netdev option to the two EFS mount entries.
This is actually documented on the https://docs.aws.amazon.com/efs/latest/ug/efs-mount-helper.html page.
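Applied to the fstab above, the two EFS lines would become something like the following (my sketch; I've also dropped the fsck pass field to 0, since a network path is not a device that can be fsck'd, which matches the "Checking was requested ... but it is not a device" warnings in the boot log):

```
fs-78e9acb2.efs.eu-west-1.amazonaws.com:/ctc/CAASQA /mnt/CAASQA-dev efs _netdev,defaults,vers=4.1,tls 0 0
fs-7be9acb1.efs.eu-west-1.amazonaws.com:/CAASQA /mnt/downloads efs _netdev,defaults,vers=4.1,tls 0 0
```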
Whether that helps anyone else, I don't know, but it was the cause of my "root account is locked" problems.
I think for Amazon Linux 2, rules 1.1.2 through 1.1.14 cause this. In my case, excluding those rules made it work.
Having the same issue with Amazon Linux 2. Have there been any updates on this?
How did you solve this? I can't connect on my instance (1/2 Status Checked) and I did try to edit etc/fstab for a consistent mount. I think we have the same problem. Did you create a new instance?
Did you find a solution?
Hello,
I've been doing some testing on Amazon Linux 2 LTS and have come across an issue that stops the image from booting. It looks like the root account is getting disabled, which stops the init process from finishing to the point where we can't connect, so it's a little hard to get any more logging info. From the 'Get system log' option in EC2 I managed to pull the following:
Cannot open access to console, the root account is locked.
See sulogin(8) man page for more details.
`Press Enter to continue.`
Not sure what could be the cause. Are there any tasks that could be the root cause of this? For info, I have the following exclusions