Closed: rschmied closed this issue 8 months ago.
Addressed / partially fixed by PR #12, which uses the hashicorp/cloudinit provider to gzip-compress the cloud-init content. In addition, the individual files are no longer Base64-encoded and then re-encoded as a whole, which pushes the limit out a lot farther.
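For reference, the relevant provider pattern looks roughly like the sketch below (file names and part contents are illustrative, not the actual module code):

```hcl
terraform {
  required_providers {
    cloudinit = {
      source = "hashicorp/cloudinit"
    }
  }
}

# Renders a multi-part cloud-init document that is gzip-compressed and
# Base64-encoded exactly once, instead of Base64-encoding every file
# individually and then re-encoding the combined result.
data "cloudinit_config" "user_data" {
  gzip          = true
  base64_encode = true

  part {
    content_type = "text/cloud-config"
    content      = file("${path.module}/data/cloud-config.yaml") # illustrative path
  }

  part {
    content_type = "text/x-shellscript"
    filename     = "setup.sh" # illustrative name
    content      = file("${path.module}/data/setup.sh")
  }
}

# data.cloudinit_config.user_data.rendered is then passed to e.g.
# aws_instance.user_data_base64.
```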
Problem description
AWS EC2 instances have a 16 KB user data limit. This limit is easily reached in our case, where quite a few scripts and configuration files / data are all injected into the EC2 instance via cloud-init.
The approach taken with the previous version was apparently more efficient in terms of user data size. With the refactor done to accommodate Azure (which, by the way, does not have this problem; its user data limit, if one exists at all, is way beyond 16 KB), the problem has become more acute.
Running on bare metal with no additional customization scripts seems to work OK (i.e. len(userdata) < 16 KB). However, adding just the "disable VMX" script, which is used for testing on non-bare-metal instances, pushes the user data over the threshold.
Workaround
A potential workaround for the moment is to remove all comments from the scripts in modules/deploy/data. Every line in the shell scripts with a leading "#" (apart from the shebang line) is a comment and can be removed to get the overall size below 16 KB. This has to be done manually after cloning the repository; see the sketch below.
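Something along these lines could automate that step (a minimal sketch; it assumes GNU sed and that all affected scripts live under modules/deploy/data with a .sh extension):

```sh
# Delete comment-only lines from line 2 onward, preserving the shebang on
# line 1. Inline trailing comments are left untouched.
find modules/deploy/data -name '*.sh' \
  -exec sed -i '2,${/^[[:space:]]*#/d}' {} +
```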
Potential solution

A potential solution is to use S3 storage and copy all the scripts into a bucket as a ZIP file. The user data would then simply copy that configuration ZIP back to e.g. /tmp, unzip it, and run an "entry point" script from there.
However, this gets more complicated when the approach has to work across different clouds, since the permissions and the mechanics for up-/downloading files from cloud storage may differ. Maybe a per-cloud approach is needed? For the AWS case, it could look like the sketch below.
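A hypothetical sketch of the AWS mechanics (the bucket resource, the entry-point name, and the required instance-profile permission are assumptions, not existing module code):

```hcl
# Zip up the scripts directory (uses the hashicorp/archive provider).
data "archive_file" "scripts" {
  type        = "zip"
  source_dir  = "${path.module}/data"
  output_path = "${path.module}/scripts.zip"
}

# Stage the ZIP in a bucket; "aws_s3_bucket.config" is a hypothetical bucket
# resource defined elsewhere.
resource "aws_s3_object" "scripts" {
  bucket = aws_s3_bucket.config.id
  key    = "scripts.zip"
  source = data.archive_file.scripts.output_path
  etag   = data.archive_file.scripts.output_md5
}

# The user data shrinks to a short bootstrap that pulls and runs the scripts.
# The instance needs an instance profile allowing s3:GetObject on the bucket.
locals {
  user_data = <<-EOT
    #!/bin/bash
    aws s3 cp s3://${aws_s3_bucket.config.id}/scripts.zip /tmp/scripts.zip
    unzip -o /tmp/scripts.zip -d /tmp/scripts
    bash /tmp/scripts/entrypoint.sh  # hypothetical entry point
  EOT
}
```

The trade-off is exactly the one noted above: the bootstrap itself is tiny and cloud-agnostic, but the staging resources and the instance permissions have to be implemented per cloud.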