ambsw-technology opened 4 years ago
P.S. The new infrastructure is demonstrated in the in-progress `cfn-workspaces-provider`, which will eventually bridge the gap between the Directory Service and WorkSpaces resources by registering a directory with WorkSpaces (create/delete implemented) and creating WorkSpaces users (still WIP).
I even refactored the commits so the first is this repo with this PR included and the second (and subsequent) is an implementation on top of it.
When testing my derived template, I realized that the bare error for an incorrect `Custom::` name in `provider.py` wasn't returning a response. This commit resolves that by using the base `Provider` class to respond to this situation in (hopefully) sane ways.
That's how `cfn-certificate-provider` works, and I found the error very confusing: I spent a lot of time trying to figure out what was wrong inside that provider when I actually had a simple typo in the `Custom::` name. PR #1 (already merged) actually changed it to an exception; at the time, it didn't occur to me that I had eliminated the return message to CloudFormation.
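For readers unfamiliar with the protocol: CloudFormation waits on a pre-signed S3 `ResponseURL` until the handler PUTs a status document, so raising an exception instead of sending a `FAILED` response leaves the stack hanging until timeout. A minimal sketch of such a failure document, with placeholder values rather than this library's exact output:

```shell
# Sketch: the JSON body a custom-resource handler PUTs back to CloudFormation
# on failure. StackId/RequestId/LogicalResourceId must be echoed from the
# incoming request; every value below is a placeholder.
cat > /tmp/cfn-response.json <<'EOF'
{
  "Status": "FAILED",
  "Reason": "no handler registered for resource type Custom::Typo",
  "PhysicalResourceId": "could-not-create",
  "StackId": "arn:aws:cloudformation:example-region:111111111111:stack/example/guid",
  "RequestId": "example-request-id",
  "LogicalResourceId": "ExampleResource"
}
EOF
# Delivery is a plain HTTP PUT; the pre-signed URL is not signed for a
# Content-Type header, so an empty one must be sent:
# curl -sS -X PUT -H 'Content-Type:' --data-binary @/tmp/cfn-response.json "$RESPONSE_URL"
python3 -m json.tool /tmp/cfn-response.json >/dev/null && echo response-is-valid-json
```

Without that PUT, the typo'd resource just sits in `CREATE_IN_PROGRESS`, which is why the bare exception was so confusing to debug.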
P.S. I can certainly rip it out if you don't like it, but this works really well:
Thank you for your hard work on the template. My design philosophy is to minimize variation, to improve the reproducibility of the artifacts and reduce the mental strain of software development. So things which do not need to be variable, I prefer to keep fixed, since every variable setting may produce a different result.
The design choice is to hardcode it: I do not want the name of the component to change just because somebody did a different checkout or changed the name of the root directory.
I would prefer a cookiecutter template; check out https://github.com/binxio/cookiecutter-sam-custom-resource-provider
There is no need for a separate `build.sh`; the provider can be compiled and unit tested using pipenv. The zip file you create locally may not have the correct binaries in it. The reason for the `Dockerfile.lambda` is to create a zip file for this.
2a. For `Dockerfile.build`, you need to pass the `PACKAGE` as a build arg so the internal folder is named correctly.
The Dockerfile works for any provider; it is blissfully unaware and just makes a zip for you. I do not see the need to pass in the name. I also saw that the CloudFormation template was parameterized.
I would prefer a cookiecutter template; check out https://github.com/binxio/cookiecutter-sam-custom-resource-provider
ok.
ok.
It is already easy if you really want that: `make AWS_REGION=`
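For what it's worth, make's variable precedence is what makes this work; a minimal sketch using a throwaway makefile (illustrative, not this repo's actual `Makefile`):

```shell
# A ?= assignment is only a default: the environment and the command line
# both override it, and a command-line assignment wins over everything.
printf 'AWS_REGION ?= eu-central-1\nregion:\n\t@echo $(AWS_REGION)\n' > /tmp/demo.mk
# env -u clears any inherited AWS_REGION so the ?= default is visible:
env -u AWS_REGION make -sf /tmp/demo.mk region      # -> eu-central-1 (default)
AWS_REGION=us-east-1 make -sf /tmp/demo.mk region   # -> us-east-1 (environment)
make -sf /tmp/demo.mk AWS_REGION=us-west-2 region   # -> us-west-2 (command line)
```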
Check the review comments.
As you mentioned, your PR is too big; I will not merge the changes to the build system.
I understand the cookiecutter logic, but it looked like it was going to create collisions and confusion inside the AWS deployment process. For example, everything was going to get put at `cfn-custom-resource`: the zip file, the provider stack, the Lambda, and the Log Group. How are you supposed to create a second resource? A global find-and-replace? That's not automation, and if you miss e.g. the Log Group, everything will deploy fine but logs for all your providers will end up in the same Log Group.
You actually use `shell basename` in the current package to name the Docker image. I was inspired to generalize it so a user didn't have to touch a `Makefile` (or figure out how `.release` works, as I did) and still get good deployment defaults.
In my experience, the default behavior for git is to name the folder after the repo name. Once the base package is renamed, you should get a very reliable experience, and a hard fork that gets renamed would automatically namespace itself. A good alternative (in the spirit of #5) would be to force the user to provide a single, explicit value in e.g. a `.package` file, but I found the status quo untenable.
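A sketch of that alternative, assuming a hypothetical one-line `.package` file (neither the file name nor the fallback logic exists in this repo today):

```shell
# Use an explicit, committed .package file when present; otherwise fall
# back to the checkout's folder name (the current shell-basename behavior).
dir=$(mktemp -d)/cfn-demo-provider && mkdir -p "$dir" && cd "$dir"
PACKAGE=$( [ -f .package ] && cat .package || basename "$PWD" )
echo "$PACKAGE"                        # -> cfn-demo-provider (folder fallback)
echo my-renamed-provider > .package    # an explicit, checkout-independent name
PACKAGE=$( [ -f .package ] && cat .package || basename "$PWD" )
echo "$PACKAGE"                        # -> my-renamed-provider (from .package)
```

The appeal is that a rename of the checkout directory no longer silently renames every deployed artifact, while a fork can still claim its own namespace with a one-line commit.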
`Dockerfile.build` was developed as a Windows shim, but it'd be even better if there were a single cross-platform build strategy. I'm pretty sure `build.sh` demonstrates this possibility: run locally without Docker, in `Dockerfile.build`, or used by `Dockerfile.lambda`. In theory, the manual step goes away by wrapping it in e.g. a `build_env.bat` (and/or an equivalent Linux script) that populates this argument automatically. I just wasn't going to invest energy in making it work better if it was going to get rejected.
(this is auto-renumbering 5) If you prefer no default, that's fine. My goal, once again, was to ensure the user did not need to touch the `Makefile` to get well-behaved, package-specific results. If they don't have to do anything to the build files, it would also make it easy/possible to pull/merge upstream fixes/changes to these files.
P.S. And (2) also goes away if the repo uses a `.package` (or similar) file, because the `Dockerfile` would just `ADD` that file and it wouldn't matter what the internal directory structure was.
This PR includes several significant changes/upgrades:

1. `${shell basename ${PWD}}` (i.e. the folder name) is used throughout the system so the Lambda Function, Log Group, etc. are all named correctly.
2. `build.sh`: this file is now used by `Dockerfile.lambda` but can also be called "locally" inside a Docker container built using `Dockerfile.build` (e.g. the build environment on my Windows machine) or presumably your local system. For `Dockerfile.build`, you need to pass the `PACKAGE` as a build arg so the internal folder is named correctly.
3. `cfn-resource-provider`
4. `AWS_REGION` and `S3_BUCKET_PREFIX` can be set in the environment. This makes it easy to change the region and namespace globally before you start running commands.

If you object to any of these changes, I can revert them. I'll update the `README` to reflect any of the changes that are acceptable. I'm sorry for the mega PR, but this was going to be very messy to break out into non-conflicting commits.
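As a footnote on the naming change: a runnable sketch of what `${shell basename ${PWD}}` evaluates to inside make (the makefile below is a toy, not this PR's actual `Makefile`):

```shell
# In GNU make, ${shell basename ${PWD}} expands to the current checkout's
# folder name, so every derived name (Lambda, Log Group, zip) tracks it.
dir=$(mktemp -d)/cfn-workspaces-provider && mkdir -p "$dir" && cd "$dir"
printf 'NAME := ${shell basename ${PWD}}\nname:\n\t@echo $(NAME)\n' > Makefile
make -s name    # -> cfn-workspaces-provider
```

Renaming the checkout directory and re-running `make` would change `NAME` accordingly, which is exactly the behavior debated above.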