@eladb any feedback on this? do you think this is of interest for the CDK?
An example of a DnsWrapper could be:
```ts
import * as cdk from '@aws-cdk/core';
import * as ec2 from '@aws-cdk/aws-ec2';
import * as servicediscovery from '@aws-cdk/aws-servicediscovery';

// Props inferred from the usage below
interface DnsWrapperProps {
  vpc: ec2.IVpc;
  domainName: string;
  namespace?: servicediscovery.PrivateDnsNamespace;
}

class DnsWrapper extends cdk.Construct {
  public readonly domainName: string;

  constructor(scope: cdk.Construct, id: string, props: DnsWrapperProps) {
    super(scope, id);

    // Reuse the provided (or previously created) namespace, otherwise create one
    const existingNamespace = props.namespace
      || this.node.tryFindChild('Namespace') as servicediscovery.PrivateDnsNamespace;
    const namespace = existingNamespace || new servicediscovery.PrivateDnsNamespace(this, 'Namespace', {
      name: 'cdk.app',
      vpc: props.vpc,
    });

    // Register the wrapped address as a CNAME instance in a service discovery service
    const service = new servicediscovery.Service(this, 'Service', {
      dnsRecordType: servicediscovery.DnsRecordType.CNAME,
      namespace,
    });
    service.registerCnameInstance('Resource', {
      instanceCname: props.domainName,
    });

    this.domainName = `${service.serviceName}.${namespace.namespaceName}`;
  }
}
```
which allows exporting a domain name without coupling the stacks to the resource that creates it (a database, cluster, or load balancer).
I've implemented something similar for some use cases of mine that involve passing Lambda version ARNs between stacks; those obviously change frequently and therefore don't work with the current export/import method used for cross-stack references. This may just be bias on my part because, for my use cases, almost all of my cross-stack references are things that may change, but I'd almost prefer the default mechanism for cross-stack references to be SSM parameters rather than export/import.
Any news on this? I am running into issues with this regarding sharing load balancer attributes across stacks.
This only makes sense if the loosely coupled resource has DeletionPolicy: Retain; otherwise this will create downtime or even prevent the origin stack from being updated. For example, you cannot delete a VPC without deleting all of its subnets, so even if the coupling for the VPC ID was loose, the VPC cannot be replaced.
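For illustration, a resource can be orphaned on replacement or removal via its removal policy. A minimal sketch, assuming an existing rds.DatabaseInstance named db:

```ts
import * as cdk from '@aws-cdk/core';
import * as rds from '@aws-cdk/aws-rds';

declare const db: rds.DatabaseInstance; // the loosely referenced resource

// RemovalPolicy.RETAIN renders as DeletionPolicy/UpdateReplacePolicy: Retain,
// so the old instance is orphaned instead of deleted when it is replaced or removed.
db.applyRemovalPolicy(cdk.RemovalPolicy.RETAIN);
```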
The solution is to use nested stacks, which have been supported in the CDK since 1.12.0. In the VPC example, with StackA having the VPC and StackB having a Subnet for that VPC, CloudFormation will happily create a new VPC in StackA, create a new Subnet in StackB, then delete the old Subnet in StackB, and then delete the old VPC in StackA, making the resource replacement work as intended and without downtime.
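A sketch of that layout (construct names are illustrative); both nested stacks live under one parent stack, so CloudFormation can order the replacement across them in a single deployment:

```ts
import * as cdk from '@aws-cdk/core';
import * as ec2 from '@aws-cdk/aws-ec2';

class ParentStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // StackA owns the VPC
    const stackA = new cdk.NestedStack(this, 'StackA');
    const vpc = new ec2.Vpc(stackA, 'Vpc', { subnetConfiguration: [] });

    // StackB owns a Subnet that references the VPC; the reference stays
    // inside one CloudFormation deployment instead of an export/import
    const stackB = new cdk.NestedStack(this, 'StackB');
    new ec2.Subnet(stackB, 'Subnet', {
      vpcId: vpc.vpcId,
      availabilityZone: vpc.availabilityZones[0],
      cidrBlock: '10.0.0.0/24',
    });
  }
}
```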
This is a very interesting idea. I am transferring this to the RFC repo. Please follow the RFC Repo README in order to submit this as an RFC.
@nakedible Are you talking about a hypothetical feature or the current behavior? Doesn't the VPC ID output from VpcStack (required by SubnetStack) prevent VpcStack from updating?
This can be implemented directly in the CDK's core module.
I've opened a PR implementing this here - https://github.com/aws/aws-cdk/pull/8891
@nija-at following up on your PR and my comment that adding this to construct-compat won't work.
Can we add this as a feature of Stack (i.e. in StackProps)? Feels like the right place...
I am not sure it makes sense that any scope will have that setting because it’s very specific to the relationship between stacks.
@eladb - I suppose this implies that the user will configure this value?
While this does control the relationship between stacks, it is dependent on the resource being referenced. It seems appropriate for the resource to mark itself as strong/weak. This also implies that not all resources exported from the same stack (and later imported) should be treated the same.
It might also be too late (hence, too painful) when the end user realizes that this property should be set.
The realization will likely happen when the stack update for this resource fails. At this point, the user will have to move through a set of transitions to migrate their stacks from using Fn::ImportValue to SSM, because of the update/delete limitations that CloudFormation applies.
How about moving to a model where Construct in constructs contains an opaque key-value pair, while Construct in the CDK is a stateless proxy/facade to it (not a subclass), and give it stronger typing?
@nija-at before we dive into the implementation, can we start by enumerating a few use cases and thinking about the mental model of the user of this feature? Who should be able to make a decision about whether a reference is strong or weak?
It seems to me like this is something that users should be able to configure when they decide how to lay out their stacks (as opposed to being specified at the construct level and vended through a library).
I can see how it makes sense for some references to be defined as weak and some as strong, but I am wondering what would be reasonable ergonomics to indicate at the stack level which references should be what.
Some related issues for reference: https://github.com/aws/aws-cdk/issues/1972 https://github.com/aws/aws-cdk/issues/22842 https://github.com/aws/aws-cdk/discussions/23839?converting=1
As with https://github.com/aws/aws-cdk/issues/1972, we have created a LambdaLayer in LambdaLayerStack, which is then passed as a reference (we use TypeScript) to LambdaStack, where it's added to the layers prop of the NodejsFunction for the functions.
The functions are then exported and used in the APIGatewayStack for the HttpLambdaIntegration.
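A compact sketch of that setup (the stack variables, asset path, and entry point are illustrative assumptions):

```ts
import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';
import { NodejsFunction } from '@aws-cdk/aws-lambda-nodejs';

declare const lambdaLayerStack: cdk.Stack;
declare const lambdaStack: cdk.Stack;

// LambdaLayerStack: a new AWS::Lambda::LayerVersion is created whenever the content changes
const layer = new lambda.LayerVersion(lambdaLayerStack, 'SharedLayer', {
  code: lambda.Code.fromAsset('layer'),
  compatibleRuntimes: [lambda.Runtime.NODEJS_14_X],
});

// LambdaStack: consuming the layer creates an Fn::ImportValue on the layer version ARN,
// which is what blocks replacing the LayerVersion later
new NodejsFunction(lambdaStack, 'Handler', {
  entry: 'src/handler.ts',
  layers: [layer],
});
```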
The other use case is Lambda@Edge, where we export the experimental.EdgeFunction from the EdgeStack and then import it in the CloudFrontStack to be added to the edgeLambdas in the defaultBehavior.
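And a sketch of the Lambda@Edge setup (again, stack variables, asset paths, and the origin are illustrative):

```ts
import * as cdk from '@aws-cdk/core';
import * as cloudfront from '@aws-cdk/aws-cloudfront';
import * as origins from '@aws-cdk/aws-cloudfront-origins';
import * as lambda from '@aws-cdk/aws-lambda';

declare const edgeStack: cdk.Stack;
declare const cloudFrontStack: cdk.Stack;

// EdgeStack: a new version of the function is published on every code change
const edgeFn = new cloudfront.experimental.EdgeFunction(edgeStack, 'Edge', {
  runtime: lambda.Runtime.NODEJS_14_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('edge'),
});

// CloudFrontStack: the function version is referenced in the default behavior
new cloudfront.Distribution(cloudFrontStack, 'Distribution', {
  defaultBehavior: {
    origin: new origins.HttpOrigin('origin.example.com'),
    edgeLambdas: [{
      functionVersion: edgeFn.currentVersion,
      eventType: cloudfront.LambdaEdgeEventType.VIEWER_REQUEST,
    }],
  },
});
```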
My simple understanding was that when you change and build a new EdgeFunction, the reference in the Behavior would just get the updated version and everything would be fine. The function should be deployed as a new version and that version should be referenced there.
The same goes for layers. All the Lambdas using the layer would just get the new version updated under Layers.
We are also using crossRegionReferences since Edge functions have to be located in us-east-1 and we're located in eu-west-1.
I'm interested in this, too.
In our case, we are using SSM parameters to pass references between stacks by default. We are aware that the relationships are not enforced this way, but we a) deploy all service stacks every time and b) refresh values in Lambda functions every few minutes anyway. Using SSM instead of CloudFormation outputs eliminates problems like the ones mentioned above and also gives more flexibility when we want to delete stacks (in dev environments) or quickly re-deploy the whole environment.
Right now, in the CDK, in one stack we create an SSM parameter, and in another we read it with StringParameter.valueForStringParameter(), using a naming convention. Having the CDK handle this and reference it through simple stack variables would be great.
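A sketch of that pattern (the parameter name, load balancer, and stack variables are illustrative):

```ts
import * as cdk from '@aws-cdk/core';
import * as elbv2 from '@aws-cdk/aws-elasticloadbalancingv2';
import * as ssm from '@aws-cdk/aws-ssm';

declare const producingStack: cdk.Stack;
declare const consumingStack: cdk.Stack;
declare const alb: elbv2.ApplicationLoadBalancer; // defined in producingStack

// Producing stack: publish the value under an agreed-upon name
new ssm.StringParameter(producingStack, 'AlbDnsParam', {
  parameterName: '/shared/alb/dns-name',
  stringValue: alb.loadBalancerDnsName,
});

// Consuming stack: read it back by the same name; no Fn::ImportValue is created
const albDnsName = ssm.StringParameter.valueForStringParameter(
  consumingStack,
  '/shared/alb/dns-name',
);
```

As noted, the relationship is not enforced: the consuming stack only picks up a changed value on its next deployment.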
Partially implemented by https://github.com/aws/aws-cdk/pull/22008
Won't fix. The feature's drawbacks are too great to warrant the cost. Please see if https://github.com/aws/aws-cdk/pull/22008 covers the main use case.
Description
For some CloudFormation resources, an update sometimes requires a replacement. Examples of such resources are AWS::RDS::DBInstance, AWS::DynamoDB::Table or AWS::Lambda::LayerVersion. Using those resources in a cross-stack fashion results in Fn::ImportValues that will prevent those updates (lock situation).

The CDK could offer a way to loosely couple stacks by wrapping values in an SSM parameter in the producing stack and unwrapping it in the consuming stack.
Proposed Solution
Add support for "weak references" which materialize through SSM parameters instead of CloudFormation exports/imports.
Things to consider:

- Updated values must be picked up by the consuming stack, both with cdk deploy and through CI/CD. One way to achieve this is to somehow "salt" the reference name so that the SSM parameter will be immutable and a new parameter will be created every time the value changes. This will invalidate the consuming stack since the SSM parameter name will be different.

Prototype
Create a Wrapper class:

Usage:
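A minimal sketch of what such a Wrapper class and its usage could look like (the props, salting scheme, and StackA/StackB variables are illustrative assumptions, not the exact prototype):

```ts
import * as cdk from '@aws-cdk/core';
import * as ssm from '@aws-cdk/aws-ssm';
import * as crypto from 'crypto';

interface WrapperProps {
  /** Name prefix for the SSM parameter */
  name: string;
  /** The value to transport to other stacks */
  value: string;
  /**
   * Optional explicit salt. If the value is a CDK token (not resolvable at
   * synth time), pass something that changes whenever the value changes.
   */
  salt?: string;
}

// Wraps a value in an SSM parameter whose name is salted, so that a changed
// value produces a new parameter and invalidates the consuming stack.
class Wrapper extends cdk.Construct {
  public readonly parameterName: string;

  constructor(scope: cdk.Construct, id: string, props: WrapperProps) {
    super(scope, id);

    const salt = props.salt
      ?? crypto.createHash('sha256').update(props.value).digest('hex').slice(0, 8);
    this.parameterName = `/wrapped/${props.name}/${salt}`;

    new ssm.StringParameter(this, 'Parameter', {
      parameterName: this.parameterName,
      stringValue: props.value,
    });
  }

  // Resolve the value in the consuming stack without creating an Fn::ImportValue
  public unwrap(scope: cdk.Construct): string {
    return ssm.StringParameter.valueForStringParameter(scope, this.parameterName);
  }
}
```

And the usage across two stacks:

```ts
import * as cdk from '@aws-cdk/core';
import * as rds from '@aws-cdk/aws-rds';

declare const stackA: cdk.Stack;
declare const stackB: cdk.Stack;
declare const db: rds.DatabaseInstance;   // lives in stackA
declare const snapshotIdentifier: string;

// StackA (producing stack): wrap the database address
const wrapped = new Wrapper(stackA, 'DbAddress', {
  name: 'db-address',
  value: db.dbInstanceEndpointAddress,
  salt: snapshotIdentifier, // changes when the database is recovered from a snapshot
});

// StackB (consuming stack): unwrap it; only a parameter name is shared
const address = wrapped.unwrap(stackB);

// Ensure the parameter exists before StackB reads it
stackB.addDependency(stackA);
```

The salt makes the parameter name change together with the value, which is what invalidates the consuming stack, as noted under "Things to consider".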
Now we are able to update the snapshotIdentifier prop in StackA (e.g. recover from a snapshot after an incident). There will be no downtime because the DatabaseInstanceFromSnapshot instances are retained (orphaned).

Another solution here, since we are "transporting" an address, would have been to wrap it in a CNAME DNS record in a private hosted zone using Service Discovery...