Closed by rpbarnes 1 year ago
+1
+1
+1
+1337
+1
Minor improvement to the snippet above: instead of declaring a CustomInstanceType and casting it via unknown, you can simply use the InstanceType constructor to specify the name of the instance type:
new DatabaseCluster(this, "MyCluster", {
...
instanceProps: {
instanceType: new InstanceType("serverless"),
...
},
});
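For completeness, a minimal self-contained sketch of that approach (engine version and construct names here are placeholders; on its own this only sets the instance class, so the EngineMode override and scaling-configuration custom resource shown in later comments are still needed):
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import { aws_ec2 as ec2, aws_rds as rds } from "aws-cdk-lib";

// Sketch: pass "serverless" straight to the InstanceType constructor.
export class ServerlessInstanceTypeSketch extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const vpc = new ec2.Vpc(this, "Vpc", { maxAzs: 2 });

    new rds.DatabaseCluster(this, "MyCluster", {
      engine: rds.DatabaseClusterEngine.auroraMysql({
        version: rds.AuroraMysqlEngineVersion.VER_3_02_0,
      }),
      instances: 1,
      instanceProps: {
        vpc,
        // Renders as DBInstanceClass: db.serverless in the synthesized template
        instanceType: new ec2.InstanceType("serverless"),
      },
    });
    // On its own this only sets the instance class; the EngineMode override and
    // the ServerlessV2ScalingConfiguration custom resource (see later comments)
    // are still required for the cluster to actually scale as Serverless v2.
  }
}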
I turned it into this ServerlessV2Cluster construct for now:
+1
I had the same error in the console (restore point in time):
+1
+1
@thijsdaniels I am testing that code and getting this error:
1:56:55 PM | CREATE_FAILED | AWS::EC2::EIP | VpcTest2PublicSubnet2EIPXXXXX The maximum number of addresses has been reached. (Service: AmazonEC2; Status Code: 400; Error Code: AddressLimitExceeded; Request ID: ade29ee4-640e-4ff9-872d-XXXXXXXX; Proxy: null)
But I am using a private subnet, so I don't understand why this error appears
@wakusoftware I had a similar issue. There is a soft limit with AWS on the total number of Elastic IPs you can have in your account. You can request an increase in this limit using the Service Quotas area of the AWS console. I raised my limit from 5 to 8 and was then able to get the code to work.
@rohalla2 thanks, but I have 4 EIPs; I have not reached the quota, and the error still persists.
@wakusoftware If you have 4 already, you will need to request an increase. The code above created 3 EIPs for me, so you will need a new limit of at least 7.
@rohalla2 why does it create 3 EIPs?
@rohalla2 why does it create 3 EIPs?
By default the Vpc construct creates public and private subnets in 3 AZs, with one NAT gateway per AZ, and each NAT gateway gets its own EIP.
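If you are hitting the address limit, a possible workaround (assuming you do not need a NAT gateway in every AZ) is to cap how many the VPC creates; a minimal sketch:
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import { aws_ec2 as ec2 } from "aws-cdk-lib";

// Sketch: cap the number of NAT gateways so the VPC allocates fewer EIPs.
export class LowEipVpcStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    new ec2.Vpc(this, "Vpc", {
      maxAzs: 2,      // fewer AZs means fewer subnets
      natGateways: 1, // one shared NAT gateway, so only one EIP
      // natGateways: 0 also works if the database subnets are PRIVATE_ISOLATED
    });
  }
}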
@rohalla2 understood, thanks
+1
+1
+1
+1
+1
+1
+1
+1
@rpbarnes Can you edit your original post to make it clear that people should "like" it to show their interest instead of leaving +1 comments?
+1
An example for aurora-mysql.
Related: #20632
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import { aws_rds as rds } from "aws-cdk-lib";
import { aws_iam as iam } from "aws-cdk-lib";
import { aws_ec2 as ec2 } from "aws-cdk-lib";
import { custom_resources as cr } from "aws-cdk-lib";
import * as cdk from "aws-cdk-lib";
export class CdkAuroraSeverlessv2Stack extends Stack {
constructor(scope: Construct, id: string, props?: StackProps) {
super(scope, id, props);
// Note: this CustomInstanceType union is not actually used below; Instance1 keeps the
// default (provisioned) instance type, and a separate db.serverless instance is added later.
enum ServerlessInstanceType {
SERVERLESS = "serverless",
}
type CustomInstanceType = ServerlessInstanceType | ec2.InstanceType;
const CustomInstanceType = {
...ServerlessInstanceType,
...ec2.InstanceType,
};
const dbClusterInstanceCount: number = 1;
const vpc = new ec2.Vpc(this, "Vpc", {
maxAzs: 2,
});
const dbCluster = new rds.DatabaseCluster(this, "AuroraServerlessv2", {
engine: rds.DatabaseClusterEngine.auroraMysql({
version: rds.AuroraMysqlEngineVersion.VER_3_02_0,
}),
instances: dbClusterInstanceCount,
instanceProps: { vpc },
monitoringInterval: cdk.Duration.seconds(10),
});
// ServerlessV2ScalingConfiguration is not yet supported by CloudFormation, so it is
// applied with an SDK call (modifyDBCluster) through a custom resource.
const serverlessV2ScalingConfiguration = {
MinCapacity: 0.5,
MaxCapacity: 16,
};
const dbScalingConfigure = new cr.AwsCustomResource(
this,
"DbScalingConfigure",
{
onCreate: {
service: "RDS",
action: "modifyDBCluster",
parameters: {
DBClusterIdentifier: dbCluster.clusterIdentifier,
ServerlessV2ScalingConfiguration: serverlessV2ScalingConfiguration,
},
physicalResourceId: cr.PhysicalResourceId.of(
dbCluster.clusterIdentifier
),
},
onUpdate: {
service: "RDS",
action: "modifyDBCluster",
parameters: {
DBClusterIdentifier: dbCluster.clusterIdentifier,
ServerlessV2ScalingConfiguration: serverlessV2ScalingConfiguration,
},
physicalResourceId: cr.PhysicalResourceId.of(
dbCluster.clusterIdentifier
),
},
policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE,
}),
}
);
const cfnDbCluster = dbCluster.node.defaultChild as rds.CfnDBCluster;
const dbScalingConfigureTarget = dbScalingConfigure.node.findChild(
"Resource"
).node.defaultChild as cdk.CfnResource;
// Aurora Serverless v2 instances live in a cluster whose EngineMode is "provisioned".
cfnDbCluster.addPropertyOverride("EngineMode", "provisioned");
dbScalingConfigure.node.addDependency(cfnDbCluster);
dbScalingConfigureTarget.node.addDependency(
dbCluster.node.findChild(`Instance1`) as rds.CfnDBInstance
);
// Add the Serverless v2 instance (DBInstanceClass "db.serverless") to the cluster.
const serverlessDBinstance = new rds.CfnDBInstance(
this,
"ServerlessInstance",
{
dbClusterIdentifier: dbCluster.clusterIdentifier,
dbInstanceClass: "db.serverless",
engine: "aurora-mysql",
engineVersion: "8.0.mysql_aurora.3.02.0",
monitoringInterval: 10,
monitoringRoleArn: (
dbCluster.node.findChild("MonitoringRole") as iam.Role
).roleArn,
}
);
serverlessDBinstance.node.addDependency(dbScalingConfigureTarget);
}
}
+1
+1
+1
+1
@jvlch This is working great! Thanks so much! One newbie question: is there a way to enable multi-AZ with the CDK/custom resource?
Updated: it turns out setting the onePerAz prop under vpcSubnets provides a multi-AZ deployment. For example,
const dbCluster = new rds.DatabaseCluster(stack, 'database-cluster', {
engine: rds.DatabaseClusterEngine.auroraPostgres({
version: rds.AuroraPostgresEngineVersion.VER_13_6,
}),
instances: dbClusterInstanceCount,
instanceProps: {
...
vpcSubnets: {
onePerAz: true,
subnetType: ec2.SubnetType.PRIVATE_ISOLATED,
},
...
},
...
});
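For reference, a fuller self-contained sketch of that multi-AZ setup (engine version, names, and subnet layout here are assumptions; the EngineMode override and the scaling-configuration custom resource from the earlier comments still apply unchanged):
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import { aws_ec2 as ec2, aws_rds as rds } from "aws-cdk-lib";

// Sketch: spread the cluster's instances across AZs by selecting one isolated
// subnet per AZ.
export class MultiAzAuroraSketch extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const vpc = new ec2.Vpc(this, "Vpc", {
      maxAzs: 2,
      subnetConfiguration: [
        { name: "db", subnetType: ec2.SubnetType.PRIVATE_ISOLATED, cidrMask: 24 },
      ],
    });

    new rds.DatabaseCluster(this, "database-cluster", {
      engine: rds.DatabaseClusterEngine.auroraPostgres({
        version: rds.AuroraPostgresEngineVersion.VER_13_6,
      }),
      instances: 2, // writer plus one reader, placed in different AZs
      instanceProps: {
        vpc,
        instanceType: new ec2.InstanceType("serverless"),
        vpcSubnets: {
          onePerAz: true,
          subnetType: ec2.SubnetType.PRIVATE_ISOLATED,
        },
      },
    });
  }
}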
+1
+1
I tried the following, but both a provisioned instance and a serverless instance are created. Is there a way to create only the serverless instance?
(quoting the aurora-mysql example from @hacker65536 above)
[UPDATE 1 on this post: get rid of the additional DB instance] [UPDATE 2: fix connection timeout by defining an inbound rule for the security group] [UPDATE 3: inject the VPC to avoid always creating a new one, define a cluster id to avoid cryptic names, use Secrets Manager, inject an IP allowlist]
@clouddev-code I had the same issue and could solve it like this:
import {Stack, StackProps} from "aws-cdk-lib";
import { Construct } from "constructs";
import { aws_rds as rds } from "aws-cdk-lib";
import { aws_ec2 as ec2 } from "aws-cdk-lib";
import { custom_resources as cr } from "aws-cdk-lib";
import * as cdk from "aws-cdk-lib";
import type {IVpc} from "aws-cdk-lib/aws-ec2";
import {SubnetType} from "aws-cdk-lib/aws-ec2";
import {Credentials} from "aws-cdk-lib/aws-rds";
import {RetentionDays} from "aws-cdk-lib/aws-logs";
import type {IAllowedPeer} from "../interfaces/IAllowedPeer";
import {Secret} from "aws-cdk-lib/aws-secretsmanager";
import {SecretProps} from "aws-cdk-lib/aws-secretsmanager/lib/secret";
type ScalingConfiguration = {
MinCapacity: number;
MaxCapacity: number;
}
interface DatabaseProps extends StackProps {
databaseName: string,
vpc: IVpc,
allowedPeers: IAllowedPeer[],
dbClusterIdentifier: string,
secretProps: SecretProps,
scalingConfiguration: ScalingConfiguration;
}
export class AuroraServerlessDatabaseStack extends Stack {
constructor(scope: Construct, id: string, props: DatabaseProps) {
super(scope, id, props);
enum ServerlessInstanceType {
SERVERLESS = "serverless",
}
type CustomInstanceType = ServerlessInstanceType | ec2.InstanceType;
const CustomInstanceType = {
...ServerlessInstanceType,
...ec2.InstanceType,
};
const dbClusterInstanceCount: number = 1;
//creation of initial root user
// any further user create via sql 'CREATE USER'
const rdsSecret: Secret = new Secret(this, 'RdsSecrets', props.secretProps);
const dbCluster = new rds.DatabaseCluster(
this, "ServerlessAuroraDatabase", {
engine: rds.DatabaseClusterEngine.auroraMysql({
version: rds.AuroraMysqlEngineVersion.VER_3_02_0,
}),
instances: dbClusterInstanceCount,
clusterIdentifier: props.dbClusterIdentifier,
defaultDatabaseName: props.databaseName,
instanceProps: {
vpc: props.vpc,
vpcSubnets: {
subnetType: SubnetType.PUBLIC,
},
// "serverless" can also be passed directly via new ec2.InstanceType("serverless").
instanceType: CustomInstanceType.SERVERLESS as unknown as ec2.InstanceType,
autoMinorVersionUpgrade: true,
allowMajorVersionUpgrade: false,
publiclyAccessible: true,
},
monitoringInterval: cdk.Duration.seconds(60),
//monitoringRole: optional, creates a new IAM role by default
cloudwatchLogsExports: ['error', 'general', 'slowquery', 'audit'], // Export all available MySQL-based logs
cloudwatchLogsRetention: RetentionDays.THREE_MONTHS, // Optional - default is to never expire logs
credentials: Credentials.fromSecret(rdsSecret),
backup: {
retention: cdk.Duration.days(30),
preferredWindow: '01:00-02:00'
},
preferredMaintenanceWindow: 'Tue:00:15-Tue:00:45',
});
props.allowedPeers.forEach(function (allowedPeer) {
dbCluster.connections.allowDefaultPortFrom(allowedPeer.peer, allowedPeer.description);
})
const dbScalingConfigure = new cr.AwsCustomResource(
this,
"DbScalingConfigure",
{
onCreate: {
service: "RDS",
action: "modifyDBCluster",
parameters: {
DBClusterIdentifier: dbCluster.clusterIdentifier,
ServerlessV2ScalingConfiguration: props.scalingConfiguration,
},
physicalResourceId: cr.PhysicalResourceId.of(
dbCluster.clusterIdentifier
),
},
onUpdate: {
service: "RDS",
action: "modifyDBCluster",
parameters: {
DBClusterIdentifier: dbCluster.clusterIdentifier,
ServerlessV2ScalingConfiguration: props.scalingConfiguration,
},
physicalResourceId: cr.PhysicalResourceId.of(
dbCluster.clusterIdentifier
),
},
policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE,
}),
}
);
const cfnDbCluster = dbCluster.node.defaultChild as rds.CfnDBCluster;
const dbScalingConfigureTarget = dbScalingConfigure.node.findChild(
"Resource"
).node.defaultChild as cdk.CfnResource;
cfnDbCluster.addPropertyOverride("EngineMode", "provisioned");
dbScalingConfigure.node.addDependency(cfnDbCluster);
for (let i = 1 ; i <= dbClusterInstanceCount ; i++) {
(dbCluster.node.findChild(`Instance${i}`) as rds.CfnDBInstance).addDependsOn(dbScalingConfigureTarget)
}
}
}
@hacker65536 Can you please explain why you originally added const serverlessDBinstance = new rds.CfnDBInstance(...)?
@Celebrate-Reinhard With the above code, we were able to create only an Aurora Serverless v2 instance.
@Celebrate-Reinhard
For production workloads it is necessary to have the same instance type for failover, but sometimes serverless is better in terms of cost, so I gave an example for such a use case.
Wasn't able to use the solutions above in Python (the custom class would need to be registered in JSII and, it seems, defined in both JS and Python 🤔).
Pulumi (a third-party commercial alternative to CloudFormation) with Python support already supports Serverless v2.
Any ETA on this one?
@driverpt after this issue is fixed. ⬇️ https://github.com/aws-cloudformation/cloudformation-coverage-roadmap/issues/1150
Yes, this is definitely on our roadmap. Not sure when we're going to get to it though - leaving +1s on the issue will definitely help us prioritize.
We also encourage community contributions. Our "Contributing" guide: https://github.com/aws/aws-cdk/blob/master/CONTRIBUTING.md.
Thanks, Adam
+1
+1
For those in .NET-land who want to save some time. Thanks to those above for illuminating the way forward.
using Amazon.CDK;
using Constructs;
using Amazon.CDK.AWS.RDS;
using Amazon.CDK.AWS.EC2;
using Amazon.CDK.AWS.SecretsManager;
using System.Collections.Generic;
using Amazon.CDK.CustomResources;
using Amazon.CDK.AWS.Logs;
namespace TestCdkAuroraServerless2
{
public class TestCdkAuroraServerless2Stack : Stack
{
string vpcId = "vpc-...";
string subnetId1 = "subnet-...";
string subnetId2 = "subnet-...";
string securityGroupId = "sg-...";
string availabilityZone = "eu-west-2a";
public ISecurityGroup SecurityGroup;
public IVpc Vpc;
public ISubnet Subnet;
public ISecret Secret;
internal TestCdkAuroraServerless2Stack(Construct scope, string id, IStackProps props = null) : base(scope, id, props)
{
// The code that defines your stack goes here
this.Tags.SetTag("application:environment", id); // adjust to your environment name
Subnet = Amazon.CDK.AWS.EC2.Subnet.FromSubnetId(this, "SubnetFromId1", subnetId1);
SecurityGroup = Amazon.CDK.AWS.EC2.SecurityGroup.FromSecurityGroupId(this, "SecurityGroup", securityGroupId);
Vpc = Amazon.CDK.AWS.EC2.Vpc.FromVpcAttributes(this, "Vpc", new VpcAttributes
{
AvailabilityZones = new[] { availabilityZone },
PublicSubnetIds = new[] { subnetId1, subnetId2 },
VpcId = vpcId,
});
CustomInstanceType customInstanceType = new CustomInstanceType("serverless");
int dbClusterInstanceCount = 1;
DatabaseCluster dbCluster = new DatabaseCluster(this, "AuroraServerlessv2", new DatabaseClusterProps {
Engine = DatabaseClusterEngine.AuroraPostgres(new AuroraPostgresClusterEngineProps { Version = AuroraPostgresEngineVersion.VER_13_6 }),
Credentials = Credentials.FromGeneratedSecret("clusteradmin"), // Optional - will default to 'admin' username and generated password
Instances = dbClusterInstanceCount,
InstanceProps = new Amazon.CDK.AWS.RDS.InstanceProps
{
VpcSubnets = new SubnetSelection
{
SubnetType = SubnetType.PUBLIC
},
Vpc = Vpc,
InstanceType = customInstanceType,
AutoMinorVersionUpgrade = true,
AllowMajorVersionUpgrade = false,
},
MonitoringInterval = Duration.Seconds(10),
CloudwatchLogsRetention = RetentionDays.THREE_MONTHS,
Backup = new BackupProps
{
Retention = Duration.Days(7),
PreferredWindow = "01:00-02:00"
},
PreferredMaintenanceWindow = "Mon:00:15-Mon:00:45",
});
Dictionary<string, object> serverlessV2ScalingConfiguration = new Dictionary<string, object>();
serverlessV2ScalingConfiguration.Add("MinCapacity", 0.5);
serverlessV2ScalingConfiguration.Add("MaxCapacity", 16);
Dictionary<string, object> parameters = new Dictionary<string, object>();
parameters.Add("DBClusterIdentifier", dbCluster.ClusterIdentifier);
parameters.Add("ServerlessV2ScalingConfiguration", serverlessV2ScalingConfiguration);
AwsCustomResource dbScalingConfigure = new AwsCustomResource(this, "DbScalingConfigure", new AwsCustomResourceProps
{
OnCreate = new AwsSdkCall
{
Service = "RDS",
Action = "modifyDBCluster",
Parameters = parameters ,
PhysicalResourceId = PhysicalResourceId.Of(dbCluster.ClusterIdentifier),
},
OnUpdate = new AwsSdkCall
{
Service = "RDS",
Action = "modifyDBCluster",
Parameters = parameters,
PhysicalResourceId = PhysicalResourceId.Of(dbCluster.ClusterIdentifier),
},
Policy = AwsCustomResourcePolicy.FromSdkCalls(new SdkCallsPolicyOptions { Resources = AwsCustomResourcePolicy.ANY_RESOURCE }),
});
CfnDBCluster cfnDbCluster = (CfnDBCluster)dbCluster.Node.DefaultChild;
CfnResource dbScalingConfigureTarget = (CfnResource)dbScalingConfigure.Node.FindChild("Resource").Node.DefaultChild;
cfnDbCluster.AddPropertyOverride("EngineMode", "provisioned");
dbScalingConfigure.Node.AddDependency(cfnDbCluster);
for (int i = 1; i <= dbClusterInstanceCount; i++)
{
CfnDBInstance instance = (CfnDBInstance)dbCluster.Node.FindChild($"Instance{i}");
instance.AddDependsOn(dbScalingConfigureTarget);
}
}
}
// Passing the custom name (e.g. "serverless") to the base InstanceType is all that is needed.
public class CustomInstanceType : InstanceType
{
public CustomInstanceType(string customInstanceType) : base(customInstanceType)
{
}
}
}
@andrewtravis
I'm new to the CDK.
Can you expand on: CustomInstanceType customInstanceType = new CustomInstanceType("serverless");
I presume you have incorporated the following logic into it?
enum ServerlessInstanceType { SERVERLESS = "serverless", }
type CustomInstanceType = ServerlessInstanceType | ec2.InstanceType;
const CustomInstanceType = {
...ServerlessInstanceType,
...ec2.InstanceType,
};
Is CustomInstanceType a class that extends InstanceType?
Could you share your CustomInstanceType?
I'm trying to create a Java version.
Thanks, Adrian.
+1
(quoting @Celebrate-Reinhard's solution above)
This solution works for me on the first launch, but whenever I update the serverless scaling config (for example, the min capacity), the CloudFormation stack gets stuck in UPDATE_IN_PROGRESS even though the Aurora cluster is updated successfully. It seems like it does not receive the signal to finish or something... anyone having a similar issue?
For those who need a python version:
from aws_cdk import (
Stack,
CfnOutput,
Environment,
Tags,
Duration,
custom_resources as cr,
aws_ec2 as ec2,
aws_rds as rds,
aws_secretsmanager as secretsmanager,
aws_kms as kms,
)
from constructs import Construct
class DatabaseStack(Stack):
def __init__(
self,
scope: Construct,
_id: str,
vpc: ec2.IVpc,
stage_name: str,
db_user: str="changeme",
db_name: str="changeme",
**kwargs,
) -> None:
super().__init__(scope, _id, **kwargs)
secret = rds.DatabaseSecret(self, "AuroraSecret", username=db_user)
aurora_cluster_credentials = rds.Credentials.from_secret(secret, db_user)
kms_key = kms.Key(
self, "AuroraDatabaseKey", enable_key_rotation=True, alias=_id
)
# parameter_group = rds.ParameterGroup.from_parameter_group_name(
# self, "ParameterGroup", "default.aurora-postgresql14"
# )
instance_count = 1
cluster = rds.DatabaseCluster(
self,
"AuroraCluster",
engine=rds.DatabaseClusterEngine.aurora_postgres(
version=rds.AuroraPostgresEngineVersion.VER_13_7
),
cluster_identifier=_id,
instances=instance_count,
instance_props=rds.InstanceProps(
vpc=vpc,
instance_type=ec2.InstanceType("serverless"),
auto_minor_version_upgrade=True,
allow_major_version_upgrade=False,
publicly_accessible=True,
),
# engine=rds.DatabaseClusterEngine.AURORA_POSTGRESQL,
# engine_version="14.3",
# parameter_group=parameter_group,
credentials=aurora_cluster_credentials,
default_database_name=db_name,
backup=rds.BackupProps(
retention=Duration.days(14),
),
)
parameters = {
"DBClusterIdentifier": cluster.cluster_identifier,
"ServerlessV2ScalingConfiguration": {
"MinCapacity": 0.5,
"MaxCapacity": 16,
},
}
scaling = cr.AwsCustomResource(
self,
"Scaling",
on_create=cr.AwsSdkCall(
service="RDS",
action="modifyDBCluster",
parameters=parameters,
physical_resource_id=cr.PhysicalResourceId.of(
cluster.cluster_identifier
),
),
on_update=cr.AwsSdkCall(
service="RDS",
action="modifyDBCluster",
parameters=parameters,
physical_resource_id=cr.PhysicalResourceId.of(
cluster.cluster_identifier
),
),
policy=cr.AwsCustomResourcePolicy.from_sdk_calls(
resources=cr.AwsCustomResourcePolicy.ANY_RESOURCE,
),
)
cfnCluster = cluster.node.default_child
target = scaling.node.find_child("Resource").node.default_child
cfnCluster.add_property_override("EngineMode", "provisioned")
scaling.node.add_dependency(cfnCluster)
for i in range(1, instance_count + 1):
cluster.node.find_child(f"Instance{i}").add_depends_on(target)
Wasn't able to get Postgres 14 going (DBClusterParameterGroup not found: default.aurora-postgresql14 error). Otherwise, this worked.
+1
(I'd give it a +1K if I could)!!!!
Note: Terraform supports v2. CloudFormation and CDK do not.
Describe the feature
Please like the original post instead of leaving a +1 comment.
Add CDK support for Aurora Serverless v2, ideally via the ServerlessCluster construct.
Edit: there are a few solutions using base CloudFormation constructs in the comments. Please see those for a workaround.
Currently, deploying a Serverless v2 instance via CDK doesn't seem possible.
I've got this far
but I keep running into this error,
which I can't figure out a way around.
Use Case
Need to create a DB via CDK. I want to use Serverless v2 because of the support for MySQL 8.0.
Proposed Solution
No response
Other Information
No response
Acknowledgements
CDK version used
2.15.0
Environment details (OS name and version, etc.)
mac OS X