shancz opened this issue 2 years ago
I assume this is not an ACK error. I don't see a way of setting a Cluster's VPC here: https://docs.aws.amazon.com/memorydb/latest/APIReference/API_CreateCluster.html
Thoughts @nmvk @kumargauravsharma ?
Amine,
Thank you for the quick response. The API only needs a valid subnet group; there is no option to specify the VPC name, so there should not be a check beyond the existence of the subnet group. Based on the error message, the subnet group is being validated against the VPC it belongs to, but that may not be necessary. It correctly reports that the subnet group belongs to a VPC other than the default one. We may not need this additional [VPC check] validation.
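For illustration only, here is a minimal sketch of a SubnetGroup manifest (the group name and subnet IDs are hypothetical, and the field names follow the ACK MemoryDB v1alpha1 reference as I read it). It shows that the VPC is only ever implied by the subnets placed in the group, never named directly:

```yaml
# Sketch, not a verified manifest: the VPC is never named; it is implied by the subnets.
apiVersion: memorydb.services.k8s.aws/v1alpha1
kind: SubnetGroup
metadata:
  name: ack-test-subnet-group
spec:
  name: ack-test-subnet-group
  description: "subnets that live in a non-default VPC"
  subnetIDs:
    - subnet-0aaa0aaa0aaa0aaa0   # hypothetical subnet in the target VPC
    - subnet-0bbb0bbb0bbb0bbb0   # hypothetical subnet in the target VPC
```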
Steven
Interesting. I did a quick search and found something very similar using CDK: https://stackoverflow.com/questions/59643874/aws-cdk-error-when-deploying-redis-elasticache-subnet-group-belongs-to-a-diffe. My first assumption here is that maybe you should use Status.Subnets instead of Spec.SubnetGroups (if you are using the SubnetGroup CRD to create them)?
Amine,
Thank you for the response. Yes, I am using the CRD, so the YAML file has to follow the syntax/options available here: https://aws-controllers-k8s.github.io/community/reference/memorydb/v1alpha1/cluster/. I'm not sure how I can use Status.Subnets instead.
Here is my YAML file:
```yaml
apiVersion: memorydb.services.k8s.aws/v1alpha1
kind: Cluster
metadata:
  name: "ack-test"
spec:
  aclName: open-access
  autoMinorVersionUpgrade: true
  description: "test cluster created by ACK"
  engineVersion: '6.2'
  name: 'ack-test'
  nodeType: 'db.t4g.small'
  numReplicasPerShard: 1
  numShards: 1
  parameterGroupName: default.memorydb-redis6
  securityGroupIDs:
```
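The Cluster reference linked above also lists a subnetGroupName field; purely as an illustration (the group name below is hypothetical), this is roughly where a group created in the target VPC would be referenced:

```yaml
spec:
  # ...same fields as above...
  subnetGroupName: ack-test-subnet-group   # hypothetical SubnetGroup whose subnets sit in the non-default VPC
```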
The only way I can get it working is if the subnet group is in the default VPC.
Regards, Steven
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now, please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale
ping @aws-controllers-k8s/elasticache-maintainer
/lifecycle frozen
Describe the bug While using the MemoryDB Cluster API via AWS Kubernetes, one can only create a MemoryDB cluster in the default VPC.
Steps to reproduce Create an EKS cluster in AWS. Then try to create a MemoryDB cluster in it via the custom resource.
Here is the YAML file
Expected outcome The MemoryDB cluster is created.
Environment AWS