davidxjohnson opened this issue 5 years ago
What did you set `-i <cluster-name>` to? It looks like the operator considers that value invalid. For background: we use the cluster name to prefix the SNS topic that is created for each resource. @davidxjohnson
The deployment descriptor is as follows (`kind: Deployment` restored, since the snippet as posted omitted it):

```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: aws-service-operator
  namespace: aws-service-operator
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        iam.amazonaws.com/role: arn:aws:iam::XXXXXXXXXXXX:role/k8s-aws-service-operator
      labels:
        app: aws-service-operator
    spec:
      serviceAccountName: aws-service-operator
      containers:
        - name: aws-service-operator
          image: awsserviceoperator/aws-service-operator:v0.0.1-alpha2
          imagePullPolicy: Always
          args:
            - server
            - --cluster-name=nonprod-us-east-1.mydomain.net
            - --region=us-east-1
            - --account-id=XXXXXXXXXXXX
            - --bucket=mydomain-nonprod-aws-operator
```
Just realized from your reply that the dots in the cluster name are invalid in topic names:

> Topic name contains invalid characters. Must contain only alphanumeric characters, hyphens (-), or underscores (_).
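That SNS constraint is easy to check up front. A minimal sketch (not part of the operator itself) that validates a cluster name against the SNS topic-name rules, and sanitizes a dotted DNS-style name into a legal one:

```python
import re

# SNS topic names may contain only alphanumerics, hyphens, and underscores,
# and must be 1-256 characters long.
VALID_TOPIC_NAME = re.compile(r"^[A-Za-z0-9_-]{1,256}$")

def is_valid_topic_name(name: str) -> bool:
    """Return True if `name` satisfies the SNS topic-name constraint."""
    return bool(VALID_TOPIC_NAME.match(name))

def sanitize_cluster_name(name: str) -> str:
    """Replace any character SNS rejects (e.g. dots) with a hyphen."""
    return re.sub(r"[^A-Za-z0-9_-]", "-", name)

# The dotted cluster name from the manifest above fails validation:
print(is_valid_topic_name("nonprod-us-east-1.mydomain.net"))    # False
print(sanitize_cluster_name("nonprod-us-east-1.mydomain.net"))  # nonprod-us-east-1-mydomain-net
```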
Yeah, that would cause the issue. I'm going to change the description of this issue to validation on the Cluster name.
We might want to wrap this into this issue - https://github.com/awslabs/aws-service-operator/issues/103
Changing the cluster name did the trick. I see successful SNS subscription, topic, and queue messages in the logs.
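For reference, the fix amounts to replacing the dotted DNS-style name in the container args with a hyphenated one (the exact name below is illustrative, not confirmed from the thread):

```yaml
args:
  - server
  - --cluster-name=nonprod-us-east-1-mydomain-net  # hyphens instead of dots
  - --region=us-east-1
  - --account-id=XXXXXXXXXXXX
  - --bucket=mydomain-nonprod-aws-operator
```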
I created the role and S3 bucket using a modified CloudFormation template. After editing the provided Kubernetes YAML (to set the account, region, cluster-name, and bucket parameters), I deployed the k8s objects, but the pod is in a `CrashLoopBackOff` state. I checked the `kube2iam` logs (snippet below), and it seems to be working. The operator logs indicate an error creating an SNS topic: