obriensystems closed this issue 8 months ago
Same issue on the redeployed cloud-setup org:
michael@cloudshell:~/kcc-cso/kpt (kcc-cso-4380)$ kpt live status core-landing-zone | grep error
inventory-36537147/storagebucket.storage.cnrm.cloud.google.com/logging/security-incident-log-bucket is Failed: Update call failed: error fetching live state: error reading underlying resource: summary: Error when reading or editing Storage Bucket "security-incident-log-bucket": googleapi: Error 403: logging-sa@kcc-cso-4380.iam.gserviceaccount.com does not have storage.buckets.get access to the Google Cloud Storage bucket. Permission 'storage.buckets.get' denied on resource (or it may not exist)., forbidden
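The denied principal and permission can be pulled straight out of the 403 message when triaging many failing resources; a minimal grep sketch, using the literal error text reported above as input:

```shell
# Extract the service account and the denied permission from a KCC 403 error.
# The message below is the literal error line reported by `kpt live status`.
msg='googleapi: Error 403: logging-sa@kcc-cso-4380.iam.gserviceaccount.com does not have storage.buckets.get access to the Google Cloud Storage bucket.'
sa=$(printf '%s' "$msg" | grep -oE '[[:alnum:]._-]+@[[:alnum:].-]+\.gserviceaccount\.com' | head -n1)
perm=$(printf '%s' "$msg" | grep -oE 'storage\.[[:alpha:]]+\.[[:alpha:]]+' | head -n1)
echo "$sa is missing $perm"
```

The same pipeline works on the full `kpt live status ... | grep error` output, one line per failing resource.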
michael@cloudshell:~/kcc-cso/kpt (kcc-cso-4380)$ kubectl describe storagebucket.storage.cnrm.cloud.google.com/security-incident-log-bucket -n logging
Name:         security-incident-log-bucket
Namespace:    logging
Labels:       <none>
Annotations:  cnrm.cloud.google.com/blueprint: kpt-pkg-fn-live
              cnrm.cloud.google.com/management-conflict-prevention-policy: none
              cnrm.cloud.google.com/project-id: logging-project-cso2
              cnrm.cloud.google.com/state-into-spec: merge
              config.k8s.io/owning-inventory: ec099affabc09ae4652ae62190d9b794c9ec63d1-1706718583884502216
              config.kubernetes.io/depends-on: resourcemanager.cnrm.cloud.google.com/namespaces/projects/Project/logging-project-cso2
              internal.kpt.dev/upstream-identifier: storage.cnrm.cloud.google.com|StorageBucket|logging|security-incident-log-bucket
API Version:  storage.cnrm.cloud.google.com/v1beta1
Kind:         StorageBucket
Metadata:
  Creation Timestamp:  2024-01-31T16:33:31Z
  Generation:          1
  Resource Version:    4501241
  UID:                 b6cc605b-ac0b-45ae-ab03-0854998ab193
Spec:
  Autoclass:
    Enabled:  true
  Location:   northamerica-northeast1
  Public Access Prevention:  enforced
  Retention Policy:
    Is Locked:         false
    Retention Period:  86400
  Uniform Bucket Level Access:  true
Status:
  Conditions:
    Last Transition Time:  2024-01-31T16:33:31Z
    Message:               Update call failed: error fetching live state: error reading underlying resource: summary: Error when reading or editing Storage Bucket "security-incident-log-bucket": googleapi: Error 403: logging-sa@kcc-cso-4380.iam.gserviceaccount.com does not have storage.buckets.get access to the Google Cloud Storage bucket. Permission 'storage.buckets.get' denied on resource (or it may not exist)., forbidden
    Reason:                UpdateFailed
    Status:                False
    Type:                  Ready
  Observed Generation:     1
Events:
  Type     Reason        Age                 From                      Message
  ----     ------        ----                ----                      -------
  Warning  UpdateFailed  93s (x22 over 33m)  storagebucket-controller  Update call failed: error fetching live state: error reading underlying resource: summary: Error when reading or editing Storage Bucket "security-incident-log-bucket": googleapi: Error 403: logging-sa@kcc-cso-4380.iam.gserviceaccount.com does not have storage.buckets.get access to the Google Cloud Storage bucket. Permission 'storage.buckets.get' denied on resource (or it may not exist)., forbidden
However, the logging-sa is missing the Storage Admin role:

| Principal | Name | Roles |
| -- | -- | -- |
| logging-sa@kcc-cso-4380.iam.gserviceaccount.com | logging-sa | Logging Admin, Monitoring Admin |

From https://cloud.google.com/storage/docs/access-control/iam-roles:

| Role | Description | Permissions |
| -- | -- | -- |
| Storage Admin (roles/storage.admin) | Grants full control of buckets, managed folders, and objects, including getting and setting object ACLs or IAM policies. When applied to an individual bucket, control applies only to the specified bucket and the managed folders and objects within the bucket. | firebase.projects.get, orgpolicy.policy.get, resourcemanager.projects.get, resourcemanager.projects.list, storage.buckets.\*, storage.managedFolders.\*, storage.objects.\*, storage.multipartUploads.\* |
role: roles/storage.admin
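To confirm what the live project IAM policy actually contains for the SA (independent of what the kpt package declares), a `gcloud projects get-iam-policy` filter can be used. The helper below just assembles the command string so the sketch stays runnable offline; the project and SA names are the ones from this issue, and the helper name is hypothetical:

```shell
# Hypothetical helper: print the gcloud command that lists every role bound
# to a given service account on a project (run the printed command manually
# in Cloud Shell; it requires gcloud auth and project access).
role_check_cmd() {
  printf 'gcloud projects get-iam-policy %s --flatten="bindings[].members" --format="table(bindings.role)" --filter="bindings.members:serviceAccount:%s"\n' "$1" "$2"
}
role_check_cmd logging-project-cso2 logging-sa@kcc-cso-4380.iam.gserviceaccount.com
```

If `roles/storage.admin` is absent from the output, the IAMPolicyMember below never reconciled, which matches the 403 above.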
Already there in the package - verifying why the apply did not take effect. The SA now shows:

| Principal | Name | Roles |
| -- | -- | -- |
| logging-sa@kcc-cso-4380.iam.gserviceaccount.com | logging-sa | Logging Admin, Monitoring Admin, Storage Admin |
# Grant GCP role Storage Admin to GCP SA on logging project
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicyMember
metadata: # kpt-merge: projects/logging-sa-storageadmin-logging-project-id-permissions
  name: logging-sa-storageadmin-logging-project-cso2-permissions # kpt-set: logging-sa-storageadmin-${logging-project-id}-permissions
  namespace: projects
  annotations:
    cnrm.cloud.google.com/project-id: logging-project-cso2 # kpt-set: ${logging-project-id}
    cnrm.cloud.google.com/ignore-clusterless: "true"
    internal.kpt.dev/upstream-identifier: 'iam.cnrm.cloud.google.com|IAMPolicyMember|projects|logging-sa-storageadmin-logging-project-id-permissions'
    cnrm.cloud.google.com/blueprint: 'kpt-pkg-fn-live'
spec:
  resourceRef:
    apiVersion: resourcemanager.cnrm.cloud.google.com/v1beta1
    kind: Project
    name: logging-project-cso2 # kpt-set: ${logging-project-id}
  # AC-3(7), AC-3, AC-16(2)
  role: roles/storage.admin
  member: "serviceAccount:logging-sa@kcc-cso-4380.iam.gserviceaccount.com" # kpt-set: serviceAccount:logging-sa@${management-project-id}.iam.gserviceaccount.com
Add to the setup script (for super admin browsing):
michael@cloudshell:~/kcc-cso/github/pubsec-declarative-toolkit (kcc-cso-4380)$ gcloud organizations add-iam-policy-binding 7...46 --member=user:mi..rg --role=roles/storage.admin --quiet > /dev/null 1>&1
Updated IAM policy for organization [734065690346].
Fix for the core-landing-zone setters.yaml generation part of the script:
michael@cloudshell:~/kcc-cso/github/pubsec-declarative-toolkit (kcc-cso-4380)$ git diff
diff --git a/solutions/setup.sh b/solutions/setup.sh
index 68c2763..4509441 100755
--- a/solutions/setup.sh
+++ b/solutions/setup.sh
@@ -240,7 +240,6 @@ metadata: # kpt-merge: /setters
   name: setters
   annotations:
     config.kubernetes.io/local-config: "true"
-    internal.kpt.dev/upstream-identifier: '|ConfigMap|default|setters'
 data:
   org-id: "${ORG_ID}"
   lz-folder-id: "${ROOT_FOLDER_ID}"
@@ -257,10 +256,12 @@ data:
   allowed-vpc-peering: |
     - "under:organizations/${ORG_ID}"
   logging-project-id: logging-project-${PREFIX}
-  security-log-bucket: security-log-bucket-${PREFIX}
+  security-incident-log-bucket: security-incident-log-bucket-${PREFIX}
   platform-and-component-log-bucket: platform-and-component-log-bucket-${PREFIX}
   retention-locking-policy: "false"
   retention-in-days: "1"
+  security-incident-log-bucket-retention-locking-policy: "false"
+  security-incident-log-bucket-retention-in-seconds: "86400"
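A quick post-generation sanity check that setup.sh now emits the renamed setter keys can be run locally. The slice below simulates the regenerated `data:` block from the diff above, with `${PREFIX}` expanded to `cso2` for illustration (an assumption; substitute your own prefix):

```shell
# Simulated slice of the regenerated setters.yaml data block (see the diff
# above), with ${PREFIX} expanded to cso2 for illustration only.
cat > /tmp/setters-slice.yaml <<'EOF'
security-incident-log-bucket: security-incident-log-bucket-cso2
platform-and-component-log-bucket: platform-and-component-log-bucket-cso2
security-incident-log-bucket-retention-locking-policy: "false"
security-incident-log-bucket-retention-in-seconds: "86400"
EOF
# The old key must be gone and the three new keys present.
! grep -q '^security-log-bucket:' /tmp/setters-slice.yaml
grep -c '^security-incident-log-bucket' /tmp/setters-slice.yaml
```

Pointing the same greps at the real generated setters.yaml verifies the script change before `kpt live apply`.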
michael@cloudshell:~/kcc-oi-20231206/kpt (kcc-oi-7970)$ kubectl get gcp -n logging
NAME                                                                                      AGE   READY   STATUS     STATUS AGE
logginglogbucket.logging.cnrm.cloud.google.com/platform-and-component-log-bucket-oi0130   47h   True    UpToDate   47h
logginglogbucket.logging.cnrm.cloud.google.com/security-log-bucket                        47h   True    UpToDate   47h

NAME                                                                                                AGE   READY   STATUS     STATUS AGE
logginglogsink.logging.cnrm.cloud.google.com/logging-project-oi0130-data-access-sink                47h   True    UpToDate   47h
logginglogsink.logging.cnrm.cloud.google.com/mgmt-project-cluster-platform-and-component-log-sink   47h   True    UpToDate   47h
logginglogsink.logging.cnrm.cloud.google.com/org-log-sink-data-access-logging-project-oi0130        47h   True    UpToDate   47h
logginglogsink.logging.cnrm.cloud.google.com/org-log-sink-security-logging-project-oi0130           47h   True    UpToDate   47h
logginglogsink.logging.cnrm.cloud.google.com/platform-and-component-services-infra-log-sink         47h   True    UpToDate   47h
logginglogsink.logging.cnrm.cloud.google.com/platform-and-component-services-log-sink               47h   True    UpToDate   47h

NAME                                                                        AGE   READY   STATUS     STATUS AGE
monitoringmonitoredproject.monitoring.cnrm.cloud.google.com/kcc-oi-7970     47h   True    UpToDate   47h

See:
NAME                                                                              AGE     READY   STATUS     STATUS AGE
storagebucket.storage.cnrm.cloud.google.com/security-incident-log-bucket-oi0130   3m44s   True    UpToDate   3m41s
One org (obrien.industries) is working with the log sinks; the other, newer org (cloud-setup) is not.
The issue is likely missing IAM permissions on the clean cloud-setup.org account, whereas the older org (obrien.industries, output above), which even had an older hub-env, is OK.
Update: same issue on the 2nd org - it looks like logging-sa needs roles/storage.admin.
https://github.com/GoogleCloudPlatform/pubsec-declarative-toolkit/blob/main/solutions/core-landing-zone/lz-folder/audits/logging-project/cloud-storage-buckets.yaml#L20 is missing permissions that are already set in https://github.com/GoogleCloudPlatform/pubsec-declarative-toolkit/blob/main/solutions/core-landing-zone/namespaces/logging.yaml#L82.
Both orgs have logging-sa as Logging Admin at the org level and Monitoring Admin at the KCC project level.
setters.yaml
Single-service IAM issue.