aeneasr closed this issue 6 years ago.
I got it working by creating a new service account, not sure what was going on there...
@arekkas Which type of Service Account did you create to make it work?
Having the same issue. Not sure I want to create a new service account.
I don't quite remember, I think it was 1:1 the same settings.
I'm reopening this issue as apparently I'm not the only one experiencing it.
Hey, please give it another try to see if there's still an issue. If there is, please send an email with the following information to cloud-sql@google.com so I can help by looking at some internal data:
- a link to this page; if you have left a message here, a link to that message in particular
- a description of your issue, if the link doesn't cover all the details
- the full name of your Cloud SQL instance
- the command you ran and the time you got the Error 403
- the full logs
We currently found a bug in the UI: when creating a service account with a role bound to it, if the creator doesn't have the privilege to bind a role, the UI creates the service account without the role and returns no error. To make sure the desired role was actually bound, go to "IAM & admin" -> "IAM" and check that the service account has the correct (client) role.
Further, being a project OWNER or having "Project IAM Admin" (under "Resource Manager") gives the creator the privilege to bind a role to a service account.
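To double-check the binding from the command line, you can inspect the project's IAM policy with `gcloud`. A sketch (the project ID and service-account address are placeholders, and the command is echoed as a dry run rather than executed):

```shell
# Placeholder values; substitute your own project and service account.
PROJECT="my-project"
SA="my-proxy-sa@my-project.iam.gserviceaccount.com"

# List the roles bound to the service account (echoed here as a dry run):
echo gcloud projects get-iam-policy "$PROJECT" \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:$SA" \
  --format="value(bindings.role)"
```

If `roles/cloudsql.client` does not appear in the output, the role binding was silently dropped as described above.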
We're having an issue with this as well when specifying multiple instances with gcr.io/cloudsql-docker/gce-proxy:1.09 as a sidecar container in Kubernetes. I also tried the newest version, 1.11. I'm specifying multiple instances like so:

```
-instances=foo:us-central1:bar=tcp:5432, foo:us-central1:baz=tcp:5555
```

In this case I can connect to bar but not baz. The IAM service account has all Cloud SQL privileges; I also tried it with Project Owner privileges, with no luck. When specifying multiple instances locally, I noticed I needed to wrap the comma-separated list in single quotes, but doing this in the deployment resource seems to cause issues. I don't think that's the problem, though, since the container logs show that both instances are open for connections.
I have no issues connecting to each instance individually with the service account.
Hi codyjroberts, would you mind trying to remove the space between the instance names?
```
-instances=foo:us-central1:bar=tcp:5432,foo:us-central1:baz=tcp:5555
```
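For anyone hitting the same thing: the proxy splits the `-instances` value on commas, and a stray space appears to end up as part of the next connection name, which then fails to match any instance. A quick sanity check before launching (the connection names below are the hypothetical ones from this thread):

```shell
# Hypothetical connection names from the comment above; replace with your own.
INSTANCES='foo:us-central1:bar=tcp:5432,foo:us-central1:baz=tcp:5555'

# Fail fast if the list contains a space, which would break name matching:
case "$INSTANCES" in
  *' '*) echo "bad: remove spaces from the -instances list"; exit 1 ;;
  *)     echo "ok" ;;
esac

# Then pass it to the proxy, e.g.:
#   ./cloud_sql_proxy -instances="$INSTANCES"
```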
Beautiful. Thank you 👍
how do you make another service account?
@thedayofawesome can you submit your question as a new issue? I think the problem described in this issue is resolved.
I had the same issue, and creating a new service account with Cloud SQL Client permissions was the solution.
Here's what worked for me -- I didn't need to create a service account:

```
gcloud auth application-default login
```

For some reason it changes how the login is handled.
You are a fantastic person. You bring joy to people when times are tough. Keep on keeping on
For anyone who is curious about `gcloud auth application-default` and why it works: Application Default Credentials (ADC) provide a method for obtaining the credentials used to call Google APIs.
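Concretely, `gcloud auth application-default login` writes a credential file to a well-known location that client libraries (and the proxy, when no `-credential_file` is passed) discover automatically. A quick check for whether ADC credentials are present, assuming the default path documented by Google Cloud and the `GOOGLE_APPLICATION_CREDENTIALS` override:

```shell
# Default ADC location on Linux/macOS; GOOGLE_APPLICATION_CREDENTIALS overrides it.
ADC_FILE="${GOOGLE_APPLICATION_CREDENTIALS:-$HOME/.config/gcloud/application_default_credentials.json}"

if [ -f "$ADC_FILE" ]; then
  echo "ADC found at $ADC_FILE"
else
  echo "no ADC file; run: gcloud auth application-default login"
fi
```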
@sventech Thanks man, that worked for me.
The real MVP
Worked like a charm!!! Thanks a ton.
Created a new service account key. It works. Very weird.
This issue can be related to billing as well; just make sure your payments are up to date.
Also had this issue today. Honestly, I've been working with GCP and Cloud SQL for 3 years and I still hit this issue.
I am currently having this issue. I created a new service account, activated it, and attached the Cloud SQL Admin, Storage Admin, and Storage Legacy Owner roles, yet I am still getting an HTTP 403 error. I have emailed cloud-sql@google.com; hopefully I get an answer soon.
Had similar problems and managed to solve them by adding Cloud SQL Viewer, in addition to Cloud SQL Client, to the service account, and specifying all of:

```
-projects <projectid>
-instances <instance>=tcp:3306
-credential_file=<file>.json
-dir /tmp/cloudsql
```
In my case, in the GCP console's Workloads YAML I see:

```yaml
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
```
I had to replace default with cloudsql-ksa. And in my Helm values file:

```yaml
rbac:
  create: true
serviceAccountAnnotations: {
  iam.gke.io/gcp-service-account: "cloudsql-gsa@my-app-pr-abcd12.iam.gserviceaccount.com"
}
serviceAccountName: "cloudsql-ksa"
serviceAccount: payphone-ksa
```
My problem was that the Cloud SQL instance lived in a different project, and I was granting Cloud SQL Client to the service account in the source project. Make sure to grant the Cloud SQL Client role to the source project's service account in the destination project, i.e. the project that hosts the instance.
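A sketch of that cross-project grant with `gcloud` (project IDs and the service-account address are placeholders; the command is echoed as a dry run rather than executed):

```shell
# Placeholders: the project hosting the Cloud SQL instance, and the service
# account from the source project that the workload runs as.
DB_PROJECT="destination-project"
SA="proxy-sa@source-project.iam.gserviceaccount.com"

# Grant the client role in the project that owns the instance (dry run via echo):
echo gcloud projects add-iam-policy-binding "$DB_PROJECT" \
  --member="serviceAccount:$SA" \
  --role="roles/cloudsql.client"
```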
In my case I had two service accounts, one with Storage permissions and the other with Compute Engine permissions, and I was trying to use the credentials of the first account, which obviously denied me access.
I followed the connect-container-engine tutorial and also stumbled upon this issue. Unfortunately, I can't get it to work, as the logs show. I confirmed that the name ory-cloud:europe-west1:ory-cloud-platform-sql is the right one. I additionally checked that the client has the right privileges.