jpeters71 opened this issue 7 years ago
The `package` command doesn't make any API calls to resolve things. It inherits the `--profile` argument, but it's vestigial, so we should remove it. Or are you saying that specifying a profile in the deploy is failing?
I'm sorry, you're right: it's during the deploy. The error I get back is "Unable to create authorizer 'MyProvider': ProviderARNs need to be valid Cognito Userpools. Invalid ARNs-...".
However, I can get it to deploy if I change my `default` profile in my ~/.aws/credentials to look like whatever profile I use with the `--profile` parameter in the deploy call.
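(For reference, that workaround amounts to swapping which credentials live under `[default]` in ~/.aws/credentials. A typical layout, with keys elided and using the `chalice-dev-profile` name from later in this thread as the example profile:)

```ini
[default]
aws_access_key_id = ...
aws_secret_access_key = ...

[chalice-dev-profile]
aws_access_key_id = ...
aws_secret_access_key = ...
```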
Let me make sure I'm understanding you correctly. You have a Chalice app that looks something like this:
```python
from chalice import Chalice
from chalice import CognitoUserPoolAuthorizer

app = Chalice(app_name='resources')
authorizer = CognitoUserPoolAuthorizer(
    'MyPool', provider_arns=['arn:aws:cognito-idp:us-west-2:...:userpool/name'])


@app.route('/', methods=['GET'], authorizer=authorizer)
def index():
    return {'success': True}
```
The pool `arn:aws:cognito-idp:us-west-2:...:userpool/name` is controlled by an account under the profile `chalice-dev-profile`. Your `default` profile is a different account.
When running `chalice deploy --profile chalice-dev-profile` you get this error:
```
An error occurred (BadRequestException) when calling the ImportRestApi
operation: Errors found during import:
  Unable to create authorizer 'MyPool': ProviderARNs need to be valid
  Cognito Userpools. Invalid ARNs- arn:aws:cognito-idp:us-west-2:...:userpool/name
```
So to fix that, you manually set your `default` profile to have the correct credentials, which works. Is that about right?
Running that sample app, I was unable to reproduce the issue. I did get that error when I used the wrong account, but not when I specified the correct profile.
What chalice version are you running?
That's close. The only difference is that we're actually looking up the `provider_arns` from a DynamoDB table, and the value differs based on the profile. So, something like:

```python
authorizer = CognitoUserPoolAuthorizer(
    'MyPool', provider_arns=[get_arn_from_dynamo_for_current_profile()])
```
To be clear, the `get_arn_from_dynamo_for_current_profile()` function should return one ARN for one profile (e.g. "dev") and a different ARN for another profile (e.g. "production"). However, setting the `--profile` flag when chalice runs does not affect this call; we end up connecting to the DynamoDB table for whatever our default profile is, not the one specified with `--profile`.
The whole goal here is to avoid storing sensitive information in a config file or in code. Our call to DynamoDB actually uses AWS KMS to encrypt/decrypt the values stored there. It's all predicated on which profile we're using, though.
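For context, here's a minimal sketch of what such a profile-sensitive lookup might look like. The table name `app-config` and key `cognito_pool_arn` are hypothetical stand-ins (the real setup goes through credstash), but the profile sensitivity is the same:

```python
import boto3


def get_arn_from_dynamo_for_current_profile(profile_name=None):
    # With no explicit profile, boto3 falls back to the default profile
    # (or AWS_DEFAULT_PROFILE). That's the crux of the bug: Chalice never
    # passes its --profile value down to code like this.
    session = boto3.Session(profile_name=profile_name)
    table = session.resource('dynamodb').Table('app-config')  # hypothetical table
    item = table.get_item(Key={'name': 'cognito_pool_arn'})['Item']  # hypothetical key
    return item['value']
```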
Ah, I see what the problem is. `get_arn_from_dynamo_for_current_profile` is being run when Chalice imports your code, but we aren't doing anything to pass the profile along there. We should definitely update that. Thanks for helping me figure out the issue!
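For anyone curious why this happens: module-level code in app.py runs the moment the deployer imports it, before any per-command profile could be threaded through. A minimal sketch of the mechanism (using `importlib` directly; this is not Chalice's actual loader):

```python
import importlib

# Importing the user's module executes everything at module level,
# including the CognitoUserPoolAuthorizer(...) call and therefore the
# DynamoDB lookup, using whatever credentials the process default
# resolves to at that moment. The --profile value never reaches here.
app_module = importlib.import_module('app')
print(app_module.authorizer)  # already constructed as a side effect of import
```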
Recently ran into this issue when doing some work using a secondary profile. Setting `AWS_DEFAULT_PROFILE` fixes the issue, but it is certainly unexpected behavior given that the user is explicitly providing a profile on the CLI.
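Concretely, the workaround looks something like this (assuming the non-default profile is named `chalice-dev-profile`, as in the example above):

```
AWS_DEFAULT_PROFILE=chalice-dev-profile chalice deploy
```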
If this is relatively straightforward to fix, I'm happy to submit a PR.
It looks like when you specify the `--profile` flag, Chalice still uses the AWS default profile while building up the package's Swagger JSON. For the deployment itself, however, it does use the `--profile` flag and deploys to the correct environment.
Here's the use case: in our app.py we define a `CognitoUserPoolAuthorizer()` to be used, but the Cognito user pool ARN differs by environment/profile. We've set up a DynamoDB table to store certain sensitive configuration values (https://github.com/fugue/credstash), like our Cognito pool ARN. So for our dev environment, we've set up a dev profile that contains the correct values in the DynamoDB table for that environment; prod has a different profile with different values; etc. When we build, even though we've set the `--profile` flag properly, the deploy fails unless we also set the AWS default profile to the right values.
The problem seems to be that those values need to be resolved when generating the Swagger document, and at that point the `--profile` flag isn't being used.
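Until this is fixed in Chalice, one defensive pattern is to resolve the profile yourself at import time in app.py. A sketch, with the caveat that `CHALICE_PROFILE` is a made-up environment variable you'd have to export alongside `--profile` (Chalice doesn't read it), and the table/key names are the same hypothetical stand-ins as in the earlier sketch:

```python
import os

import boto3
from chalice import Chalice, CognitoUserPoolAuthorizer

# Hypothetical convention: export CHALICE_PROFILE to match --profile so
# module-level lookups use the same credentials as the deploy itself.
_session = boto3.Session(profile_name=os.environ.get('CHALICE_PROFILE'))


def get_arn_from_dynamo_for_current_profile():
    # Hypothetical table and key; the real setup goes through credstash.
    table = _session.resource('dynamodb').Table('app-config')
    return table.get_item(Key={'name': 'cognito_pool_arn'})['Item']['value']


app = Chalice(app_name='resources')
authorizer = CognitoUserPoolAuthorizer(
    'MyPool', provider_arns=[get_arn_from_dynamo_for_current_profile()])
```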