jgardezi opened this issue 7 years ago
Hey @jgardezi, what's your use-case? You just want to have different variables depending on the stage?
@nikgraf yes that is my use case. Need different variables depending on the stage. If possible can you please post example.
might take me a while to get to it, currently catching up with issues and PRs
As I understand it, the OP wants to be able to deploy to, say, stage `test`, and have that automatically change the value of certain env vars.
@nikgraf Do you mind if I try and answer this and you tell me if I'm close or not?
So: My function calls some endpoint to get some info. I have Dev, Test and Prod endpoints like:
dev.infoprovider.com
test.infoprovider.com
prod.infoprovider.com
If I use the config below will that automatically route requests based on the stage / environment?
```yaml
service: my-function-stuff

frameworkVersion: ">=1.2.0 <2.0.0"

provider:
  name: aws
  runtime: nodejs4.3
  environment:
    ENDPOINT: "${opt:stage}.infoprovider.com"

functions:
  getInfo:
    handler: handler.getInfo
...
```
In general, this seems a pretty standard use case, but it's an area where naming semantics confuse things and it can take a while before you get to the 'aha' moment. So, yeah, a simple example would reduce the time-to-aha for a lot of people.
There are a lot of examples using env variables and opt, but the READMEs don't cover those steps. Some improvements could be made along these lines to: https://github.com/serverless/examples/tree/master/aws-node-rest-api-with-dynamodb
Below are the steps I have done to solve this.
1- Create an env.yml file. The structure of this file looks like this:

```yaml
# env.yml
local:
  host: localhost
  port: 5432
  name: mydb
  user: root
  password: secret
dev:
  host: db.myserver.com
  port: 5432
  name: mydb
  user: root
  password: secret
```
In the env.yml file there are two environments: `local` and `dev`. `local` holds the variables for testing in the local development environment; `dev` is for the cloud hosting environment.
2- Reference env.yml in serverless.yml so that we deploy the microservice with the env variables for the chosen stage, i.e. `sls deploy --stage local` or `sls deploy --stage dev`:

```yaml
# serverless.yml
provider:
  stage: local
  environment: ${file(./env.yml):${opt:stage, self:provider.stage}}
```
That's all we need to configure multiple environments. I have customised it further so that we don't need to run `sls deploy --stage local`: the environment picks up `local` automatically, and the plain `sls deploy` command is all we need.
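For reference, the stage-defaulting described above can be written out explicitly (a minimal sketch; the service name and runtime here are placeholders, not from the steps above):

```yaml
# serverless.yml
service: my-service

provider:
  name: aws
  runtime: nodejs4.3
  # 'local' is used whenever --stage is not passed on the command line
  stage: ${opt:stage, 'local'}
  environment: ${file(./env.yml):${self:provider.stage}}
```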
3- Lastly, if you need to refer to the variables in .js files, all you need to do is read them from `process.env`:

```javascript
// PostgreSQL database connection parameters
var config = {
  host: process.env.host,
  port: process.env.port,
  database: process.env.name,
  user: process.env.user,
  password: process.env.password,
};
```
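As a quick sanity check, those same parameters can be assembled into a standard PostgreSQL connection URL (a hypothetical helper, not part of the original setup):

```javascript
// Build a libpq-style connection URL from the environment variables
// that serverless.yml injects (host, port, name, user, password).
function pgConnectionUrl(env) {
  return 'postgres://' + env.user + ':' + encodeURIComponent(env.password) +
         '@' + env.host + ':' + env.port + '/' + env.name;
}

// With the 'local' stage values from env.yml:
var url = pgConnectionUrl({
  host: 'localhost',
  port: '5432',
  name: 'mydb',
  user: 'root',
  password: 'secret',
});
// → postgres://root:secret@localhost:5432/mydb
```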
I hope the above solves a lot of issues.
Kind regards, Javed Gardezi
I am trying to replicate the above example to populate some of a function's variable values from the environment variables, but I am getting an error.

Example (partial serverless.yml):

```yaml
environment: ${file(./env.yml):${opt:stage, self:provider.stage}}
iamRoleStatements:
```

Error: `Trying to populate non string value into a string for variable ${env:s3.report.input.path}. Please make sure the value of the property is a string.`

I have s3.report.input.path in the env.yml file:

```yaml
dev:
  s3.report.input.path: 'path_value'
```

Any help in resolving this would be very helpful. [[update]] edited ../env.yml path to ./env.yml
@sandeeprao where is the .env.yml located? I see you ref. it once at the top using ./env.yml and then another time using ../env.yml
Can you check the path?
serverless.yml and env.yml are both located in the same folder. I modified both to use the same path, but it's the same issue.
What I did for this was:
With this configuration, I can have multiple envs (`${opt:stage}` comes from the serverless.yml `provider.stage` info), secure env variables (like the db user or password), and version control over the encrypted file for easy collaboration.
I think using things like ${file(./env.yml):${opt:stage}.s3.report.input.path}
and serverless-secrets-plugin
is rather extraneous for something that's very common, this functionality should be in "core" with a recommended best practice. At the very least there should be a very easy and straightforward way to specify different environment variables (i.e. Database URI, access tokens etc.) per stage.
As of August 2017, serverless-secrets-plugin hasn't been updated for 9 months.
+1 for this
Have any recommended solutions come about for this?
I recommend self-referencing `custom:` variables within serverless.yml, paired with something like AWS Systems Manager Parameter Store (SSM). This simplifies things by keeping the configuration in a single file, and still allows us to commit everything without worrying about leaking secure information.
```yaml
service: example

provider:
  name: aws
  vpc:
    securityGroupIds:
      - sg-xxxxxxxx
    subnetIds:
      - subnet-xxxxxxxx
      - subnet-xxxxxxxx
      - subnet-xxxxxxxx
  runtime: python3.6
  stage: local # or QA or PRD
  region: us-east-69
  memorySize: 512
  timeout: 60
  versionFunctions: false
  endpointType: REGIONAL
  environment:
    ONE: ${self:custom.one.${self:provider.stage}}
    TWO: ${self:custom.two.${self:provider.stage}}
    THREE: ${self:custom.three.${self:provider.stage}}
    FOUR: ${self:custom.four.${self:provider.stage}}
    FIVE: ${self:custom.five.${self:provider.stage}}
    SIX: ${self:custom.six.${self:provider.stage}}

functions:
  example-one:
    handler: handler.example_one
    name: example-one-${self:provider.stage}
    description: An example for setting variables based on stage deployed.

plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: non-linux
    vendor: ../packages
  one:
    local: "one.local"
    qa: "one.qa"
    prd: ${ssm:/example/prd/one} # get from SSM
  two:
    local: "two.local"
    qa: "two.qa"
    prd: ${ssm:/example/prd/two} # get from SSM
  three:
    local: "three.local"
    qa: "three.qa"
    prd: ${ssm:/example/prd/three} # get from SSM
  four:
    local: "four.local"
    qa: "four.qa"
    prd: ${ssm:/example/prd/four} # get from SSM
  five:
    local: "five.local"
    qa: "five.qa"
    prd: ${ssm:/example/prd/five} # get from SSM
  six:
    local: "six.local"
    qa: "six.qa"
    prd: ${ssm:/example/prd/six} # get from SSM
```
I agree with using SSM, but I decided to just use an encryption key to encrypt a file with all my secrets. That way I can commit them to git, and only have one secret to manage. For small projects this works well. This app shows how this works in practice: https://github.com/mikestaub/slack-lunch-club
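A minimal sketch of that single-key workflow, assuming OpenSSL 1.1.1+ is available (file names and the SECRETS_KEY variable are hypothetical; the linked repo may do this differently):

```shell
# Demo setup: a throwaway secrets file and key (in real usage,
# SECRETS_KEY is the one secret kept out of version control).
printf 'db_password: secret\n' > secrets.yml
export SECRETS_KEY=change-me

# Encrypt secrets.yml so only the ciphertext is committed to git:
openssl enc -aes-256-cbc -salt -pbkdf2 \
  -in secrets.yml -out secrets.yml.enc -pass pass:"$SECRETS_KEY"

# Decrypt before deploying, and verify the round trip:
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in secrets.yml.enc -out secrets.decrypted.yml -pass pass:"$SECRETS_KEY"
diff secrets.yml secrets.decrypted.yml && echo "round trip OK"
```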
@bubba-h57 I like your solution, but if I have encrypted variables and try to decrypt them with a `~true` suffix as per the docs, like so:
```yaml
one:
  local: "one.local"
  qa: "one.qa"
  prd: ${ssm:/example/prd/one~true}
```
I find that the actual decrypted value is stored in the Lambda function's environment variables. This means if you go into the AWS Console and navigate to your Lambda function, you can see the supposedly encrypted values in plain text - not good!
I thought perhaps you could instead put the `~true` suffix in the environment section, like so:

```yaml
environment:
  ONE: ${self:custom.one.${self:provider.stage}~true}
```

But this doesn't seem to work, possibly because serverless now has no clue the environment variable comes from SSM.
Currently my workaround is to store the variables in Lambda in a special format (i.e. `ssm:/foo`), and when I detect the `ssm:` prefix at runtime, I do an `ssm.getParameter` call (with `WithDecryption` set to `true`) via the AWS.SSM SDK and get things decrypted that way. Can anyone see anything wrong with this? I'll happily share the code if people think this is an OK solution.
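For what it's worth, that runtime lookup might look roughly like this (a sketch, assuming the aws-sdk v2 SSM client available in the Lambda runtime; the `ssm:` prefix convention and helper names are the commenter's own / hypothetical):

```javascript
// Detect the "ssm:/foo" convention in an env var value.
function isSsmRef(value) {
  return typeof value === 'string' && value.indexOf('ssm:') === 0;
}

// Resolve an env var at runtime: plain values pass through unchanged,
// "ssm:/path" values are fetched and decrypted from Parameter Store.
async function resolveEnvVar(value) {
  if (!isSsmRef(value)) return value;
  // aws-sdk is preinstalled in the Lambda runtime; required lazily here.
  const AWS = require('aws-sdk');
  const ssm = new AWS.SSM();
  const result = await ssm
    .getParameter({ Name: value.slice('ssm:'.length), WithDecryption: true })
    .promise();
  return result.Parameter.Value;
}
```

One caveat of this pattern: the decrypted value lives only in memory, so nothing secret shows up in the Lambda console, but each cold start pays the latency of the `getParameter` call unless you cache the result.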
Hi,
Can anyone provide an example of Development, Staging and Production variables using this documentation: https://serverless.com/framework/docs/providers/aws/guide/variables/?
Kind regards, Javed Gardezi