Open kinghajj opened 7 years ago
Thanks @kinghajj for the request. We appreciate hearing the details of your use case as it helps us prioritize the work.
While this is similar in scope to a few other issues (e.g. #3), it's specific and different enough that I think it merits its own issue.
Almost the exact same story for me. What I do is template the Elastic Beanstalk environment into the Dockerrun.aws.json when building the application, before zipping and uploading to S3. The downside is that every environment requires its own application bundle just so I can log to the correct CloudWatch Logs group.
Would really appreciate being able to use environment variables.
Just last night I spent an entire evening messing around with the CloudWatch Logs agent and barely documented Elastic Beanstalk configuration options, trying to get multi-container logging working. I'm massively in favour of adding this.
This is necessary for immutable builds between environments. +1
This would be awesome to have! I also have a pretty much identical use case: managing multiple multi-container Docker environments under the same Elastic Beanstalk application, with an environment variable (e.g. "dev", "stage", "prod") set per Elastic Beanstalk environment. I was looking for an easy way to get container logs streaming to separate CloudWatch log streams.
Same goes here. Any update on this becoming a real feature? I don't want to generate different images for each environment, as it defeats the whole purpose of Docker. Has anyone determined a workaround? Our environments have different subnets, so we might be able to filter by that, but that's not ideal at all.
Same issue here. I'm currently having to upload a different application version for each environment that I run an application on because of this.
I'd imagine one way that might work is if EB modified the Dockerrun file using some sort of simple templating engine, applying the EB environment's environment properties before the container definitions are shipped over to ECS. This might be easier to integrate into the EB code base than trying to figure out how to change how ECS reads the definition configuration.
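For illustration, the pre-deploy substitution step proposed above could be sketched with Python's `string.Template`, whose `$VAR` syntax matches the placeholders discussed in this thread. This is a hypothetical sketch of the idea, not an existing EB feature; the template text and `ENVIRONMENT` variable are assumptions:

```python
import json
from string import Template

def render_dockerrun(template_text: str, env: dict) -> str:
    """Substitute environment properties into a Dockerrun template.

    safe_substitute leaves unknown $VARs untouched instead of raising,
    so unrelated dollar signs in the file survive rendering.
    """
    return Template(template_text).safe_substitute(env)

# Hypothetical fragment of a Dockerrun.aws.json template.
template = '{"awslogs-group": "/armada/$ENVIRONMENT/tasks.log"}'
rendered = render_dockerrun(template, {"ENVIRONMENT": "develop"})
print(json.loads(rendered)["awslogs-group"])  # /armada/develop/tasks.log
```

EB would run something like this against the environment's properties before handing the container definitions to ECS, so the same application version could be deployed to any environment.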
Another vote here to have this option.
I'm still looking for this option.
@PettitWesley has provided a workaround for this in #74 using Firelens.
Still looking for this. Any updates on it being considered?
Almost a year later; it would be really great to have this!
+1
+1 ...
Coming up on 7 years of this.
Oh lol, same here. Another easy-to-implement feature that has just been ignored for YEARS! AWS team, do you need more employees? Feel free to contact the developers in this thread, because you obviously have a hard time fixing even the easiest of issues.
+1
At my company, we use multi-container Elastic Beanstalk tiers for all of our environments. Currently we stream logs to SumoLogic via their agent, but for our needs CloudWatch Logs would be sufficient and much cheaper. Each of our environments defines an environment variable "ENVIRONMENT", with values like "develop", "qa", etc. I found that recent ECS agents support streaming logs to CloudWatch natively via the awslogs driver. I attempted to do something like this:
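A sketch of the kind of container definition this describes, with the awslogs driver's log group parameterized by the `ENVIRONMENT` variable. The container name, image, memory, and region here are placeholder values, not the poster's actual configuration:

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "image": "example/web:latest",
      "essential": true,
      "memory": 512,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/armada/$ENVIRONMENT/tasks.log",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "web"
        }
      }
    }
  ]
}
```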
Some background: when our CI system produces a build, it instantiates the Dockerrun.aws.json from a template defined in our source repository. `BUILD_TAG` is `${branch}-${build_number}-${commit_hash_prefix}`. The CI system produces ZIP archives for the web and worker tiers, uploads them to S3, and registers them as application versions, which can be reused against any of the other environments via a Slack bot (`@bula deploy develop develop-38-abcd1234`).

I don't want to hardcode `"awslogs-group": "/armada/tasks.log"`, because then all environments' logs would be grouped the same. When I attempted this, however, I got an error:

```
CannotStartContainerError: API error (500): Failed to initialize logging driver: InvalidParameterException: 1 validation error detected: Value '/armada/$ENVIRONMENT/tasks.log' at 'logGroupName' failed to satisfy constraint: Member must satisfy regular ex
```

The only workaround I can think of would be to alter our CI system to register not application versions at build time but just ZIP archives with Dockerrun.aws.json still in Jinja2 template form; and, in tandem, alter the deployment bot to download that, extract it, instantiate the Dockerrun.aws.json template for the target environment, register a new application version, and update the target environment with it. That seems needlessly convoluted (and, frankly, ugly) to me. If there's some other, cleaner way to accomplish this, I'm open to any suggestions. But IMHO, it'd be quite nice to use environment variables in task definitions, to keep them easily reusable.