crspybits opened this issue 3 years ago
I'm using `eb deploy` with a new bundle.zip, i.e., a new deploy artifact with an updated `.elasticbeanstalk/config.yml`. This deploy artifact contains the new Server.json and a reference to the new Docker container.

My config.yml file includes these lines:
```yaml
aws:autoscaling:launchconfiguration:
  IamInstanceProfile: aws-elasticbeanstalk-ec2-role
  InstanceType: t2.micro
  EC2KeyName: amazon1
aws:autoscaling:asg:
  MaxSize: '1'
aws:elasticbeanstalk:environment:
  EnvironmentType: LoadBalanced
  LoadBalancerType: classic
  ServiceRole: aws-elasticbeanstalk-service-role
```
EB deployment policies are listed here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html
(From that reference) "By default, your environment uses all-at-once deployments. If you created the environment with the EB CLI and it's a scalable environment (you didn't specify the --single option), it uses rolling deployments."
The following is from the AWS EB Web UI for my server (screenshot): clearly, it's using rolling updates.
Still from that reference above: "Rolling – Avoids downtime and minimizes reduced availability, at a cost of a longer deployment time. Suitable if you can't accept any period of completely lost service. With this method, your application is deployed to your environment one batch of instances at a time. Most bandwidth is retained throughout the deployment."
I think this means that with rolling updates, the same EC2 instances are used, and the new Docker container is just deployed to those instances. That's consistent with other parts of the docs I'm reading. Such as: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html#environments-cfg-rollingdeployments-method
Rolling updates vs. deployment: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rollingupdates.html
"To maintain full capacity during deployments, you can configure your environment to launch a new batch of instances before taking any instances out of service. This option is known as a rolling deployment with an additional batch. When the deployment completes, Elastic Beanstalk terminates the additional batch of instances." (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html)
Conclusion: I believe that if new EC2 instances are launched with each deployment, I won't have this problem. It looks like rolling updates are one way to do this.
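For reference, the rolling updates page cited above configures instance replacement through the `aws:autoscaling:updatepolicy:rollingupdate` namespace. A minimal sketch of what that might look like (the filename and specific values here are my own illustration, not taken from my environment):

```yaml
# .ebextensions/rolling-updates.config (hypothetical filename)
option_settings:
  aws:autoscaling:updatepolicy:rollingupdate:
    RollingUpdateEnabled: true
    RollingUpdateType: Health     # replace instances in batches, gated on health checks
    MinInstancesInService: '1'    # keep at least one instance serving during the update
```

Note that rolling *updates* replace the EC2 instances themselves, which is distinct from the rolling *deployments* of application versions discussed above.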
"Deployment option namespaces": Including RollingWithAdditionalBatch
-- see https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html
The following example is there too:
```yaml
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: RollingWithAdditionalBatch
    BatchSizeType: Fixed
    BatchSize: 5
```
For `aws:elasticbeanstalk:command`, see: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-elasticbeanstalkcommand
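To apply this in my own bundle, my understanding is that such an option_settings block goes in a config file under `.ebextensions/` in the deploy artifact. The filename below is my own choice, and I've adjusted the batch size to fit my one-instance environment:

```yaml
# .ebextensions/deployment-policy.config (hypothetical filename)
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: RollingWithAdditionalBatch
    BatchSizeType: Fixed
    BatchSize: 1   # my environment has MaxSize: '1', so one instance per batch
```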
Here are the results of the deploy:
```
MacBook-Pro-4:neebla-02-production chris$ ./deploy.sh
Alert: The platform version that your environment is using isn't recommended. There's a recommended version in the same platform branch.
Uploading neebla-02-production/app-210606_145849.zip to S3. This may take a while.
Upload Complete.
2021-06-06 20:58:50 INFO Environment update is starting.
2021-06-06 20:59:30 INFO Rolling with Additional Batch deployment policy enabled. Launching a new batch of 1 additional instance(s).
2021-06-06 21:01:24 INFO Batch 1: 1 EC2 instance(s) [i-0ee6788a6f604b1c0] launched. Deploying application version 'app-210606_145849'.
2021-06-06 21:02:00 INFO Successfully pulled crspybits/syncserver-runner:1.10.5
2021-06-06 21:02:03 INFO Successfully built aws_beanstalk/staging-app
2021-06-06 21:02:13 INFO Docker container afb403f5a66a is running aws_beanstalk/current-app.
2021-06-06 21:04:20 INFO Batch 1: Completed application deployment.
2021-06-06 21:04:20 INFO Command execution completed on 1 of 2 instances in environment.
2021-06-06 21:05:26 INFO Terminating excess instance(s): [i-00c4f808416901a91].
2021-06-06 21:05:29 INFO Excess instance(s) terminated.
2021-06-06 21:05:31 INFO New application version was deployed to running EC2 instances.
2021-06-06 21:05:32 INFO Environment update completed successfully.
```
I used `eb ssh` to connect to the EC2 instance, and the server logs look right now too. No more extraneous uploader.
So this has patched the issue. But I'd still like to know why a timer-based thread in my server can survive Docker container updates on an EC2 instance. Asked a question on this: https://stackoverflow.com/questions/67863943/swift-server-timer-based-thread-survives-docker-container-redeploy-on-aws
I just tailed logs on the server running on AWS. The output is ongoing -- from a running process, not stale logs -- and it references the MySQL server `syncserver-dev2.<SNIP>.us-west-2.rds.amazonaws.com`. This is a stale reference: the AWS MySQL RDS instance was only present for a brief interval while I was migrating. It is no longer running, and thus it's not surprising that access to it is failing. What is surprising is that there is a reference to this old server at all. The currently deployed Server.json configuration does not contain a reference to this MySQL RDS instance.
I believe this is occurring as a result of:

a) the manner in which I'm deploying updates to the server using AWS Elastic Beanstalk tools, and
b) the new architecture of the server, which runs a timer-based thread to process deferred uploads.

Somehow that timer-based thread is surviving the deploy of the new server, at least in some cases.