That is just a configuration change. Currently they are in the same availability zone; let's stick with a single AZ for now, and we can expand our autoscaling group across zones after a successful integration.
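For reference, going multi-AZ later should only amount to a single CLI call along these lines (the group name and zones below are placeholders, not our actual values):

```bash
# Placeholder sketch: spread the existing autoscaling group across more AZs.
# For a VPC-based group, you would pass subnet IDs via --vpc-zone-identifier instead.
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name api-server-asg \
    --availability-zones us-east-1a us-east-1b us-east-1c
```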
How do we configure the steps to be run on each instance, i.e. start consul and run the api-server?
There are two steps to it: userdata.sh and codeDeployInstall.sh both run on each instance.
Note, though, that we have to update the launch configuration in AWS for changes in userdata.sh to take effect.
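Since launch configurations are immutable, updating userdata.sh means creating a new launch configuration and pointing the autoscaling group at it. A rough sketch with placeholder names, AMI, and instance type:

```bash
# Launch configurations can't be edited in place, so create a new one with the
# updated user data and switch the autoscaling group over to it.
# All names and IDs below are placeholders.
aws autoscaling create-launch-configuration \
    --launch-configuration-name api-server-lc-v2 \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --spot-price "0.01" \
    --user-data file://userdata.sh

aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name api-server-asg \
    --launch-configuration-name api-server-lc-v2
```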
Can we assume that codeDeployInstall.sh will be installed only on the primary highly available server and userdata.sh will be installed on the rest of the backup servers?
I did not get your question, but let me clarify: in our environment setup, both userdata.sh and codeDeployInstall.sh will run on all servers. But as you mentioned, we can configure the environment to include backup servers, which right now are not in place.
OK, I understand. But we need to configure two separate deployment groups: one for the backup servers and one for the highly available server.
When the backup servers start, they will try to connect to the HA server and register themselves. For that we need an HA server with a fixed IP address.
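For reference, the CLI side of that would look roughly like this (application name, role ARN, tags, and IDs are all made-up placeholders):

```bash
# Placeholder sketch: two CodeDeploy deployment groups keyed off EC2 tags,
# plus an Elastic IP so the HA server keeps a fixed address.
aws deploy create-deployment-group \
    --application-name api-server \
    --deployment-group-name ha-server \
    --service-role-arn arn:aws:iam::123456789012:role/CodeDeployRole \
    --ec2-tag-filters Key=role,Value=ha-server,Type=KEY_AND_VALUE

aws deploy create-deployment-group \
    --application-name api-server \
    --deployment-group-name backup-servers \
    --service-role-arn arn:aws:iam::123456789012:role/CodeDeployRole \
    --ec2-tag-filters Key=role,Value=backup,Type=KEY_AND_VALUE

# Fixed IP for the HA server (instance ID and allocation ID are placeholders).
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-12345678
```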
First of all, I did not get the point of deploying a backup api-server, as we are using spot instances, which are prone to go down at any time. Do you want a dedicated host to run the api-server (a highly available instance) included in our environment?
One more thing: the load balancer will not run on a spot instance; rather, we will deploy it on a dedicated instance (an on-demand instance), which gives us high availability.
I recommend we not create a dedicated instance for the api-server. Let the spot instances host the api-server; only our load balancer will get a dedicated instance. This way, we can test the fault tolerance of our load balancer.
What do you think?
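To illustrate the split I have in mind, roughly (AMIs, prices, counts, and instance types are placeholders):

```bash
# Placeholder sketch: api-server hosts come from spot requests (cheap, but can
# be reclaimed at any time), while the load balancer gets an on-demand instance.
aws ec2 request-spot-instances \
    --spot-price "0.01" \
    --instance-count 3 \
    --launch-specification '{"ImageId":"ami-xxxxxxxx","InstanceType":"t2.micro"}'

aws ec2 run-instances \
    --image-id ami-yyyyyyyy \
    --instance-type t2.small \
    --count 1
```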
Yea, that'll be a good idea.
So shall we create a separate branch for the load balancer? It'll need different install scripts.
I have added a commit de1d9ee1588df9b044cd6315809acdf4989c7ee4 to userdata.sh to start consul on each instance. We just need to modify <HA-server> to point to the IP address of the HA server.
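Conceptually, the consul startup in userdata.sh is along these lines (a simplified sketch, not the literal contents of the commit; the HA_SERVER variable is just to mark the placeholder):

```bash
#!/bin/bash
# Rough sketch: start a consul agent at boot and have it join the cluster
# running on the HA server.
HA_SERVER="<HA-server>"   # replace with the HA server's IP address

consul agent \
    -data-dir=/tmp/consul \
    -bind="$(hostname -I | awk '{print $1}')" \
    -retry-join="$HA_SERVER" > /var/log/consul.log 2>&1 &
```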
Creating a separate branch is the easy way out, but in the long run it will be hard to keep it synced with the main branch. For now, create a new branch and start development; meanwhile I will research alternative approaches.
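Something along these lines should work (the branch name is just a suggestion):

```bash
# Branch name is just a suggestion; rebase regularly so it doesn't drift
# too far from the main branch.
git checkout -b load-balancer
git push -u origin load-balancer

# Periodically, to stay in sync with the main branch:
git fetch origin
git rebase origin/master
```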
@anujbhan Will the spot instances be spread across multiple AWS regions, or will they be concentrated in the same region?