Repository for the course project for CSCI-B 649 Applied Distributed Systems in Spring 2022. This project is based on a multi-user system with a distributed architecture for consuming and processing weather data in various forms, logging its usage and presenting the weather information to the user in a web-based interface.
The project is in its initial stages of development and the README will be updated as new features are added.
The microservices in the architecture above are built with the following tech stack:
The application is containerized and its images are hosted in a Docker registry. To set up the application, run the docker-compose file.
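For example, assuming the compose file sits at the repository root as docker-compose.yml, a typical invocation looks like this:
# Pull the prebuilt images from the registry and start all services in the background
docker-compose pull
docker-compose up -d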
Navigate to this folder and run the following command to deploy the application in the Kubernetes cluster:
kubectl apply -f kubernetes/. --recursive
To manually scale the application to 3 replicas, run the kube-scale.sh script with -r set to 3:
bash kube-scale.sh -r 3
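For reference, a minimal sketch of what such a script might look like (the deployment name weather-app below is an assumption; the actual script in this repository may differ):
#!/bin/bash
# kube-scale.sh (illustrative sketch): read the replica count from -r and scale the deployment
while getopts "r:" opt; do
  case "$opt" in
    r) REPLICAS="$OPTARG" ;;
  esac
done
kubectl scale deployment weather-app --replicas="$REPLICAS"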
Jmeter Test details and Report
We first need an instance on which to set up Ansible. This can be a local or a remote machine; we chose a remote machine with an Ubuntu image. Log in to the machine and open a terminal.
Create an SSH key:
sudo su
ssh-keygen
This generates a public/private key pair. We will use this key later to SSH into the instances we create.
Copy the openrc.sh file to this instance. You can follow the steps in this link to download the openrc file, and then run the following command:
source <openrc.sh>
This saves to our local environment the variables that the OpenStack Python client needs to connect to the cloud.
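For reference, sourcing the openrc file typically exports variables along these lines (the values below are placeholders, not actual credentials):
export OS_AUTH_URL=https://<openstack-endpoint>:5000/v3
export OS_PROJECT_NAME=<project-name>
export OS_USERNAME=<username>
export OS_PASSWORD=<password>
export OS_REGION_NAME=<region>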
Make sure you have a public IP available. One can be created in the Exosphere environment or by running the following commands. Skip this step if you already have a public IP address.
pip3 install python-openstackclient
openstack floating ip create public
Paste the IP address you created into the first line below and run the following commands. We need to modify certain attributes in the cluster.tfvars file.
Note: The shell script below only modifies the IP address; the rest of the above attributes are already set based on the cloud environment provided to us.
export IP=149.165.152.125
bash instance-creation.sh
What does the instance-creation.sh shell script do?
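A rough, illustrative sketch of the kind of steps such a script performs is shown below; the variable names, file paths, and Terraform/Kubespray usage here are assumptions, and the actual script may differ:
# Substitute the floating IP into the Terraform variables file (assumed variable name)
sed -i "s/^k8s_master_fips.*/k8s_master_fips = [\"$IP\"]/" cluster.tfvars
# Provision the instances and install Kubernetes (assumed Terraform + Kubespray workflow)
terraform apply -var-file=cluster.tfvars
ansible-playbook -i inventory/hosts --become cluster.yml
# Copy the deployment files to the master node and log into it
scp -r deploy ubuntu@$IP:~/
ssh ubuntu@$IP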
Kubernetes is now installed in our cluster (at the end of the above script you will be logged into the master node), and all the necessary deployment files are copied to the master node.
cd deploy
bash deploy.sh
To test the setup, you can SSH to the master machine and check the connected nodes. (Optional)
ssh ubuntu@$IP
sudo su
kubectl get nodes
Custos is a software framework that provides common security operations for science gateways, including user identity and access management, gateway tenant profile management, resource secrets management, and groups and sharing management. It is implemented with scalable microservice architecture that can provide highly available, fault-tolerant operations. Custos exposes these services through a language-independent API that encapsulates science gateway usage scenarios.
For this project, the Custos framework was first deployed on Jetstream instances, and stress and load testing were then performed based on the testing strategies mentioned in the course lecture.
Create an SSH key pair on your local machine and copy the public key to the Jetstream settings (https://use.jetstream-cloud.org/application/settings), so that when a new instance is created your local machine's public key is registered in its authorized keys.
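For example (using the default key location; adjust the key type and path as needed):
# Generate a key pair and print the public key so it can be pasted into the Jetstream settings page
ssh-keygen -t rsa -b 4096
cat ~/.ssh/id_rsa.pub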
Spawn five Jetstream 1 Ubuntu 20.04 LTS machines of medium size with the following configuration:
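As an illustration, instances of this size could also be spawned from the OpenStack CLI along these lines (the flavor, image, key, and network names below are placeholders/assumptions, not values from our environment):
openstack server create \
  --flavor m1.medium \
  --image "Ubuntu 20.04 LTS" \
  --key-name <your-key-name> \
  --network <project-network> \
  custos-node-1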
Deployment of cert-manager, Keycloak, Consul, Vault, MySQL, and Custos was adapted from the procedure in the following link: https://github.com/airavata-courses/DSDummies/wiki/Project-4
Some of the load testing scenarios and the results obtained are shown below
• Register Users Endpoint
We load tested the register user endpoint of Custos using two different loads (a sample JMeter invocation is sketched after the list):
o 100 concurrent requests (threads)
o 500 concurrent requests (threads)
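A sketch of how such a run can be launched with JMeter in non-GUI mode is shown below; the test plan file name and the thread-count property are assumptions, not artifacts from this repository:
# 100-thread run of the register-user test plan, writing results and an HTML report
jmeter -n -t register_user.jmx -Jthreads=100 \
  -l register_100.jtl -e -o report_register_100/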
With 100 concurrent requests, we saw that the average response time was around 19 ms for each register request, and Custos had a throughput of 164.998 requests/min.
After this we increased the load to 500 requests. The results are below.
With 500 concurrent requests, the average response time was around 91 ms for each register request, and Custos had a throughput of 162.618 requests/min, which is about the same as when the load was 100 requests.
• Create_group Endpoint
We load tested the create group endpoint of Custos using two different loads:
o 100 concurrent requests (threads)
o 500 concurrent requests (threads)
With 100 concurrent requests, the average response time was around 14 ms for each create group request, and Custos had a throughput of 221.435 requests/min.
With 500 concurrent requests, the average response time was around 93 ms for each create group request, and Custos had a throughput of 147.226 requests/min. So, as the load increased, the response time increased and the throughput decreased, but Custos was still easily able to handle 500 requests.
custos_host: js-156-79.jetstream-cloud.org
custos_port: 30367
To analyze and compare the results from both deployments, we ran the 500-request test against both the register user and create group endpoints to see the difference.
500 requests test for register endpoint
In our Custos deployment, we got an average response time of 100 ms, which is similar to the dev Custos deployment. The throughput was a little higher at 157.252 requests/min.
• We initially tried to deploy Custos on Jetstream 2 a couple of times and it failed. Later we had to switch to Jetstream 1 to set up the instances.