This article is an end-to-end guide for installing Tazama on a cluster, but only once your EKS infrastructure has been set up.
Read through the infrastructure specification before starting with the deployment guide.
Infrastructure Spec for Tazama Sandbox
Infrastructure Spec for Tazama
Important: Access to the Tazama GIT Repository is required to proceed. If you do not currently have this access, or if you are unsure about your access level, please reach out to the Tazama Team to request the necessary permissions. It's crucial to ensure that you have the appropriate credentials to access the repository for seamless integration and workflow management.
Our repository list includes a variety of components, each representing specific microservices and tools within our ecosystem. You need to create these repositories in the ECR service in your AWS environment.
Repository list:
Default release version: rel-1-0-0
rule-001-<release version>-<envName variable set in jenkins>
rule-002-<release version>-<envName variable set in jenkins>
rule-003-<release version>-<envName variable set in jenkins>
rule-004-<release version>-<envName variable set in jenkins>
rule-006-<release version>-<envName variable set in jenkins>
rule-007-<release version>-<envName variable set in jenkins>
rule-008-<release version>-<envName variable set in jenkins>
rule-010-<release version>-<envName variable set in jenkins>
rule-011-<release version>-<envName variable set in jenkins>
rule-016-<release version>-<envName variable set in jenkins>
rule-017-<release version>-<envName variable set in jenkins>
rule-018-<release version>-<envName variable set in jenkins>
rule-021-<release version>-<envName variable set in jenkins>
rule-024-<release version>-<envName variable set in jenkins>
rule-025-<release version>-<envName variable set in jenkins>
rule-026-<release version>-<envName variable set in jenkins>
rule-027-<release version>-<envName variable set in jenkins>
rule-028-<release version>-<envName variable set in jenkins>
rule-030-<release version>-<envName variable set in jenkins>
rule-044-<release version>-<envName variable set in jenkins>
rule-045-<release version>-<envName variable set in jenkins>
rule-048-<release version>-<envName variable set in jenkins>
rule-054-<release version>-<envName variable set in jenkins>
rule-063-<release version>-<envName variable set in jenkins>
rule-074-<release version>-<envName variable set in jenkins>
rule-075-<release version>-<envName variable set in jenkins>
rule-076-<release version>-<envName variable set in jenkins>
rule-078-<release version>-<envName variable set in jenkins>
rule-083-<release version>-<envName variable set in jenkins>
rule-084-<release version>-<envName variable set in jenkins>
rule-090-<release version>-<envName variable set in jenkins>
rule-091-<release version>-<envName variable set in jenkins>
jenkins-inbound-agent
event-director-<release version>-<envName variable set in jenkins>
event-sidecar-<release version>
lumberjack-<envName variable set in jenkins>
tms-service-<release version>-<envName variable set in jenkins>
transaction-aggregation-decisioning-processor-<release version>-<envName variable set in jenkins>
typology-processor-<release version>-<envName variable set in jenkins>
This guide will walk you through the setup of the Tazama (Real-time Antifraud and Money Laundering Monitoring System) on a Kubernetes cluster using Helm charts. Helm charts simplify the deployment and management of applications on Kubernetes clusters. We will deploy various services, ingresses, pods, replica sets, and more.
Go to the eks-terraform folder and follow the steps in the README.md for the detailed instructions.
The installation of our system requires the creation of specific namespaces within your cluster. These namespaces will be automatically created by the infra-chart Helm chart. Ensure these namespaces exist before proceeding:
cicd
development
ingress-nginx
processor
ecks-ns
aws-ns
If they are not created automatically, you can manually add them using the following command for each namespace:
kubectl create namespace <namespace-name>
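For convenience, you can create all of the namespaces listed above in one pass; this loop simply repeats the same command for each name:
for ns in cicd development ingress-nginx processor ecks-ns aws-ns; do
  kubectl create namespace "$ns"
done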
The list below shows the different Helm charts:
Optional - Please note that these are additional features; while not required, they can enhance the platform's capabilities. Implementing them is optional and omitting them will not hinder the basic operation or the end-to-end functionality of the platform.
e.g.: Another Helm chart exists for the clustered version of ArangoDB. However, the single-deployment version is preferred over the clustered one, because the clustered version depends on functionality that is only available in the enterprise option.
https://github.com/tazama-lf/EKS-helm
First, add the Tazama Helm repository to enable the installation of charts:
helm repo add Tazama https://tazama-lf.github.io/EKS-helm/
helm repo update
To confirm the Tazama repo has been successfully added:
helm search repo Tazama
To expose services outside your cluster, enable ingress on necessary charts:
helm install kibana Tazama/kibana --namespace=development --set ingress.enabled=true
...
If you prefer not to configure an ingress controller, you can simply use port forwarding to access the front-end interfaces of your applications. This approach will not impact the end-to-end functionality of your system, as it is designed to utilize fully qualified domain names (FQDNs) for internal cluster communication.
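For example, a port-forward to the Kibana front end might look like the following; the service name kibana-kibana is an assumption based on the default chart naming, so substitute the name reported by kubectl get svc -n development:
kubectl port-forward svc/kibana-kibana 5601:5601 --namespace development
Kibana is then reachable on http://localhost:5601 while the port-forward is running.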
The Tazama system is composed of multiple Helm charts for various services and components. These need to be installed in a specific order due to dependencies.
helm install infra-chart Tazama/infra-chart
helm repo update
helm install nginx-ingress-controller Tazama/ingress-nginx --namespace=ingress-nginx
helm install elasticsearch Tazama/elasticsearch --namespace=development
helm install kibana Tazama/kibana --set ingress.enabled=true --namespace=development
helm install apm Tazama/apm-server --namespace=development
helm install logstash Tazama/logstash --namespace=development
helm install arangodb-ingress-proxy Tazama/arangodb-ingress-proxy --namespace=development
helm install arango Tazama/arangodb --set ingress.enabled=true --namespace=development
helm install redis-cluster Tazama/redis-cluster --namespace=development
helm install nats Tazama/nats --set ingress.enabled=true --namespace=development
helm repo add jenkins https://charts.jenkins.io
helm repo update
helm install jenkins jenkins/jenkins --set ingress.enabled=true --namespace=cicd
Navigate to the Jenkins UI and log in with the username admin and the retrieved password. Go to Manage Jenkins, and under System Configuration select Plugins; install the Configuration File, Nodejs and Docker plugins, which will enable later configuration steps.
Setup Notes for Deploying AWS ECR Credentials with Helm
To deploy the AWS ECR credentials using our Helm chart, you will need to provide the ECR registry URL, your access key ID, and your secret access key. These are sensitive credentials that allow Kubernetes to pull your container images from AWS ECR. This installation will create the frmpullsecret secret that will be used to pull images from the ECR.
https://docs.aws.amazon.com/AmazonECR/latest/userguide/registry_auth.html
123456789012.dkr.ecr.region.amazonaws.com/my-repository
Use the helm install command to deploy your chart, substituting in your ECR registry URL and your AWS credentials. For example:
helm install aws-ecr Tazama/aws-ecr-credential \
--set aws.ecrRegistry="123456789012.dkr.ecr.region.amazonaws.com/my-repository" \
--set aws.accessKeyId="AKIAIOSFODNN7EXAMPLE" \
--set aws.secretAccessKey="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
Caution: Never expose your AWS access key ID or secret access key in version control or in a public setting.
For optional components like Grafana, Prometheus, Vault, and KeyCloak, use similar commands if you decide to implement these features.
Extra Information: https://helm.sh/docs/helm/helm_install/
If you need to remove the Tazama deployment:
helm uninstall Tazama
For a system utilizing a variety of Helm charts, optimizing performance, storage, and configuration can significantly impact its efficiency and scalability. Below are details on how to configure and optimize each component of your Tazama system:
nats / nats-streaming: configure the stateful set with appropriate volume sizes, and tune the max_connections, max_payload, and write_deadline settings for better performance.
elasticsearch: adjust number_of_shards and number_of_replicas based on your use case.
redis-cluster: configure maxmemory policies and replication settings for optimal performance.
Each of these components plays a critical role in the Tazama system. By carefully configuring and optimizing them according to the guidelines provided, you can ensure that your system is secure, scalable, and performs optimally. Always refer to the official documentation for the most up-to-date information and advanced configuration options.
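As one concrete illustration of the Elasticsearch settings mentioned above, shard and replica counts can be set through the standard Elasticsearch index API. The service name elasticsearch-master and the index name example-index below are assumptions, so adjust them to your deployment; run the port-forward in a separate terminal:
kubectl port-forward svc/elasticsearch-master 9200:9200 --namespace development
curl -k -u elastic:<elastic-password> -X PUT "https://localhost:9200/example-index" \
  -H 'Content-Type: application/json' \
  -d '{"settings":{"number_of_shards":3,"number_of_replicas":1}}'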
In order for the processor pods to write logs to the lumberjack deployment, which then writes the log information to Elasticsearch, copy the elasticsearch-master-certs secret from the development namespace to the processor namespace.
Example:
apiVersion: v1
kind: Secret
metadata:
name: elasticsearch-master-certs
namespace: processor
type: kubernetes.io/tls
data:
ca.crt: >-
tls.crt: >-
tls.key: >-
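A minimal sketch of copying the certificate secret across namespaces with kubectl, assuming the secret already exists in development and that jq is available locally (jq strips the fields that are specific to the original object):
kubectl get secret elasticsearch-master-certs --namespace development -o json \
  | jq 'del(.metadata.namespace, .metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp)' \
  | kubectl apply --namespace processor -f -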
Secure your ingress with TLS by creating a tlscomsecret in each required namespace:
You can generate a self-signed certificate and private key with this command:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=${HOST}/O=${HOST}" -addext "subjectAltName = DNS:${HOST}"
apiVersion: v1
kind: Secret
metadata:
name: tlscomsecret
namespace: development
type: kubernetes.io/tls
data:
tls.crt: <base64-encoded-cert>
tls.key: <base64-encoded-key>
Or you can use kubectl to create the secret by running the command below:
kubectl create secret tls ${CERT_NAME} --key tls.key --cert tls.crt -n development
Create the secret in each of the required namespaces (development, processor, cicd, default).
Customize your ingress resources to match your domain names and assign them to the nginx-ingress-controller's IP address:
Using example.test.com as the hostname, please see the example below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: cicd-ingress
namespace: cicd
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: HTTP
nginx.ingress.kubernetes.io/cors-allow-headers: X-Forwarded-For
nginx.ingress.kubernetes.io/proxy-body-size: 50m
nginx.ingress.kubernetes.io/use-regex: 'true'
...
spec:
tls:
- hosts:
- example.test.com
secretName: tlscomsecret
rules:
- host: example.test.com
...
Please see the TMS example below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress
namespace: processor
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: HTTP
nginx.ingress.kubernetes.io/cors-allow-headers: X-Forwarded-For
nginx.ingress.kubernetes.io/proxy-body-size: 50m
nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
tls:
- hosts:
- example.test.com
secretName: tlscomsecret
rules:
- host: example.test.com
http:
paths:
- path: /execute
pathType: ImplementationSpecific
backend:
service:
name: transaction-monitoring-service-rel-1-0-0
port:
number: 3000
- path: /
pathType: ImplementationSpecific
backend:
service:
name: transaction-monitoring-service-rel-1-0-0
port:
number: 3000
- path: /natsPublish
pathType: ImplementationSpecific
backend:
service:
name: nats-utilities-rel-1-0-0
port:
number: 3000
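Once the ingress has been applied, a quick way to check that the host and TLS resolve through the NGINX controller is a request against its external IP; the -k flag is needed because of the self-signed certificate, and example.test.com is the hypothetical host from above:
kubectl get svc --namespace ingress-nginx
curl -vk https://example.test.com/ --resolve example.test.com:443:<EXTERNAL-IP>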
After installing the Vault chart, you'll need to initialize and unseal Vault manually. The process involves generating unseal keys and a root token which you'll use to access and configure Vault further.
For now, Vault has been integrated into the system, but the Jenkins variables have not yet been added to Vault to be pulled through. This will be done in a future update.
https://developer.hashicorp.com/vault/docs/concepts/seal
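A minimal sketch of the init/unseal flow, assuming the Vault release runs as the pod vault-0 in the development namespace (adjust the pod name and namespace to your installation). By default, vault operator init prints five unseal keys with a threshold of three:
kubectl exec -n development vault-0 -- vault operator init
kubectl exec -n development vault-0 -- vault operator unseal <unseal-key-1>
kubectl exec -n development vault-0 -- vault operator unseal <unseal-key-2>
kubectl exec -n development vault-0 -- vault operator unseal <unseal-key-3>
Keep the unseal keys and the root token somewhere safe; the root token is what you will use to log in and configure Vault further.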
If LogLevel is set to info, error, etc. in your Jenkins environment variables, then you will need to configure this.
For comprehensive instructions on how to configure logging to Elasticsearch, please refer to the accompanying document. It provides a step-by-step guide that covers all the necessary procedures to ensure your logging system is properly set up, capturing and forwarding logs to Elasticsearch. This includes configuring log shippers, setting up Elasticsearch indices, and establishing the necessary security and access controls. By following this documentation, you can enable efficient log management and monitoring for your services.
If APMActive is set to true (default: true) in your Jenkins environment variables, then you will need to configure this.
Once configured, the APM tool will begin collecting data on application performance metrics, such as response times, error rates, and throughput, which are critical for identifying and resolving performance issues. The collected data is sent to the APM server, where it can be visualized and analyzed. For detailed steps on integrating and configuring APM with your Jenkins environment, please refer to the specific APM setup documentation provided in your APM tool's resources.
The following sections of the guide require you to work within the Jenkins UI. You can access the UI either through a domain, if you configured an ingress, or by port forwarding.
Port forward Jenkins to be accessible on localhost:8080 by running:
kubectl --namespace cicd port-forward svc/jenkins 8080:8080
Get your 'admin' user password by running:
kubectl exec --namespace cicd -it svc/jenkins -c jenkins -- /bin/cat /run/secrets/additional/chart-admin-password && echo
Credentials are critical for Jenkins to interact securely with other services such as source control management systems (like GitHub), container registries, and Kubernetes clusters. Jenkins provides a centralized credentials store where you can manage all these credentials. Here's a step-by-step guide based on the screenshots provided:
To retrieve your ECR credentials for the AWS user you can follow this guide:
Or run this command to retrieve your token:
aws ecr get-login-password --region <region of the ECR>
The above command will print out a token; copy that token, which is used as your password.
Example
Username: AWS
Password:
To configure Jenkins to use Kubernetes secrets for authenticating with Kubernetes services or private registries, you can follow these steps, similar to setting up GitHub package read access:
Use the token stored in the scjenkins-secret.
Paste the token from scjenkins-secret in namespace=processor into the Secret field.
Set the ID to kubernetespro. This ID will be used to reference these credentials within your Jenkins pipelines or job configurations.
Following this process will allow Jenkins jobs to authenticate with Kubernetes using the token stored in the secret, enabling operations that require Kubernetes access or pulling images from private registries linked to your Kubernetes environment.
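If you need to retrieve the token value referenced above, a minimal sketch with kubectl, assuming scjenkins-secret is a service-account token secret that exposes a token key:
kubectl get secret scjenkins-secret --namespace processor -o jsonpath='{.data.token}' | base64 --decode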
Navigate to Manage Jenkins → Managed files
The image shows a Jenkins configuration screen for adding a managed file, specifically an NPM config file (npmrc). Here's a breakdown of the steps and fields:
The relevant npm scopes are frmscoe and tazama-lf.
Content: The text area labeled 'Content' is where you input the actual content of the .npmrc file. This content typically includes configuration settings such as the registry URL, authentication tokens, and various other npm options. Note that always-auth = false is not always required (usually only for public registries).
Add: After configuring all the fields, you would click "Add" to save this managed file configuration.
Once you've added this managed file, Jenkins can use it in various jobs that require npm to access private packages or specific registries. The managed file will be placed in the working directory of the job when it runs, ensuring that npm commands use the provided configuration.
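As a rough illustration, the .npmrc content for packages hosted on GitHub Packages might look like the following; the scope, registry URL, and token placeholder are assumptions to adapt to your own registry setup:
@tazama-lf:registry=https://npm.pkg.github.com/
//npm.pkg.github.com/:_authToken=<your-github-token>
always-auth=false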
Navigate to Manage Jenkins → Tools
This needs to be completed before adding the Jenkins Cloud agent.
Please follow this document to help you build and push the image to the container registry.
Building the Jenkins Agent Image
Navigate to Manage Jenkins → Clouds → Kubernetes settings
Add the Path to Your Kubernetes Instance: Enter the URL of your Kubernetes API server in the Kubernetes URL field. This allows Jenkins to communicate with your Kubernetes cluster. Leave the default value set to https://kubernetes.default/
Disable HTTPS Certificate Check: If your Kubernetes cluster uses a self-signed certificate or you are in a development environment where certificate validation is not critical, you can disable the HTTPS certificate check. However, for production environments, it is recommended to use a valid SSL certificate and leave this option unchecked for security reasons.
Add Kubernetes Namespace: Enter cicd in the Kubernetes Namespace field. This is where your Jenkins agents will run within the Kubernetes cluster.
Add Your Kubernetes Credentials: Select the credentials you have created for Kubernetes access. These credentials will be used by Jenkins to authenticate with the Kubernetes cluster. Select None
Click the Test Connection button and ensure the connection is successful.
Select WebSocket: Enabling WebSocket is useful for maintaining a stable connection between Jenkins and the Kubernetes cluster, especially when Jenkins is behind a reverse proxy or firewall.
Add Jenkins URL: This should be the internal service URL for Jenkins within your Kubernetes cluster, like http://jenkins.cicd.svc.cluster.local
Add Pod Label: Labels are key-value pairs used for identifying resources within Kubernetes. Here, you should add a label with the key jenkins and the value agent. This label will be used to associate the built pods with the Jenkins service.
Add a Pod Template: This step involves defining a new pod template, which Jenkins will use to spin up agents on your Kubernetes cluster.
A new pod template can be created, but we'll use the existing/default one and edit it to add the values below.
Name: Name the pod template jenkins-builder. This name is used to reference the pod template within Jenkins pipelines or job configurations.
Namespace: Specify cicd as the namespace where the Jenkins agents will be deployed within the Kubernetes cluster.
Labels: Set jenkins-agent as the label. This is a key identifier that Jenkins jobs will use to select this pod template when running builds.
Add a Container: In this part of the configuration, you define the container that will run inside the pod created from the pod template.
NOTE: This needs to point to the Docker image built in this step: Building the Jenkins Agent Image
Name: jnlp. This is a conventional name for a Jenkins agent container that uses the JNLP (Java Network Launch Protocol) for the master-agent communication.
Working directory: /home/jenkins/agent. This is the directory inside the container where Jenkins will execute the build steps.
Run in Privileged Mode: This is an advanced container setting that allows processes within the container to execute with elevated privileges, similar to the root user on a Linux system.
To select "Run in Privileged Mode" in Jenkins Kubernetes plugin:
Image Pull Secret: needs to be set to frmpullsecret (see the screenshot below).
In the ImagePullSecrets section, enter the secret name in the Name field. This secret should already exist within the same namespace as where your Jenkins builder pods are running. The value of the secret is frmpullsecret.
By properly configuring image pull secrets in your Jenkins Kubernetes pod templates, you enable Jenkins to pull the necessary private images to run your builds within the Kubernetes cluster. Without these secrets, the image pull would fail, and your builds would not be able to run.
Passwords: These passwords can be found in your Kubernetes Cluster Secrets, which are autogenerated when the HELM installations are carried out.
Multiple ArangoDB passwords and endpoints: The reason we have different names and passwords for ArangoDB is to keep things organized and safe. Each name points to a different part of the database where different information is kept. Just like having different keys for different rooms. This is useful when you have more than one ArangoDB running at the same time and you want to keep them separate. This way, you can connect to just the part you need.
If you have a single database instance, you may be wondering why multiple password variants are needed. For example, if my Configuration, Pseudonyms and TransactionHistory databases are served from the same Arango instance, why must I include single quotes in their password input, whereas that requirement was not needed for the ArangoPassword variable?
The ArangoPassword variable is utilised as a CLI argument by newman for setting up the environment. Where it is called, there is some shell substitution of the ArangoPassword variable, but because the substitution involves a special character, $, it has to be surrounded by quotes.
newman {omitted} "arangoPassword=${ArangoPassword}" --disable-unicode
The same reasoning applies to the passwords that are explicitly stated to need single quotes around them, as they are substituted as-is into the processors' environments. This means that if your password contains special characters, you must use single quotes so the decoder interprets them as raw strings; otherwise they will be taken as an indication of substitution.
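As a simple illustration of why the quotes matter, consider a hypothetical password containing a $ when the variable's value is expanded in a shell context:
# Entered without quotes, $word is treated as a shell substitution (and usually expands to nothing):
arangoPassword=p@ss$word        # the processor/CLI would receive "p@ss"
# Entered with single quotes, the value is passed through as a raw string:
arangoPassword='p@ss$word'      # the processor/CLI receives "p@ss$word"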
APMActive: A flag to enable or disable Application Performance Monitoring.
ArangoPassword: A secret password required for accessing the Database, which is used for populating the Arango configuration.
ArangoConfigurationURL: Endpoint for the ArangoDB configuration Database.
ArangoConfigurationPassword: A secret password required for accessing the Database. NB: The single quotes need to be added with your password.
ArangoDbURL: Endpoint for the ArangoDB configuration Database.
ArangoDbPassword: A secret password required for accessing the Database. NB: The single quotes need to be added with your password.
ArangoPseudonymsURL: Endpoint for the ArangoDB pseudonym Database.
ArangoPseudonymsPassword: A secret password required for accessing the Database. NB: The single quotes need to be added with your password.
ArangoTransactionHistoryURL: Endpoint for the ArangoDB transaction history Database.
ArangoTransactionHistoryPassword: A secret password required for accessing the Database. NB: The single quotes need to be added with your password.
ArangoEvaluationURL: Endpoint for the ArangoDB evaluation Database.
ArangoEvaluationPassword: A secret password required for accessing the Database. NB: The single quotes need to be added with your password.
Branch: The specific branch in source control that the deployment should target.
CacheEnabled: A flag to enable or disable caching.
CacheTTL: Time-to-live for the cache in seconds.
ELASTIC_HOST: The hostname for the Elasticsearch service.
ELASTIC_USERNAME: The username for accessing Elasticsearch.
ELASTIC_PASSWORD: The password for accessing Elasticsearch.
ELASTIC_SEARCH_VERSION: The version of Elasticsearch in use.
EnableQuoting: A flag to enable or disable quoting functionality, adding Pain messages to the chain.
envName: The environment name for the deployment.
FLUSHBYTES: The byte threshold for flushing data.
ImageRepository: The repository for Docker images.
LogLevel: The verbosity level of logging. NB: The single quotes need to be added in.
MaxCPU: The maximum CPU resource limit.
NATS_SERVER_TYPE: The type of NATS server in use.
NATS_SERVER_URL: The URL for the NATS server.
RedisCluster: A flag to indicate if Redis is running in cluster mode.
RedisPassword: The password for accessing Redis.
RedisServers: The hostname for the Redis Cluster service. NB: The single quotes need to be added to the host string.
Repository: This parameter specifies the name of a repository.
Obtain the jobs.zip file, which contains job configuration files that you need to add to your Jenkins instance.
Unzip the jobs.zip file and note where you unpack it.
Change into that directory with cd <path to configuration>, where <path to configuration> is a placeholder for the actual directory path where your unzipped jobs.zip files are located, eg: cd "C:\Documents\tasks\Jenkins\jobs"
Use the kubectl cp command to copy the job configurations from your local machine to the Jenkins pod running in your Kubernetes cluster:
kubectl cp . <name of pod>:/var/jenkins_home/jobs/ -n cicd
<name of pod> is a placeholder for the actual name of your Jenkins pod. Replace it with the correct pod name, which you can find by running kubectl get pods -n cicd. The -n cicd flag specifies the namespace where your Jenkins is deployed, which in this case is cicd. The job configurations are copied into the /var/jenkins_home/jobs/ directory.
Restart Jenkins so that it picks up the imported jobs:
kubectl rollout restart deployment <jenkins-deployment-name> -n cicd
Replace <jenkins-deployment-name> with the actual deployment name of your Jenkins instance. Alternatively, you can restart Jenkins from the UI via the safeRestart endpoint, eg: http://localhost:52933/safeRestart
The process involves configuring Jenkins to deploy various processors into the Tazama cluster. These processors are essential components of the system and require specific configurations, such as database connections and service endpoints, to function correctly.
Dashboard → Deployments → ArangoDB
Run the Create Arango Setup and then the Populate Arango Configuration jobs to populate ArangoDB with the correct configuration required by the system. These jobs utilize the variables set in the global configuration to connect to ArangoDB and perform the necessary setup.
After importing the Jenkins jobs, you need to configure each job with the appropriate credentials and Kubernetes server endpoint details. This setup is crucial to ensure that each job has the necessary permissions and access to interact with other services and the Kubernetes cluster.
Open each job and select configure to be able to edit it.
Set $Repository for rule-processors to tazama-lf. It's okay to hardcode this.
NOTE: The Kubernetes server endpoint can be copied from your .kubeconfig file under cluster -> server.
Use the token for authenticating Jenkins with Kubernetes services in case you used the credentials ID described in this doc.
By completing these steps, you ensure that each Jenkins job can access the necessary repositories and services with the correct permissions and interact with your Kubernetes cluster using the right endpoints and credentials. It's essential to review and verify these settings regularly, especially after any changes to the credentials or infrastructure.
Dashboard → Deployments → Pipelines → Deploying All Rules and Rule Processors
Run the Jenkins jobs that deploy the processors to the Tazama cluster. These jobs will reference the global environment variables you've configured, ensuring that each processor has the required connections and configurations.
Run the Deploying All Rules and Rule Processors Pipeline Job
Dashboard → Testing → E2E Test
The "E2E Test" job in Jenkins is an essential component for ensuring the integrity and robustness of the platform. It is specifically designed to perform comprehensive end-to-end testing, replicating user behaviors and interactions to verify that every facet of the platform functions together seamlessly.
To resolve this issue, you would need to:
Ensure the tlscomsecret secret contains the necessary TLS certificates and keys.
Add the tlscomsecret to the development namespace, if it's not already present.
To address the network access error encountered when deploying containers that require communication with arango.development.svc, follow these steps:
Verify that the network policies and service discovery configurations are correctly set up within your cluster to allow connectivity to arango.development.svc.
If your deployment is within a Kubernetes environment and you're using network namespaces, consider enabling the Host Network option. This grants the pod access to the host machine's network stack, which can be necessary if the service is only resolvable or accessible in the host's network:
Enable the Host Network option only if arango.development.svc is only available on the host's network. Implementing these steps should help in resolving connectivity issues related to the arango.development.svc hostname not being found, facilitating successful POST requests to the specified endpoints.
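A minimal sketch of what enabling the host network looks like in a deployment manifest; the deployment and image shown are hypothetical, and only the hostNetwork and dnsPolicy fields are the relevant part:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-processor
  namespace: processor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-processor
  template:
    metadata:
      labels:
        app: example-processor
    spec:
      hostNetwork: true                    # share the node's network stack
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS resolution while on the host network
      containers:
        - name: example
          image: <your-processor-image>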
If you are experiencing problems with your Kubernetes pods that may be related to environmental variables or configuration issues, such as frequent restarts or failed connections to services like ArangoDB, follow these steps to troubleshoot and resolve the issue:
List the pods with the kubectl get pods command. Take note of any pods that are in a CrashLoopBackOff state or that are frequently restarting.
Use kubectl describe pod <pod-name> to get more details about the pod's state and events that might indicate what is causing the restarts.
Use kubectl logs <pod-name> to look for any error messages or stack traces that could point to a configuration problem or a missing environment variable.
By carefully checking your Jenkins environment variables and ensuring the ArangoDB configuration is correct, you can resolve issues leading to pod instability and ensure that your services run smoothly in the Kubernetes environment.
When encountering authentication errors during a Jenkins build process that involve Kubernetes plugin issues or Docker image push failures, follow these troubleshooting steps:
Check the build log for a NullPointerException, which is often due to missing or improperly configured credentials within Jenkins. This could be an issue with the Kubernetes plugin configuration where a required value is not being set, resulting in a null being passed where an object is expected.
By following these steps, you can address the authentication issues that are causing the Jenkins build process to fail, ensuring a successful connection to Kubernetes and Docker registry services.
If for some reason the Jenkins agent starts up on your Kubernetes instance and then terminates and restarts, you might need to change the frmpullsecret in the cicd namespace to use .dockerconfigjson data instead of the AWS data.
Or run this command to retrieve your token:
aws ecr get-login-password --region <region of the ECR>
The above command will print out a token; copy that token, which is used as your password.
Docker Config JSON: Understanding the auth Field
The auth field in the .dockerconfigjson file is a base64-encoded string that combines your Docker registry username and password in the format username:password. Here's how you can construct it:
Steps to Construct the auth Field
Combine the Username and Password: Format the string as username:password. For example, your username is AWS and your password is yourpassword.
Base64 Encode the String: You can use a command-line tool like base64 or an online base64 encoder to encode the string.
Using a command-line tool:
echo -n 'AWS:yourpassword' | base64
This will produce a base64-encoded string, which you then place in the auth field.
Here is an example of what the .dockerconfigjson data in the secret file might look like after encoding:
{"auths":{"registory":{"username":"AWS","password":"token","email":"no@email.local","auth":"QVdTOnlvdXJwYXNzd29yZA=="}}}
Please see the example below:
apiVersion: v1
kind: Secret
metadata:
name: frmpullsecret
namespace: cicd
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: >-
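Rather than hand-building the JSON, the same secret can be generated by kubectl, which assembles the .dockerconfigjson for you; the registry URL and region below are placeholders to replace with your own:
kubectl create secret docker-registry frmpullsecret \
  --docker-server=123456789012.dkr.ecr.<region>.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$(aws ecr get-login-password --region <region>)" \
  --docker-email=no@email.local \
  --namespace cicd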
With the Helm charts and Jenkins jobs successfully executed, your Tazama (Real-time Monitoring System) should now be operational within your Kubernetes cluster. This comprehensive setup leverages the robust capabilities of Kubernetes orchestrated by Jenkins automation to ensure a seamless deployment process.
As you navigate through the use and potential customization of the Tazama system, keep in mind the importance of maintaining the configurations as documented in this guide. Regularly update your environment variables, manage your credentials securely, and ensure that the pipeline scripts are kept up-to-date with any changes in your infrastructure or workflows.
Should you encounter any issues or have questions regarding the installation and configuration of the Tazama system, support is readily available. You can reach out via email or join the dedicated Slack workspace for collaborative troubleshooting and community support.
For direct assistance:
Joining the FRMS CoE workspace on Slack will connect you with a community of experts and peers who can offer insights and help you leverage the full potential of your FRMS system. Always ensure that you are working within secure communication channels and handling sensitive information with care.