sahu04 / reactssr


s3 bucket #1

Open sahu04 opened 10 months ago

sahu04 commented 10 months ago

Approach

Manually configure:

  1. AWS MSK cluster backup into the same AWS account
  2. AWS MSK cluster backup into another AWS account
  3. Restore of data from the S3 bucket to the MSK cluster in the same account

Prerequisites

  1. AWS account
  2. AWS CLI installed
  3. VPC, subnets, route table, and security group
  4. MSK cluster
  5. EC2 instance with Java and Kafka installed (see the bootstrap sketch after this list)
  6. IAM roles for the client and for backup
  7. S3 bucket
  8. MSK Connect custom plugin
  9. MSK Connect connector
  10. VPC endpoint
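
A minimal sketch of the client bootstrap from prerequisite 5, assuming an Amazon Linux 2 instance and plaintext (port 9092) access to the cluster; the bootstrap address and topic name are placeholders:

```bash
# Hypothetical bootstrap for the Amazon Linux 2 client instance.
# The Kafka version matches the cluster (2.8.1).
sudo yum -y install java-11            # install Java 11
wget https://archive.apache.org/dist/kafka/2.8.1/kafka_2.12-2.8.1.tgz
tar -xzf kafka_2.12-2.8.1.tgz          # unpack the Kafka CLI tools
cd kafka_2.12-2.8.1

# Create the backup topic. BOOTSTRAP is the MSK bootstrap-server string
# from the cluster's "View client information" page -- a placeholder here.
BOOTSTRAP="b-1.example.kafka.us-east-1.amazonaws.com:9092"
bin/kafka-topics.sh --create --topic msk-backup-topic \
  --bootstrap-server "$BOOTSTRAP" --replication-factor 2 --partitions 1
```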

Process

1. AWS MSK cluster backup into the same AWS account

  1. Create an S3 bucket in the main AWS account.
  2. Create an IAM role for the S3 bucket and attach a policy granting object access.
  3. Create a provisioned MSK cluster attached to the VPC, subnets (in different AZs), and security group. Broker type: kafka.t3.small; Kafka version: 2.8.1 (recommended).
  4. Create a custom plugin: download the S3 sink connector (https://www.confluent.io/hub/confluentinc/kafka-connect-s3), upload the zip file to the S3 bucket, then register it under MSK Connect > Custom plugins > S3 bucket.
  5. Create a VPC endpoint with service name com.amazonaws.us-east-1.s3 and type Gateway.
  6. Create a client machine (EC2 instance) with the IAM role. Instance type: t3a.small; AMI: Amazon Linux 2 AMI (HVM), Kernel 5.10, SSD Volume Type. Connect to the EC2 terminal, then install Kafka, the AWS CLI, and Java, and create a topic.
  7. Create the connector: MSK Connect > Connectors > choose the existing plugin > add the connector configuration (a sketch follows this list) > attach the IAM role for the S3 bucket.
  8. Send backup data to the S3 bucket: create a producer and send some messages. The MSK cluster data then appears in the bucket under the topic folder.
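
Step 7 pastes a connector configuration into the MSK Connect console. A sketch of the Confluent S3 sink settings is below; the bucket name, region, and topic are placeholders, and flush.size=1 is only sensible for testing:

```bash
# Hypothetical S3 sink configuration for step 7 (pasted into the
# MSK Connect "Connector configuration" box; shown here as a heredoc).
cat > s3-sink-connector.properties <<'EOF'
connector.class=io.confluent.connect.s3.S3SinkConnector
tasks.max=1
topics=msk-backup-topic
s3.region=us-east-1
s3.bucket.name=my-msk-backup-bucket
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.json.JsonFormat
partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner
flush.size=1
EOF
```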

2. AWS MSK cluster backup into another AWS account

  1. Create an S3 bucket in account B and attach a bucket policy (a policy sketch follows this list).
  2. In account A, create an IAM role for the S3 bucket and change its trust policy to allow cross-account access.
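
A sketch of the account-B bucket policy from step 1, granting account A's connector role access; the account ID, role name, and bucket name are placeholders:

```bash
# Hypothetical cross-account bucket policy for the account-B bucket.
cat > cross-account-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:role/msk-connect-s3-role"
      },
      "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::account-b-msk-backup",
        "arn:aws:s3:::account-b-msk-backup/*"
      ]
    }
  ]
}
EOF
aws s3api put-bucket-policy \
  --bucket account-b-msk-backup \
  --policy file://cross-account-policy.json
```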

3. Restore data from the S3 bucket to the AWS MSK cluster in the same AWS account

  1. Download the S3 source connector (https://www.confluent.io/hub/confluentinc/kafka-connect-s3-source) and upload it to the S3 bucket.
  2. Create a custom plugin: MSK Connect > Custom plugins > choose the S3 bucket.
  3. Create the connector: MSK Connect > Connectors > choose the existing plugin > add the connector configuration > attach the IAM role for the S3 bucket.

Note: the connector is currently failing; a configuration and log-check sketch follows this list.
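
For the failing restore connector, a configuration sketch and a log check may help narrow things down. The property names follow the Confluent S3 source connector documentation but are assumptions for this setup (the connector is also Confluent-licensed, which is a common cause of failures after the trial period); the log group name is a placeholder:

```bash
# Hypothetical S3 source configuration for step 3 of the restore.
cat > s3-source-connector.properties <<'EOF'
connector.class=io.confluent.connect.s3.source.S3SourceConnector
tasks.max=1
s3.region=us-east-1
s3.bucket.name=my-msk-backup-bucket
format.class=io.confluent.connect.s3.format.json.JsonFormat
mode=RESTORE_BACKUP
topics.dir=topics
EOF

# If the connector still fails, inspect its CloudWatch logs (assumes log
# delivery was enabled when the connector was created).
aws logs tail /msk-connect/s3-source-connector --follow
```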

sahu04 commented 6 months ago

Description: SonarQube is a static code analysis tool that helps identify bugs, vulnerabilities, and code smells in software projects. It provides detailed reports and insights to improve code quality and maintainability throughout the development lifecycle. Configure SonarQube in Azure Pipelines for automated code scanning: install SonarQube, integrate it into the pipeline, and analyze code quality continuously to catch issues early and ensure high-quality code.

Approach:

  1. Install the SonarQube server on an Ubuntu VM (VMware).
  2. Install Java.
  3. Integrate SonarQube into the Azure DevOps pipeline project (a minimal scanner invocation is sketched after this list).
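
A minimal sketch of the pipeline's analysis step, assuming a self-hosted server at http://sonarqube.internal:9000 and a user token; the project key and sources path are placeholders. In Azure Pipelines this would typically run inside a script step (or via the SonarQube extension tasks):

```bash
# Hypothetical scanner invocation for the Azure DevOps pipeline (step 3).
# sonar.token works on recent scanner versions; older ones use sonar.login.
sonar-scanner \
  -Dsonar.projectKey=my-project \
  -Dsonar.sources=. \
  -Dsonar.host.url=http://sonarqube.internal:9000 \
  -Dsonar.token="$SONAR_TOKEN"
```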

Business value:

  1. SonarQube improves code quality by identifying and addressing bugs, security vulnerabilities, and code smells.
  2. It reduces technical debt by catching issues early in the development process, minimizing future maintenance efforts.
  3. Enhanced security is achieved through early detection and remediation of vulnerabilities, reducing the risk of security breaches.
  4. Early issue resolution with SonarQube saves costs that would otherwise be spent on post-release bug fixes.

Acceptance criteria:

  1. SonarQube is successfully integrated into Azure Pipelines.
  2. Code analysis is performed automatically as part of the pipeline execution.
  3. SonarQube generates detailed reports on code quality, security vulnerabilities, and bugs.
  4. The reports provide actionable insights for developers to address identified issues.
  5. The SonarQube dashboard displays accurate and up-to-date information on code health.
  6. Continuous monitoring and improvement of code quality are ensured through scheduled pipeline executions.