christopherkeller opened this issue 5 years ago
The connector has been uploaded into an S3 bucket, but IAM work is still needed on each EC2 instance to make this seamless. The manual steps after uploading into S3 are listed below.
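For reference, a minimal sketch of the fetch step the instructions below assume. The bucket name here is hypothetical (the actual S3 location is site-specific), and pulling the object is exactly what the instance's IAM role needs `s3:GetObject` for:

```shell
# Hypothetical bucket name -- substitute the real connector bucket.
BUCKET="my-connector-bucket"
ZIP="confluentinc-kafka-connect-elasticsearch-5.3.1.zip"

# Fetch the connector archive onto the instance; requires an instance
# profile granting s3:GetObject on the bucket (the IAM work noted above).
# Guarded so the sketch is a no-op where the AWS CLI is not installed.
if command -v aws >/dev/null 2>&1; then
  aws s3 cp "s3://${BUCKET}/${ZIP}" "/tmp/${ZIP}"
fi
```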
```
sudo yum install -y unzip
sudo unzip /tmp/confluentinc-kafka-connect-elasticsearch-5.3.1.zip -d /usr/share/java
sudo chown -R kafka:kafka /usr/share/java/confluentinc-kafka-connect-elasticsearch-5.3.1
sudo systemctl restart apache-kafka-connect
```

(The restart seems unnecessary, as Connect picked up the connector instantly.) Verify the connector is running:

```
curl http://`ifconfig eth1 | grep -e 'inet\s' | awk '{ print $2 }'`:8083/connector-plugins
```
On the Ansible host, provision connectors as usual:

```
AWS_PROFILE=slower ec2_region=us-east-2 ./deploy connect $platform_root/ucsd/inventory/hostsfile-us-east-2-demo.none demo aws us-east-2 development centos $platform_root/ucsd $key_path
```
Confluent supports Elasticsearch sinks with a library package; we should install it by default for apache mode.
https://www.confluent.io/hub/confluentinc/kafka-connect-elasticsearch
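Once the plugin is installed, a sink instance still has to be registered through the Connect REST API. A minimal sketch, assuming a worker on `localhost:8083` and a local Elasticsearch on `:9200`; the connector name, topic, and URLs are placeholders, not values from this deployment:

```shell
# Hypothetical sink config -- name, topic, and connection.url are placeholders.
cat > /tmp/es-sink.json <<'EOF'
{
  "name": "es-sink-demo",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "tasks.max": "1",
    "topics": "demo-topic",
    "connection.url": "http://localhost:9200",
    "key.ignore": "true",
    "schema.ignore": "true"
  }
}
EOF

# Submit to the Connect REST API; guarded so this is a no-op when no
# worker is reachable (e.g. when dry-running the script).
if curl -sf http://localhost:8083/ >/dev/null; then
  curl -s -X POST -H "Content-Type: application/json" \
    --data @/tmp/es-sink.json http://localhost:8083/connectors
fi
```

The same `/connectors` endpoint with a GET lists registered instances, which complements the `/connector-plugins` check above.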