codefromthecrypt opened 7 years ago
Setting up everything is actually not that easy :) you need Spark, Elasticsearch, Kafka (which needs ZooKeeper), Zipkin, and then.. this job!
Here's what I'm doing now..
Start Kafka

```bash
rm -rf kafka_*/logs /tmp/kafka-logs/ /tmp/zookeeper/
nohup bash -c "cd kafka_* && bin/zookeeper-server-start.sh config/zookeeper.properties >/dev/null 2>&1 &"
nohup bash -c "cd kafka_* && bin/kafka-server-start.sh config/server.properties >/dev/null 2>&1 &"
kafka-topics.sh --create --zookeeper 127.0.0.1:2181 --replication-factor 1 --partitions 1 --topic zipkin
```
Start Spark

```bash
rm -rf spark-1.6.3-bin-hadoop2.6/work spark-1.6.3-bin-hadoop2.6/logs
(cd spark-1.6.3-bin-hadoop2.6 && ./sbin/start-master.sh)
(cd spark-1.6.3-bin-hadoop2.6 && ./sbin/start-slave.sh spark://127.0.0.1:7077)
```
Start Elasticsearch

```bash
rm -r elasticsearch-5.0.0/data/
elasticsearch-5.0.0/bin/elasticsearch
```
Start Zipkin

```bash
STORAGE_TYPE=elasticsearch ES_HOSTS=http://127.0.0.1:9200 java -jar zipkin.jar
```
Start the Spark job

```bash
java -jar ./sparkstreaming-job/target/zipkin-sparkstreaming-job-*.jar \
  --zipkin.storage.type=elasticsearch \
  --zipkin.storage.elasticsearch.hosts=http://127.0.0.1:9200 \
  --zipkin.sparkstreaming.stream.kafka.bootstrap-servers=127.0.0.1:9092 \
  --zipkin.sparkstreaming.master=spark://127.0.0.1:7077
```
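Each component above depends on the previous one being up before the next can connect. A small stdlib-only sketch (hypothetical helper, not part of the job) can poll each port instead of guessing with sleeps; the demo below waits on a port we open ourselves so it runs anywhere:

```python
import socket
import time


def wait_for_port(host, port, timeout=60.0):
    """Poll until host:port accepts TCP connections, or raise on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.5)
    raise RuntimeError("timed out waiting for %s:%s" % (host, port))


# Demo against a port we control so the sketch runs without the stack; with
# the real setup you would wait on 2181 (ZooKeeper), 9092 (Kafka), 7077
# (Spark master), 9200 (Elasticsearch) and 9411 (Zipkin), in that order.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
probe.listen(1)
print(wait_for_port(*probe.getsockname()))  # prints True
probe.close()
```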
Throw some spans into Kafka

Ex. running this, or something else:
```python
#!/usr/bin/env python
import os
import time

# One trace with six annotations; trace/span ids and timestamps are filled
# in per message.
data = """[{
  "traceId": "%x",
  "name": "fermentum",
  "id": "%x",
  "annotations": [
    {"timestamp": %i, "value": "sr", "endpoint": {"serviceName": "semper", "ipv4": "113.29.89.129", "port": 2131}},
    {"timestamp": %i, "value": "sagittis", "endpoint": {"serviceName": "semper", "ipv4": "113.29.89.129", "port": 2131}},
    {"timestamp": %i, "value": "montes", "endpoint": {"serviceName": "semper", "ipv4": "113.29.89.129", "port": 2131}},
    {"timestamp": %i, "value": "augue", "endpoint": {"serviceName": "semper", "ipv4": "113.29.89.129", "port": 2131}},
    {"timestamp": %i, "value": "malesuada", "endpoint": {"serviceName": "semper", "ipv4": "113.29.89.129", "port": 2131}},
    {"timestamp": %i, "value": "ss", "endpoint": {"serviceName": "semper", "ipv4": "113.29.89.129", "port": 2131}}
  ],
  "binaryAnnotations": [
    {"key": "mollis", "value": "hendrerit", "endpoint": {"serviceName": "semper", "ipv4": "113.29.89.129", "port": 2131}}
  ]
}]"""


def main():
    count = 100
    # The console producer sends one message per line, so collapse the
    # template onto a single line.
    one_line = data.replace(' ', '').replace('\n', '')
    cmd = 'kafka-console-producer.sh --broker-list localhost:9092 --topic zipkin'
    pipe = os.popen(cmd, 'w')
    for i in range(1, count + 1):
        timestamp = int(time.time() * 10 ** 6)  # epoch microseconds
        pipe.write(one_line % (i, i, timestamp, timestamp, timestamp,
                               timestamp, timestamp, timestamp) + "\r\n")
    print('finished!')
    pipe.close()


if __name__ == '__main__':
    main()
```
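Since the script strips all whitespace before substituting values, it's easy to break the payload silently. A minimal stdlib-only sanity check (abbreviated here to a single annotation) confirms the collapsed template still renders valid JSON with microsecond timestamps:

```python
import json
import time

# Abbreviated version of the template above: one annotation instead of six.
template = """[{ "traceId": "%x", "name": "fermentum", "id": "%x",
  "annotations": [ { "timestamp": %i, "value": "sr",
    "endpoint": { "serviceName": "semper", "ipv4": "113.29.89.129", "port": 2131 } } ] }]"""

one_line = template.replace(' ', '').replace('\n', '')
timestamp = int(time.time() * 10 ** 6)  # Zipkin v1 spans use epoch microseconds
rendered = one_line % (1, 1, timestamp)

spans = json.loads(rendered)       # raises ValueError if the collapse broke it
assert spans[0]["traceId"] == "1"  # %x renders the id as lowercase hex
assert len(str(spans[0]["annotations"][0]["timestamp"])) == 16
print("payload ok")
```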
Look at zipkin to see if there are recent spans

http://localhost:9411
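Instead of eyeballing the UI, the same check can be scripted against Zipkin's HTTP API (assuming the v1 `/api/v1/services` endpoint). The fake server below just stands in for a real Zipkin so the sketch runs without the full stack; against the real setup you'd point it at http://localhost:9411:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


def zipkin_services(base_url="http://localhost:9411"):
    """List the service names Zipkin has seen (v1 API path assumed)."""
    with urlopen(base_url + "/api/v1/services") as resp:
        return json.load(resp)


# Stand-in for a real Zipkin, answering with one known service name.
class FakeZipkin(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(["semper"]).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass


server = HTTPServer(("127.0.0.1", 0), FakeZipkin)
threading.Thread(target=server.serve_forever, daemon=True).start()
print(zipkin_services("http://127.0.0.1:%d" % server.server_port))  # ['semper']
server.shutdown()
```

If the producer worked end-to-end, a real Zipkin should list "semper" the same way.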
cc @openzipkin/devops-tooling @bsideup in case anyone is interested or has time to help