Each machine node runs a node_mgr server instance, which manages the Kunlun storage nodes (SN) and/or computing nodes (CN) running on that server. It starts/stops/creates/drops/backs up/restores a CN/SN in response to requests initiated by computing nodes and/or the cluster_mgr module.
Test clients: http_post_para.cc, http_post_file.cc
./http_post_para http://127.0.0.1:9998 "{\"key\":\"value\"}"
./http_post_file http://127.0.0.1:9998 "{\"key\":\"value\"}" ./test.json
curl can also be used, but curl cannot send a parameter and a file together:
curl -d "{\"ver\":\"0.1\",\"job_type\":\"get_node\",\"node_type\":\"meta_node\"}" http://127.0.0.1:9998/
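Sending a parameter and a file together takes one multipart/form-data POST, which is what http_post_file does. The sketch below is a minimal runnable illustration; the multipart field names ("para", "file") are assumptions for illustration, not the verified wire format of http_post_file.cc, and a tiny in-process HTTP server stands in for node_mgr so the example runs on its own.

```python
# Sketch: POST a JSON parameter and a file in one multipart request.
# Field names "para" and "file" are ASSUMED; check http_post_file.cc
# for the real format expected by node_mgr.
import json, threading, uuid
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

received = {}

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Capture the raw multipart body so we can inspect it.
        received["body"] = self.rfile.read(int(self.headers["Content-Length"]))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # stand-in for node_mgr
threading.Thread(target=server.serve_forever, daemon=True).start()

boundary = uuid.uuid4().hex
param = json.dumps({"ver": "0.1", "job_type": "get_node"})
file_bytes = b'{"example": true}'  # contents of ./test.json, say

# Build the multipart body by hand: one text part, one file part.
body = (
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="para"\r\n\r\n'
    f"{param}\r\n"
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="file"; filename="test.json"\r\n'
    "Content-Type: application/json\r\n\r\n"
).encode() + file_bytes + f"\r\n--{boundary}--\r\n".encode()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/",
    data=body,
    headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
)
resp = urllib.request.urlopen(req)
reply = resp.read()  # server reply
server.shutdown()
print(reply)
```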
Get meta_node/storage_node/computer_node from cluster_mgr, filtered by local IPs:
{"ver":"0.1","job_type":"get_node","node_type":"meta_node","node_ip0":"127.0.0.1","node_ip1":"192.168.207.145"}
All nodes can be fetched by omitting node_type and node_ip:
{"ver":"0.1","job_type":"get_node"}
curl -d "{\"ver\":\"0.1\",\"job_type\":\"get_node\",\"node_type\":\"meta_node\"}" http://127.0.0.1:9998/
./http_post_para http://127.0.0.1:9998 "{\"ver\":\"0.1\",\"job_type\":\"get_node\",\"node_type\":\"meta_node\"}"
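The curl and http_post_para calls above both just POST the job JSON as the request body. A minimal Python equivalent is sketched below; the echo server stands in for node_mgr so the sketch is runnable, and the exact shape of node_mgr's real reply is not assumed here.

```python
# Sketch: Python equivalent of `curl -d '<json>' <url>` for node_mgr jobs.
# The EchoHandler below is a stand-in for node_mgr: it just echoes the job
# back, which is NOT what the real server replies with.
import json, threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)  # echo the job back
    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def send_job(url, job):
    """POST a job dict as a JSON body and decode the JSON reply."""
    req = urllib.request.Request(url, data=json.dumps(job).encode())
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

job = {"ver": "0.1", "job_type": "get_node", "node_type": "meta_node"}
reply = send_job(f"http://127.0.0.1:{server.server_port}/", job)
server.shutdown()
print(reply)
```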
curl -d "{\"ver\":\"0.1\",\"job_type\":\"get_info\"}" http://127.0.0.1:9998/
./http_post_para http://127.0.0.1:9998 "{\"ver\":\"0.1\",\"job_type\":\"get_info\"}"
Node info updates are notified by cluster_mgr; node_mgr then fetches all nodes from cluster_mgr.
{"ver":"0.1","job_type":"update_node"}
Delete a table or binlog:
{"ver":"0.1","job_id":"id123456","job_type":"delete","file_type":"table", "s_ip":"127.0.0.1","s_port":6007,"s_user":"pgx","s_pwd":"pgx_pwd","sdbname":"postgres$$_public","s_table_name":"t13"}
{"ver":"0.1","job_id":"id123456","job_type":"delete","file_type":"binlog", "s_ip":"127.0.0.1","s_port":6007, "s_user":"pgx","s_pwd":"pgx_pwd"}
Send a file (here a table) to another MySQL instance:
{"ver":"0.1","job_id":"id123456","job_type":"send","file_type":"table", "s_ip":"127.0.0.1","s_port":6004,"s_user":"pgx","s_pwd":"pgx_pwd","sdbname":"postgres$$_public","s_table_name":"t12", "d_ip":"127.0.0.1","d_port":6007,"d_user":"pgx","d_pwd":"pgx_pwd","ddbname":"postgres$$_public","d_table_name":"t17"}
Send a command for the node to run in the mysql/pgsql/cluster_mgr directory:
{"ver":"0.1","job_type":"mysql_cmd","cmd":"./startmysql.sh 6001"}
{"ver":"0.1","job_type":"pgsql_cmd","cmd":"python2 start_pg.py port=5401"}
{"ver":"0.1","job_type":"cluster_cmd","cmd":"bash -x bin/cluster_mgr_safe --debug --pidfile=run.pid cluster_mgr.cnf"}
curl -d "{\"ver\":\"0.1\",\"job_type\":\"mysql_cmd\",\"cmd\":\"./stopmysql.sh 6007\"}" http://127.0.0.1:9998
curl -d "{\"ver\":\"0.1\",\"job_type\":\"mysql_cmd\",\"cmd\":\"./startmysql.sh 6007\"}" http://127.0.0.1:9998
./http_post_para http://127.0.0.1:9998 "{\"ver\":\"0.1\",\"job_type\":\"mysql_cmd\",\"cmd\":\"./stopmysql.sh 6007\"}"
./http_post_para http://127.0.0.1:9998 "{\"ver\":\"0.1\",\"job_type\":\"mysql_cmd\",\"cmd\":\"./startmysql.sh 6007\"}"
Get job status:
{"ver":"0.1","job_id":"id123456","job_type":"get_status"}
Stop auto pullup for a number of minutes before stopping a node; otherwise node_mgr will pull the node back up soon.
{"ver":"0.1","job_id":"id123456","job_type":"auto_pullup","minutes":10,"port":6001}
"minutes":-1 : stop auto pullup permanently
"minutes":0 : start auto pullup
"minutes":m : stop auto pullup for m minutes
If the JSON omits the "port" item, the change applies to all ports.
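The minutes semantics above can be sketched as a small decision function. This is an illustrative model of the described behavior, not node_mgr's actual implementation; the function name and the "seconds since the request" bookkeeping are assumptions.

```python
# Sketch of the auto-pullup rules described above (illustrative only).
def pullup_enabled(minutes_setting, set_time, now):
    """Should node_mgr auto-pull-up a stopped node?

    minutes_setting: the "minutes" value from the auto_pullup job.
    set_time / now:  Unix timestamps (seconds); the pause starts when
                     the job was received.
    """
    if minutes_setting == -1:      # stop auto pullup permanently
        return False
    if minutes_setting == 0:       # start auto pullup
        return True
    # stop auto pullup for m minutes, then resume automatically
    return now - set_time >= minutes_setting * 60

# e.g. with "minutes":10, pullup is off 5 minutes in, back on after 11.
print(pullup_enabled(10, 0, 5 * 60), pullup_enabled(10, 0, 11 * 60))
```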
Start a cold backup; it backs up MySQL and saves the result to HDFS:
{"ver":"0.1","job_id":"id123456","job_type":"coldbackup","s_ip":"127.0.0.1","s_port":6004,"s_user":"pgx","s_pwd":"pgx_pwd"}
Start a cold restore; it restores MySQL from HDFS:
{"ver":"0.1","job_id":"id123456","job_type":"coldrestore","s_ip":"127.0.0.1","s_port":6004,"s_user":"pgx","s_pwd":"pgx_pwd", "file":"/coldbackup/clust1/shard1/coldback_20211122_234303.tgz"}