exasol / exasol-testcontainers

Test container for Exasol on Docker
MIT License

Tests freeze during `mvn package` #6

Closed. AnastasiiaSergienko closed this issue 4 years ago.

AnastasiiaSergienko commented 4 years ago

Problem

I ran the build on the command line with `mvn package`, and the process froze during the test stage. It simply stopped, and nothing happened for more than 15 minutes until I interrupted it by hand.

I think we should have a timeout for the tests, so that a hang like this fails the build instead of blocking it indefinitely (see the sketch after the log excerpt below). Part of the logs:

```
2019-11-15 10:41:05.363 [INFO ] STDOUT: FILE_SYNC: Starting partition with shell and command 'if [ -e /exa/etc/EXAConf ]; then chmod --reference=/exa/etc/EXAConf /exa/etc/EXAConf_0.11.1573810864.28 && chown --reference=/exa/etc/EXAConf /exa/etc/EXAConf_0.11.1573810864.28; fi 2>&1' on all nodes. 2019-11-15 10:41:05.363 [INFO ] STDOUT: FILE_SYNC: Moving file '/exa/etc/EXAConf_0.11.1573810864.28' to '/exa/etc/EXAConf'. 2019-11-15 10:41:05.363 [INFO ] STDOUT: FILE_SYNC: Starting partition with shell and command 'mv -f /exa/etc/EXAConf_0.11.1573810864.28 /exa/etc/EXAConf 2>&1' on all nodes. 2019-11-15 10:41:05.363 [INFO ] STDOUT: [2019-11-15 10:41:04] stage2: Run service slave_unblock 2019-11-15 10:41:05.363 [INFO ] STDOUT: [2019-11-15 10:41:04] stage2: Run service next_stage 2019-11-15 10:41:05.364 [INFO ] STDOUT: [2019-11-15 10:41:04] stage2: Current node id is '11' 2019-11-15 10:41:05.364 [INFO ] STDOUT: [2019-11-15 10:41:04] stage2: Skipping LVM device initialization due to device_type 'file' 2019-11-15 10:41:05.364 [INFO ] STDOUT: [2019-11-15 10:41:04] stage2: All nodes are online: ('n11',) 2019-11-15 10:41:05.364 [INFO ] STDOUT: [2019-11-15 10:41:04] stage2: Current SSL configuration: 2019-11-15 10:41:05.364 [INFO ] STDOUT: [2019-11-15 10:41:04] stage2: >>> CA cert: /exa/etc/ssl/ssl.ca 2019-11-15 10:41:05.364 [INFO ] STDOUT: [2019-11-15 10:41:04] stage2: >>> server cert: /exa/etc/ssl/ssl.crt 2019-11-15 10:41:05.364 [INFO ] STDOUT: [2019-11-15 10:41:04] stage2: >>> server key: /exa/etc/ssl/ssl.key 2019-11-15 10:41:05.364 [INFO ] STDOUT: [2019-11-15 10:41:04] stage2: Current node is a master node. 2019-11-15 10:41:05.364 [INFO ] STDOUT: [2019-11-15 10:41:04] stage2: Next stage will be 'stage3'. 2019-11-15 10:41:05.364 [INFO ] STDOUT: [2019-11-15 10:41:04] stage2: sshd was started with PID 4 2019-11-15 10:41:05.364 [INFO ] STDOUT: [2019-11-15 10:41:04] stage2: SSH keys for following users was generated: 0, 500 2019-11-15 10:41:05.364 [INFO ] STDOUT: [2019-11-15 10:41:04] stage2: rsyslogd was started with PID 5 2019-11-15 10:41:05.365 [INFO ] STDOUT: [2019-11-15 10:41:04] stage2: crond was started with PID 6 2019-11-15 10:41:05.365 [INFO ] STDOUT: [2019-11-15 10:41:04] stage2: None was started with PID 7 2019-11-15 10:41:05.365 [INFO ] STDOUT: [2019-11-15 10:41:04] stage2: Merged the EXAConf of 1 nodes. 2019-11-15 10:41:05.365 [INFO ] STDOUT: [2019-11-15 10:41:04] stage2: Next stage will be 'stage3'. 
2019-11-15 10:41:05.365 [INFO ] STDOUT: [2019-11-15 10:41:04] stage3: Output redirected to '/exa/logs/cored/exainit.log' 2019-11-15 10:41:05.365 [INFO ] STDOUT: [2019-11-15 10:41:04] stage3: Run stage 'stage3' 2019-11-15 10:41:05.365 [INFO ] STDOUT: [2019-11-15 10:41:04] stage3: Ignore service environment_conf 2019-11-15 10:41:05.365 [INFO ] STDOUT: [2019-11-15 10:41:04] stage3: Ignore service rc_local 2019-11-15 10:41:05.365 [INFO ] STDOUT: [2019-11-15 10:41:04] stage3: Ignore service hugepages 2019-11-15 10:41:05.365 [INFO ] STDOUT: [2019-11-15 10:41:04] stage3: Run service node_options 2019-11-15 10:41:05.365 [INFO ] STDOUT: [2019-11-15 10:41:04] stage3: Run service prepare_update 2019-11-15 10:41:05.365 [INFO ] STDOUT: [2019-11-15 10:41:04] stage3: Run service sshd 2019-11-15 10:41:05.365 [INFO ] STDOUT: [2019-11-15 10:41:04] stage3: Run service storaged_conf 2019-11-15 10:41:05.366 [INFO ] STDOUT: [2019-11-15 10:41:04] stage3: Run service databases_conf 2019-11-15 10:41:05.366 [INFO ] STDOUT: [2019-11-15 10:41:04] stage3: Run service wait_for_nodes 2019-11-15 10:41:05.366 [INFO ] STDOUT: [2019-11-15 10:41:04] stage3: Run service bucketfs 2019-11-15 10:41:05.366 [INFO ] STDOUT: [2019-11-15 10:41:04] stage3: Run service storaged_upgrade 2019-11-15 10:41:05.366 [INFO ] STDOUT: [2019-11-15 10:41:04] stage3: Run service next_stage_for_slave 2019-11-15 10:41:05.366 [INFO ] STDOUT: Waiting for 0 slave nodes to reach the barrier (13001) 2019-11-15 10:41:05.366 [INFO ] STDOUT: [2019-11-15 10:41:04] stage3: Run service logd 2019-11-15 10:41:05.374 [INFO ] STDOUT: [2019-11-15 10:41:04] stage3: Run service lockd 2019-11-15 10:41:05.374 [INFO ] STDOUT: [2019-11-15 10:41:04] stage3: Run service dwad 2019-11-15 10:41:05.374 [INFO ] STDOUT: [2019-11-15 10:41:04] Started /bin/sh with PID:410 UID:0 GID:0 Part:10 Node:0 2019-11-15 10:41:05.375 [INFO ] STDOUT: [2019-11-15 10:41:04] child 410 (Part:10 Node:0 sh) returned with state 0. 2019-11-15 10:41:05.375 [INFO ] STDOUT: [2019-11-15 10:41:04] Started /bin/sh with PID:413 UID:0 GID:0 Part:11 Node:0 2019-11-15 10:41:05.375 [INFO ] STDOUT: [2019-11-15 10:41:04] child 413 (Part:11 Node:0 sh) returned with state 0. 2019-11-15 10:41:05.375 [INFO ] STDOUT: [2019-11-15 10:41:04] Started /usr/opt/EXASuite-6/EXAClusterOS-6.2.2/libexec/bucketfsd with PID:430 UID:0 GID:0 Part:12 Node:0 2019-11-15 10:41:05.375 [INFO ] STDOUT: [2019-11-15 10:41:04] Started /usr/opt/EXASuite-6/EXAClusterOS-6.2.2/libexec/logd with PID:431 UID:0 GID:0 Part:13 Node:0 2019-11-15 10:41:05.376 [INFO ] STDOUT: [2019-11-15 10:41:04] Started /usr/opt/EXASuite-6/EXAClusterOS-6.2.2/libexec/lockd with PID:432 UID:0 GID:0 Part:14 Node:0 2019-11-15 10:41:05.377 [INFO ] STDOUT: [2019-11-15 10:41:05] Started /usr/opt/EXASuite-6/EXAClusterOS-6.2.2/libexec/dwad with PID:436 UID:0 GID:0 Part:15 Node:0 2019-11-15 10:41:07.364 [INFO ] STDOUT: [2019-11-15 10:41:07] stage3: Run service storaged 2019-11-15 10:41:07.365 [INFO ] STDOUT: [2019-11-15 10:41:07] Started /usr/opt/EXASuite-6/EXAClusterOS-6.2.2/bin/csctrl with PID:479 UID:0 GID:0 Part:16 Node:0 2019-11-15 10:41:07.366 [INFO ] STDOUT: [2019-11-15 10:41:07] Started /usr/opt/EXASuite-6/EXAClusterOS-6.2.2/libexec/cos_storage with PID:486 UID:0 GID:0 Part:17 Node:0 2019-11-15 10:41:07.367 [INFO ] STDOUT: [2019-11-15 10:41:07] child 479 (Part:16 Node:0 csctrl) returned with state 0. 2019-11-15 10:41:07.368 [INFO ] STDOUT: [2019-11-15 10:41:07] Force new cluster synchronization upon user request from partition 17. 
2019-11-15 10:41:07.368 [INFO ] STDOUT: [2019-11-15 10:41:07] Force new cluster configuration upon user request on node n11. 2019-11-15 10:41:08.364 [INFO ] STDOUT: [2019-11-15 10:41:07] Received transitional Config Change Message: rid=4 seq=74 conf=7 < ip://n11:10001> 2019-11-15 10:41:08.366 [INFO ] STDOUT: [2019-11-15 10:41:07] Received regular Config Change Message: rid=8 seq=0 conf=7 < ip://n11:10001> 2019-11-15 10:41:08.367 [INFO ] STDOUT: [2019-11-15 10:41:07] Self information: NID = 11, address = ip://n11:10001; event_id: 43; no. of cluster nodes: 1 2019-11-15 10:41:08.368 [INFO ] STDOUT: [2019-11-15 10:41:07] 11: ip://n11:10001 (available) usage: 9 2019-11-15 10:41:08.368 [INFO ] STDOUT: [2019-11-15 10:41:07] Partition 4 contains 1 nodes: [ 11 ] 2019-11-15 10:41:08.369 [INFO ] STDOUT: [2019-11-15 10:41:07] Partition 5 contains 1 nodes: [ 11 ] 2019-11-15 10:41:08.370 [INFO ] STDOUT: [2019-11-15 10:41:07] Partition 6 contains 1 nodes: [ 11 ] 2019-11-15 10:41:08.370 [INFO ] STDOUT: [2019-11-15 10:41:07] Partition 7 contains 1 nodes: [ 11 ] 2019-11-15 10:41:08.371 [INFO ] STDOUT: [2019-11-15 10:41:07] Partition 12 contains 1 nodes: [ 11 ] 2019-11-15 10:41:08.372 [INFO ] STDOUT: [2019-11-15 10:41:07] Partition 13 contains 1 nodes: [ 11 ] 2019-11-15 10:41:08.373 [INFO ] STDOUT: [2019-11-15 10:41:07] Partition 14 contains 1 nodes: [ 11 ] 2019-11-15 10:41:08.373 [INFO ] STDOUT: [2019-11-15 10:41:07] Partition 15 contains 1 nodes: [ 11 ] 2019-11-15 10:41:08.375 [INFO ] STDOUT: [2019-11-15 10:41:07] Partition 17 contains 1 nodes: [ 11 ] 2019-11-15 10:41:08.376 [INFO ] STDOUT: [2019-11-15 10:41:07] Registered local processes: 2019-11-15 10:41:08.379 [INFO ] STDOUT: [2019-11-15 10:41:07] PID: 204 (PID: 0, NID: 11) 2019-11-15 10:41:08.380 [INFO ] STDOUT: [2019-11-15 10:41:07] PID: 388 (PID: 4, NID: 0) 2019-11-15 10:41:08.381 [INFO ] STDOUT: [2019-11-15 10:41:07] PID: 390 (PID: 5, NID: 0) 2019-11-15 10:41:08.381 [INFO ] STDOUT: [2019-11-15 10:41:07] PID: 391 (PID: 6, NID: 0) 2019-11-15 10:41:08.382 [INFO ] STDOUT: [2019-11-15 10:41:07] PID: 395 (PID: 7, NID: 0) 2019-11-15 10:41:08.382 [INFO ] STDOUT: [2019-11-15 10:41:07] PID: 430 (PID: 12, NID: 0) 2019-11-15 10:41:08.383 [INFO ] STDOUT: [2019-11-15 10:41:07] PID: 431 (PID: 13, NID: 0) 2019-11-15 10:41:08.383 [INFO ] STDOUT: [2019-11-15 10:41:07] PID: 432 (PID: 14, NID: 0) 2019-11-15 10:41:08.384 [INFO ] STDOUT: [2019-11-15 10:41:07] PID: 436 (PID: 15, NID: 0) 2019-11-15 10:41:08.384 [INFO ] STDOUT: [2019-11-15 10:41:07] PID: 486 (PID: 17, NID: 0) 2019-11-15 10:41:08.385 [INFO ] STDOUT: [2019-11-15 10:41:07] Active Nodes: 1 - ip://n11:10001 2019-11-15 10:41:08.385 [INFO ] STDOUT: [2019-11-15 10:41:07] Nodes of last Configuration: 1 - ip://n11:10001 2019-11-15 10:41:08.385 [INFO ] STDOUT: [2019-11-15 10:41:07] Config Change completed. 2019-11-15 10:41:08.386 [INFO ] STDOUT: [2019-11-15 10:41:07] Self information: NID = 11, address = ip://n11:10001; event_id: 54; no. 
of cluster nodes: 1 2019-11-15 10:41:08.387 [INFO ] STDOUT: [2019-11-15 10:41:07] 11: ip://n11:10001 (available) usage: 9 2019-11-15 10:41:08.387 [INFO ] STDOUT: [2019-11-15 10:41:07] Partition 4 contains 1 nodes: [ 11 ] 2019-11-15 10:41:08.387 [INFO ] STDOUT: [2019-11-15 10:41:07] Partition 5 contains 1 nodes: [ 11 ] 2019-11-15 10:41:08.388 [INFO ] STDOUT: [2019-11-15 10:41:07] Partition 6 contains 1 nodes: [ 11 ] 2019-11-15 10:41:08.388 [INFO ] STDOUT: [2019-11-15 10:41:07] Partition 7 contains 1 nodes: [ 11 ] 2019-11-15 10:41:08.388 [INFO ] STDOUT: [2019-11-15 10:41:07] Partition 12 contains 1 nodes: [ 11 ] 2019-11-15 10:41:08.391 [INFO ] STDOUT: [2019-11-15 10:41:07] Partition 13 contains 1 nodes: [ 11 ] 2019-11-15 10:41:08.391 [INFO ] STDOUT: [2019-11-15 10:41:07] Partition 14 contains 1 nodes: [ 11 ] 2019-11-15 10:41:08.392 [INFO ] STDOUT: [2019-11-15 10:41:07] Partition 15 contains 1 nodes: [ 11 ] 2019-11-15 10:41:08.392 [INFO ] STDOUT: [2019-11-15 10:41:07] Partition 17 contains 1 nodes: [ 11 ] 2019-11-15 10:41:08.392 [INFO ] STDOUT: [2019-11-15 10:41:07] Registered local processes: 2019-11-15 10:41:08.393 [INFO ] STDOUT: [2019-11-15 10:41:07] PID: 204 (PID: 0, NID: 11) 2019-11-15 10:41:08.393 [INFO ] STDOUT: [2019-11-15 10:41:07] PID: 388 (PID: 4, NID: 0) 2019-11-15 10:41:08.394 [INFO ] STDOUT: [2019-11-15 10:41:07] PID: 390 (PID: 5, NID: 0) 2019-11-15 10:41:08.394 [INFO ] STDOUT: [2019-11-15 10:41:07] PID: 391 (PID: 6, NID: 0) 2019-11-15 10:41:08.395 [INFO ] STDOUT: [2019-11-15 10:41:07] PID: 395 (PID: 7, NID: 0) 2019-11-15 10:41:08.395 [INFO ] STDOUT: [2019-11-15 10:41:07] PID: 430 (PID: 12, NID: 0) 2019-11-15 10:41:08.398 [INFO ] STDOUT: [2019-11-15 10:41:07] PID: 431 (PID: 13, NID: 0) 2019-11-15 10:41:08.404 [INFO ] STDOUT: [2019-11-15 10:41:07] PID: 432 (PID: 14, NID: 0) 2019-11-15 10:41:08.405 [INFO ] STDOUT: [2019-11-15 10:41:07] PID: 436 (PID: 15, NID: 0) 2019-11-15 10:41:08.406 [INFO ] STDOUT: [2019-11-15 10:41:07] PID: 486 (PID: 17, NID: 0) 2019-11-15 10:41:10.363 [INFO ] STDOUT: [2019-11-15 10:41:10] stage3: Run service confd 2019-11-15 10:41:11.363 [INFO ] STDOUT: [2019-11-15 10:41:10] stage3: Run service slave_unblock 2019-11-15 10:41:11.364 [INFO ] STDOUT: [2019-11-15 10:41:10] stage3: Run service next_stage 2019-11-15 10:41:11.364 [INFO ] STDOUT: [2019-11-15 10:41:10] stage3: Current node id is '11' 2019-11-15 10:41:11.364 [INFO ] STDOUT: [2019-11-15 10:41:10] stage3: sshd was not started: ERROR 02310: [cosif_resource] binary is already executed in another partition 2019-11-15 10:41:11.364 [INFO ] STDOUT: [2019-11-15 10:41:10] stage3: storaged config file genarated here: '/exa/etc/cos_storage.conf' 2019-11-15 10:41:11.364 [INFO ] STDOUT: [2019-11-15 10:41:10] stage3: Database configuration: nsexec(0106775), /usr/opt/mountjail, /exa/logs/db/DB1 2019-11-15 10:41:11.364 [INFO ] STDOUT: [2019-11-15 10:41:10] stage3: All nodes are online: ('n11',) 2019-11-15 10:41:11.364 [INFO ] STDOUT: [2019-11-15 10:41:10] stage3: bucketfs was not started 2019-11-15 10:41:11.365 [INFO ] STDOUT: [2019-11-15 10:41:10] stage3: Current node is a master node. 2019-11-15 10:41:11.365 [INFO ] STDOUT: [2019-11-15 10:41:10] stage3: Next stage will be 'stage4'. 
2019-11-15 10:41:11.365 [INFO ] STDOUT: [2019-11-15 10:41:10] stage3: logd was started with PID 13 2019-11-15 10:41:11.365 [INFO ] STDOUT: [2019-11-15 10:41:10] stage3: lockd was started with PID 14 2019-11-15 10:41:11.365 [INFO ] STDOUT: [2019-11-15 10:41:10] stage3: dwad was started with PID 15 2019-11-15 10:41:11.365 [INFO ] STDOUT: [2019-11-15 10:41:10] stage3: storaged was started with PID 17 2019-11-15 10:41:11.365 [INFO ] STDOUT: [2019-11-15 10:41:10] stage3: EXAStorage devices added: /exa/data/storage/dev.1 2019-11-15 10:41:11.365 [INFO ] STDOUT: [2019-11-15 10:41:10] stage3: confd was started with PID 18 2019-11-15 10:41:11.366 [INFO ] STDOUT: [2019-11-15 10:41:10] stage3: Next stage will be 'stage4'. 2019-11-15 10:41:11.366 [INFO ] STDOUT: [2019-11-15 10:41:10] stage4: Output redirected to '/exa/logs/cored/exainit.log' 2019-11-15 10:41:11.366 [INFO ] STDOUT: [2019-11-15 10:41:10] stage4: Run stage 'stage4' 2019-11-15 10:41:11.366 [INFO ] STDOUT: [2019-11-15 10:41:10] stage4: Ignore service environment_conf 2019-11-15 10:41:11.366 [INFO ] STDOUT: [2019-11-15 10:41:10] stage4: Ignore service rc_local 2019-11-15 10:41:11.368 [INFO ] STDOUT: [2019-11-15 10:41:10] stage4: Run service node_options 2019-11-15 10:41:11.368 [INFO ] STDOUT: [2019-11-15 10:41:10] stage4: Run service prepare_update 2019-11-15 10:41:11.368 [INFO ] STDOUT: [2019-11-15 10:41:10] stage4: Run service wait_for_nodes 2019-11-15 10:41:11.368 [INFO ] STDOUT: [2019-11-15 10:41:10] stage4: Run service storaged_ephemeral 2019-11-15 10:41:11.368 [INFO ] STDOUT: [2019-11-15 10:41:10] Started /usr/opt/EXASuite-6/EXAClusterOS-6.2.2/libexec/confd with PID:522 UID:0 GID:0 Part:18 Node:0 2019-11-15 10:41:13.364 [INFO ] STDOUT: [2019-11-15 10:41:12] stage4: Run service next_stage_for_slave 2019-11-15 10:41:13.364 [INFO ] STDOUT: Waiting for 0 slave nodes to reach the barrier (13002) 2019-11-15 10:41:13.364 [INFO ] STDOUT: [2019-11-15 10:41:12] stage4: Run service storage_volumes 2019-11-15 10:41:13.364 [INFO ] STDOUT: [2019-11-15 10:41:12] stage4: Run service databases 2019-11-15 10:41:13.364 [INFO ] STDOUT: [2019-11-15 10:41:13] stage4: Run service slave_unblock 2019-11-15 10:41:13.364 [INFO ] STDOUT: [2019-11-15 10:41:13] stage4: Run service next_stage 2019-11-15 10:41:13.364 [INFO ] STDOUT: [2019-11-15 10:41:13] stage4: Current node id is '11' 2019-11-15 10:41:13.364 [INFO ] STDOUT: [2019-11-15 10:41:13] stage4: All nodes are online: ('n11',) 2019-11-15 10:41:13.364 [INFO ] STDOUT: [2019-11-15 10:41:13] stage4: No ephemeral disks configured. 2019-11-15 10:41:13.364 [INFO ] STDOUT: [2019-11-15 10:41:13] stage4: Current node is a master node. 2019-11-15 10:41:13.365 [INFO ] STDOUT: [2019-11-15 10:41:13] stage4: Next stage will be 'stage5'. 2019-11-15 10:41:13.365 [INFO ] STDOUT: [2019-11-15 10:41:13] stage4: Volumes created: DataVolume1:0 2019-11-15 10:41:13.365 [INFO ] STDOUT: [2019-11-15 10:41:13] stage4: The following databases were created: DB1 2019-11-15 10:41:13.365 [INFO ] STDOUT: [2019-11-15 10:41:13] stage4: Next stage will be 'stage5'. 
2019-11-15 10:41:13.365 [INFO ] STDOUT: [2019-11-15 10:41:13] stage5: Output redirected to '/exa/logs/cored/exainit.log' 2019-11-15 10:41:13.365 [INFO ] STDOUT: [2019-11-15 10:41:13] stage5: Run stage 'stage5' 2019-11-15 10:41:13.365 [INFO ] STDOUT: [2019-11-15 10:41:13] stage5: Ignore service environment_conf 2019-11-15 10:41:13.365 [INFO ] STDOUT: [2019-11-15 10:41:13] stage5: Ignore service rc_local 2019-11-15 10:41:13.365 [INFO ] STDOUT: [2019-11-15 10:41:13] stage5: Run service node_options 2019-11-15 10:41:13.365 [INFO ] STDOUT: [2019-11-15 10:41:13] stage5: Run service prepare_update 2019-11-15 10:41:13.365 [INFO ] STDOUT: [2019-11-15 10:41:13] stage5: Run service wait_for_nodes 2019-11-15 10:41:13.365 [INFO ] STDOUT: [2019-11-15 10:41:13] stage5: Current node id is '11' 2019-11-15 10:41:13.366 [INFO ] STDOUT: [2019-11-15 10:41:13] stage5: All nodes are online: ('n11',) 2019-11-15 10:41:13.366 [INFO ] STDOUT: [2019-11-15 10:41:13] stage5: All stages finished. 2019-11-15 10:41:13.366 [INFO ] STDOUT: [2019-11-15 10:41:13] Started /usr/opt/EXASuite-6/EXAClusterOS-6.2.2/bin/dwa_wrapper with PID:572 UID:500 GID:500 Part:19 Node:0 2019-11-15 10:41:13.366 [INFO ] STDOUT: [2019-11-15 10:41:13] Started /usr/opt/EXASuite-6/EXASolution-6.2.2/bin/pddserver with PID:609 UID:500 GID:500 Part:20 Node:0 2019-11-15 10:41:14.364 [INFO ] STDOUT: [2019-11-15 10:41:13] Started /usr/opt/EXASuite-6/EXASolution-6.2.2/bin/objectserver with PID:703 UID:500 GID:500 Part:21 Node:0 2019-11-15 10:41:14.364 [INFO ] STDOUT: [2019-11-15 10:41:14] Started /usr/opt/EXASuite-6/EXASolution-6.2.2/bin/exasqlinit with PID:730 UID:500 GID:500 Part:22 Node:0 2019-11-15 10:41:14.364 [INFO ] STDOUT: [2019-11-15 10:41:14] root child 204 (exainit.py) returned with state 0. 2019-11-15 10:41:18.365 [INFO ] STDOUT: [2019-11-15 10:41:17] child 730 (Part:22 Node:0 exasqlinit) returned with state 0. 2019-11-15 10:41:18.365 [INFO ] STDOUT: [2019-11-15 10:41:17] Started /usr/opt/EXASuite-6/EXASolution-6.2.2/bin/exasqllog with PID:753 UID:500 GID:500 Part:23 Node:0 2019-11-15 10:41:29.366 [INFO ] STDOUT: [2019-11-15 10:41:28] Started /usr/opt/EXASuite-6/EXASolution-6.2.2/bin/loaderd with PID:785 UID:500 GID:500 Part:24 Node:0 2019-11-15 10:41:29.366 [INFO ] STDOUT: [2019-11-15 10:41:29] Started /usr/opt/EXASuite-6/EXASolution-6.2.2/bin/exaetl with PID:806 UID:500 GID:500 Part:25 Node:0 2019-11-15 10:41:29.366 [INFO ] STDOUT: [2019-11-15 10:41:29] Started /usr/opt/EXASuite-6/EXASolution-6.2.2/bin/exacs with PID:807 UID:500 GID:500 Part:26 Node:0 2019-11-15 10:41:30.366 [INFO ] STDOUT: [2019-11-15 10:41:29] Started /usr/opt/EXASuite-6/EXASolution-6.2.2/bin/exasql with PID:850 UID:500 GID:500 Part:27 Node:0
```

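For illustration, here is a minimal sketch of what such a timeout could look like, assuming JUnit 5 with the Testcontainers JUnit extension and that `ExasolContainer` behaves like a standard Testcontainers `JdbcDatabaseContainer` (so `withStartupTimeout` and `createConnection` are available). The class name, image tag, and durations are only placeholders, not the project's actual test code:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.sql.Connection;
import java.time.Duration;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.Timeout;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

import com.exasol.containers.ExasolContainer;

// Hypothetical example: bound both the container startup and each test method,
// so a hanging cluster boot fails the build instead of blocking it forever.
@Testcontainers
class TimeoutExampleIT {
    @Container // started and stopped by the Testcontainers JUnit 5 extension
    private static final ExasolContainer<?> CONTAINER = new ExasolContainer<>("exasol/docker-db:6.2.2-d1")
            .withStartupTimeout(Duration.ofMinutes(10)); // placeholder tag; give up instead of waiting forever

    @Test
    @Timeout(300) // seconds: fail this test method if it runs longer
    void connectsToRunningDatabase() throws Exception {
        try (final Connection connection = CONTAINER.createConnection("")) {
            assertTrue(connection.isValid(5)); // simple smoke check on the open connection
        }
    }
}
```
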
redcatbear commented 4 years ago

Fixed with #24 and #26.