openethereum / parity-deploy

Parity deployment script.
Apache License 2.0

Unable to connect ethstats #40

Closed ThomasBouquet95 closed 6 years ago

ThomasBouquet95 commented 6 years ago

Hi all, I am trying to deploy a network of 3 peers with ethstats. I entered these commands:

./clean.sh
docker-compose down 
./parity-deploy.sh --config aura --nodes 3 --ethstats
docker-compose up -d

Then, when I go to the page "http://localhost:3001/", I don't see my nodes. Is there a problem with the ethstats config in parity-deploy?

Thanks!

ddorgan commented 6 years ago

If you check the docker-compose logs and search for dashboard, do you see entries like the ones below?

dashboard_1  | 2018-02-19 12:30:02.361 [API] [STA] Stats from: host2
dashboard_1  | 2018-02-19 12:30:02.368 [API] [HIS] Got history from: host2
dashboard_1  | 2018-02-19 12:30:02.443 [API] [STA] Stats from: host3
dashboard_1  | 2018-02-19 12:30:02.446 [API] [STA] Stats from: host1
dashboard_1  | 2018-02-19 12:30:05.369 [API] [HIS] Got history from: host2
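
One way to check, as a rough sketch assuming you run it from the directory containing docker-compose.yml:

docker-compose logs | grep -i dashboard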
ThomasBouquet95 commented 6 years ago

No, I don't have those.

It seems there is a problem with my nodes, but I didn't modify the config file...

host4        | Loading config file from /parity/authority.toml
host4        | You might have supplied invalid parameters in config file.
host4        | invalid type: string "all", expected a sequence for key `rpc.cors`
host3        | Loading config file from /parity/authority.toml
host3        | You might have supplied invalid parameters in config file.
host3        | invalid type: string "all", expected a sequence for key `rpc.cors`
host2        | Loading config file from /parity/authority.toml
host2        | You might have supplied invalid parameters in config file.
host2        | invalid type: string "all", expected a sequence for key `rpc.cors`
host4 exited with code 2
ddorgan commented 6 years ago

What's the parity version on the local system? Also, can you move the deployment directory away (or delete it), run the parity-deploy.sh command again, and see if docker-compose up works?
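
A minimal sketch of that procedure, assuming the generated directory is called deployment and reusing the same flags as before:

mv deployment deployment.bak   # or: rm -rf deployment
./parity-deploy.sh --config aura --nodes 3 --ethstats
docker-compose up -d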

ThomasBouquet95 commented 6 years ago

My parity version: Parity/v1.9.2-beta-0feb0bb-20180201/x86_64-linux-gnu/rustc1.23.0. I removed the file and ran:

jassu@jassu-ThinkPad-T431s:~/Documents/test/parity-deploy$ sudo su

root@jassu-ThinkPad-T431s:/home/jassu/Documents/test/parity-deploy# ./parity-deploy.sh --config aura --nodes 3 --ethstats
Requirement already satisfied: docker-compose in /usr/lib/python2.7/dist-packages
Requirement already satisfied: ipaddress>=1.0.16 in /usr/lib/python2.7/dist-packages (from docker-compose)
Requirement already satisfied: enum34<2,>=1.0.4 in /usr/lib/python2.7/dist-packages (from docker-compose)
Requirement already satisfied: backports.ssl_match_hostname>=3.5 in /usr/lib/python2.7/dist-packages (from docker-compose)
read EC key
read EC key
read EC key

Then I ran docker-compose up -d. Here are my logs:

monitor_1    | 
monitor_1    |                         -------------
monitor_1    | 
monitor_1    |                       PM2 process manager
monitor_1    | 
monitor_1    | __/\\\\\\\\\\\\\____/\\\\____________/\\\\____/\\\\\\\\\_____
monitor_1    |  _\/\\\/////////\\\_\/\\\\\\________/\\\\\\__/\\\///////\\\___
monitor_1    |   _\/\\\_______\/\\\_\/\\\//\\\____/\\\//\\\_\///______\//\\\__
monitor_1    |    _\/\\\\\\\\\\\\\/__\/\\\\///\\\/\\\/_\/\\\___________/\\\/___
monitor_1    |     _\/\\\/////////____\/\\\__\///\\\/___\/\\\________/\\\//_____
monitor_1    |      _\/\\\_____________\/\\\____\///_____\/\\\_____/\\\//________
monitor_1    |       _\/\\\_____________\/\\\_____________\/\\\___/\\\/___________
monitor_1    |        _\/\\\_____________\/\\\_____________\/\\\__/\\\\\\\\\\\\\\\_
monitor_1    |         _\///______________\///______________\///__\///////////////__
host3        | Loading config file from /parity/authority.toml
monitor_1    | 
host3        | You might have supplied invalid parameters in config file.
monitor_1    | 
host2        | Loading config file from /parity/authority.toml
host3        | invalid type: string "all", expected a sequence for key `rpc.cors`
monitor_1    |                        Getting started
dashboard_1  | npm info it worked if it ends with ok
host2        | You might have supplied invalid parameters in config file.
host3        | Loading config file from /parity/authority.toml
host1        | Loading config file from /parity/authority.toml
monitor_1    | 
dashboard_1  | npm info using npm@4.1.2
host2        | invalid type: string "all", expected a sequence for key `rpc.cors`
host3        | You might have supplied invalid parameters in config file.
host1        | You might have supplied invalid parameters in config file.
monitor_1    |                         Documentation
dashboard_1  | npm info using node@v7.7.3
host2        | Loading config file from /parity/authority.toml
host2        | You might have supplied invalid parameters in config file.
host1        | invalid type: string "all", expected a sequence for key `rpc.cors`
monitor_1    |                         http://pm2.io/
monitor_1    | 
host2        | invalid type: string "all", expected a sequence for key `rpc.cors`
host3        | invalid type: string "all", expected a sequence for key `rpc.cors`
monitor_1    |                       Start PM2 at boot
dashboard_1  | npm info lifecycle eth-netstats@0.0.9~prestart: eth-netstats@0.0.9
host1 exited with code 2
monitor_1    |                         $ pm2 startup
dashboard_1  | npm info lifecycle eth-netstats@0.0.9~start: eth-netstats@0.0.9
monitor_1    | 
dashboard_1  | 
monitor_1    |                      Daemonize Application
dashboard_1  | > eth-netstats@0.0.9 start /eth-netstats
monitor_1    |                        $ pm2 start <app>
dashboard_1  | > node ./bin/www
monitor_1    | 
dashboard_1  | 
dashboard_1  | npm info it worked if it ends with ok
monitor_1    |                      Monitoring/APM solution
dashboard_1  | npm info using npm@4.1.2
monitor_1    |                     https://app.keymetrics.io/
host2 exited with code 2
dashboard_1  | npm info using node@v7.7.3
monitor_1    | 
dashboard_1  | npm info lifecycle eth-netstats@0.0.9~prestart: eth-netstats@0.0.9
monitor_1    |                         -------------
dashboard_1  | npm info lifecycle eth-netstats@0.0.9~start: eth-netstats@0.0.9
monitor_1    | 
dashboard_1  | 
dashboard_1  | > eth-netstats@0.0.9 start /eth-netstats
dashboard_1  | > node ./bin/www
dashboard_1  | 
monitor_1    | [PM2] Spawning PM2 daemon with pm2_home=/home/ethnetintel/.pm2
host3 exited with code 2
monitor_1    | [PM2] PM2 Successfully daemonized
monitor_1    | [PM2][WARN] Applications host1, host2, host3 not running, starting...
monitor_1    | [PM2] App [host1] launched (1 instances)
monitor_1    | [PM2] App [host2] launched (1 instances)
monitor_1    | [PM2] App [host3] launched (1 instances)
monitor_1    | ┌──────────┬────┬──────┬─────┬────────┬─────────┬────────┬─────┬───────────┬──────────┐
monitor_1    | │ App name │ id │ mode │ pid │ status │ restart │ uptime │ cpu │ mem       │ watching │
monitor_1    | ├──────────┼────┼──────┼─────┼────────┼─────────┼────────┼─────┼───────────┼──────────┤
monitor_1    | │ host1    │ 0  │ fork │ 16  │ online │ 0       │ 0s     │ 76% │ 16.3 MB   │ enabled  │
monitor_1    | │ host2    │ 1  │ fork │ 22  │ online │ 0       │ 0s     │ 84% │ 20.1 MB   │ enabled  │
monitor_1    | │ host3    │ 2  │ fork │ 28  │ online │ 0       │ 0s     │ 56% │ 16.1 MB   │ enabled  │
monitor_1    | └──────────┴────┴──────┴─────┴────────┴─────────┴────────┴─────┴───────────┴──────────┘
monitor_1    |  Use `pm2 show <id|name>` to get more details about an app
monitor_1    | [PM2] Spawning PM2 daemon with pm2_home=/home/ethnetintel/.pm2
monitor_1    | [PM2] PM2 Successfully daemonized
monitor_1    | [PM2][WARN] Applications host1, host2, host3 not running, starting...
monitor_1    | [PM2] App [host1] launched (1 instances)
monitor_1    | [PM2] App [host2] launched (1 instances)
monitor_1    | [PM2] App [host3] launched (1 instances)
monitor_1    | ┌──────────┬────┬──────┬─────┬────────┬─────────┬────────┬─────┬───────────┬──────────┐
monitor_1    | │ App name │ id │ mode │ pid │ status │ restart │ uptime │ cpu │ mem       │ watching │
monitor_1    | ├──────────┼────┼──────┼─────┼────────┼─────────┼────────┼─────┼───────────┼──────────┤
monitor_1    | │ host1    │ 0  │ fork │ 17  │ online │ 0       │ 0s     │ 59% │ 17.5 MB   │ enabled  │
monitor_1    | │ host2    │ 1  │ fork │ 20  │ online │ 0       │ 0s     │ 91% │ 21.9 MB   │ enabled  │
monitor_1    | │ host3    │ 2  │ fork │ 27  │ online │ 0       │ 0s     │ 50% │ 16.8 MB   │ enabled  │
monitor_1    | └──────────┴────┴──────┴─────┴────────┴─────────┴────────┴─────┴───────────┴──────────┘
monitor_1    |  Use `pm2 show <id|name>` to get more details about an app
ddorgan commented 6 years ago

Just to check whether it is related to this: https://github.com/paritytech/parity/issues/7925 ... Could you try editing docker-compose.yml, changing the image from parity/parity:beta to parity/parity:stable, and see if that helps?
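
A minimal sketch of that change, assuming the beta tag appears literally as parity/parity:beta in docker-compose.yml (you can of course edit the file by hand instead):

sed -i 's|parity/parity:beta|parity/parity:stable|g' docker-compose.yml
docker-compose up -d --force-recreate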

ThomasBouquet95 commented 6 years ago

It does not solve the problem...

By the way, thank you very much for your help! I don't know what I would do alone...


Attaching to host2, host1, paritydeploy_dashboard_1, host3, paritydeploy_monitor_1
host1        | Loading config file from /parity/authority.toml
host1        | Invalid node address format given for a boot node: enode://4d6909f1076beb9fdd6cdbda75486fa36eee00a82a00b7ce1b97a2f730b3069d782708a22f3e08d060a92d0cb227ab2f36d155621351671ca7fc6381e0f2bb2b@host3:30303
host1 exited with code 1
dashboard_1  | npm info it worked if it ends with ok
dashboard_1  | npm info using npm@4.1.2
dashboard_1  | npm info using node@v7.7.3
dashboard_1  | npm info lifecycle eth-netstats@0.0.9~prestart: eth-netstats@0.0.9
dashboard_1  | npm info lifecycle eth-netstats@0.0.9~start: eth-netstats@0.0.9
dashboard_1  | 
dashboard_1  | > eth-netstats@0.0.9 start /eth-netstats
dashboard_1  | > node ./bin/www
dashboard_1  | 
host2        | Loading config file from /parity/authority.toml
host2        | You might have supplied invalid parameters in config file.
host2        | invalid type: string "all", expected a sequence for key `rpc.cors`
monitor_1    | 
monitor_1    |                         -------------
monitor_1    | 
monitor_1    |                       PM2 process manager
monitor_1    | 
monitor_1    | __/\\\\\\\\\\\\\____/\\\\____________/\\\\____/\\\\\\\\\_____
monitor_1    |  _\/\\\/////////\\\_\/\\\\\\________/\\\\\\__/\\\///////\\\___
host2 exited with code 2
monitor_1    |   _\/\\\_______\/\\\_\/\\\//\\\____/\\\//\\\_\///______\//\\\__
monitor_1    |    _\/\\\\\\\\\\\\\/__\/\\\\///\\\/\\\/_\/\\\___________/\\\/___
host3        | Loading config file from /parity/authority.toml
monitor_1    |     _\/\\\/////////____\/\\\__\///\\\/___\/\\\________/\\\//_____
host3        | You might have supplied invalid parameters in config file.
monitor_1    |      _\/\\\_____________\/\\\____\///_____\/\\\_____/\\\//________
host3        | invalid type: string "all", expected a sequence for key `rpc.cors`
monitor_1    |       _\/\\\_____________\/\\\_____________\/\\\___/\\\/___________
monitor_1    |        _\/\\\_____________\/\\\_____________\/\\\__/\\\\\\\\\\\\\\\_
monitor_1    |         _\///______________\///______________\///__\///////////////__
monitor_1    | 
monitor_1    | 
monitor_1    |                        Getting started
monitor_1    | 
monitor_1    |                         Documentation
monitor_1    |                         http://pm2.io/
monitor_1    | 
monitor_1    |                       Start PM2 at boot
monitor_1    |                         $ pm2 startup
monitor_1    | 
monitor_1    |                      Daemonize Application
monitor_1    |                        $ pm2 start <app>
monitor_1    | 
monitor_1    |                      Monitoring/APM solution
monitor_1    |                     https://app.keymetrics.io/
monitor_1    | 
monitor_1    |                         -------------
monitor_1    | 
monitor_1    | [PM2] Spawning PM2 daemon with pm2_home=/home/ethnetintel/.pm2
monitor_1    | [PM2] PM2 Successfully daemonized
monitor_1    | [PM2][WARN] Applications host1, host2, host3 not running, starting...
monitor_1    | [PM2] App [host1] launched (1 instances)
monitor_1    | [PM2] App [host2] launched (1 instances)
monitor_1    | [PM2] App [host3] launched (1 instances)
monitor_1    | ┌──────────┬────┬──────┬─────┬────────┬─────────┬────────┬─────┬───────────┬──────────┐
monitor_1    | │ App name │ id │ mode │ pid │ status │ restart │ uptime │ cpu │ mem       │ watching │
monitor_1    | ├──────────┼────┼──────┼─────┼────────┼─────────┼────────┼─────┼───────────┼──────────┤
monitor_1    | │ host1    │ 0  │ fork │ 19  │ online │ 0       │ 0s     │ 59% │ 15.8 MB   │ enabled  │
monitor_1    | │ host2    │ 1  │ fork │ 23  │ online │ 0       │ 0s     │ 65% │ 17.5 MB   │ enabled  │
monitor_1    | │ host3    │ 2  │ fork │ 32  │ online │ 0       │ 0s     │ 40% │ 14.9 MB   │ enabled  │
monitor_1    | └──────────┴────┴──────┴─────┴────────┴─────────┴────────┴─────┴───────────┴──────────┘
monitor_1    |  Use `pm2 show <id|name>` to get more details about an app
host3 exited with code 2
ddorgan commented 6 years ago

No problem at all! I think this should be resolved in the next beta release.

Can you edit the files deployment/[1][2][3]/authority.toml and change the line cors = "all" to cors = ["all"]?

Or just change it at the source (config/spec/authority_round.toml), run ./clean.sh, and then run your parity-deploy.sh command again.
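
A rough sketch of both options, assuming the generated configs live under deployment/<n>/authority.toml as described above and reusing the earlier flags:

# Option 1: patch the generated configs in place, then recreate the containers
sed -i 's/cors = "all"/cors = ["all"]/' deployment/*/authority.toml
docker-compose up -d --force-recreate

# Option 2: fix the source template and regenerate everything
sed -i 's/cors = "all"/cors = ["all"]/' config/spec/authority_round.toml
./clean.sh
./parity-deploy.sh --config aura --nodes 3 --ethstats
docker-compose up -d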

ThomasBouquet95 commented 6 years ago

It solves the problem for nodes 2 & 3, but not for node 1.

Attaching to paritydeploy_dashboard_1, host1, host3, host2, paritydeploy_monitor_1
dashboard_1  | npm info it worked if it ends with ok
dashboard_1  | npm info using npm@4.1.2
host3        | Loading config file from /parity/authority.toml
dashboard_1  | npm info using node@v7.7.3
host2        | Loading config file from /parity/authority.toml
monitor_1    | 
host1        | Loading config file from /parity/authority.toml
host3        | 2018-02-19 13:33:06 UTC Starting Parity/v1.9.2-beta-b60511e-20180214/x86_64-linux-gnu/rustc1.23.0
dashboard_1  | npm info lifecycle eth-netstats@0.0.9~prestart: eth-netstats@0.0.9
host2        | 2018-02-19 13:33:06 UTC Starting Parity/v1.9.2-beta-b60511e-20180214/x86_64-linux-gnu/rustc1.23.0
monitor_1    |                         -------------
host1        | You might have supplied invalid parameters in config file.
host3        | 2018-02-19 13:33:06 UTC Keys path /parity/data/keys/parity
dashboard_1  | npm info lifecycle eth-netstats@0.0.9~start: eth-netstats@0.0.9
host2        | 2018-02-19 13:33:06 UTC Keys path /parity/data/keys/parity
monitor_1    | 
host1        | invalid type: sequence, expected a string for key `rpc.cors`
host3        | 2018-02-19 13:33:06 UTC DB path /parity/data/chains/parity/db/0796728f28a12423
dashboard_1  | 
host2        | 2018-02-19 13:33:06 UTC DB path /parity/data/chains/parity/db/0796728f28a12423
monitor_1    |                       PM2 process manager
host3        | 2018-02-19 13:33:06 UTC Path to dapps /parity/data/dapps
dashboard_1  | > eth-netstats@0.0.9 start /eth-netstats
host2        | 2018-02-19 13:33:06 UTC Path to dapps /parity/data/dapps
monitor_1    | 
host3        | 2018-02-19 13:33:06 UTC State DB configuration: fast
dashboard_1  | > node ./bin/www
host2        | 2018-02-19 13:33:06 UTC State DB configuration: fast
monitor_1    | __/\\\\\\\\\\\\\____/\\\\____________/\\\\____/\\\\\\\\\_____
host3        | 2018-02-19 13:33:06 UTC Operating mode: active
dashboard_1  | 
host1 exited with code 2
host2        | 2018-02-19 13:33:06 UTC Operating mode: active
monitor_1    |  _\/\\\/////////\\\_\/\\\\\\________/\\\\\\__/\\\///////\\\___
host3        | 2018-02-19 13:33:06 UTC Configured for parity using AuthorityRound engine
dashboard_1  | 2018-02-19 13:33:09.530 [API] [CON] Connected host2
host2        | 2018-02-19 13:33:06 UTC Configured for parity using AuthorityRound engine
monitor_1    |   _\/\\\_______\/\\\_\/\\\//\\\____/\\\//\\\_\///______\//\\\__
host3        | 2018-02-19 13:33:12 UTC Public node URL: enode://db3b033de2acf7d4c82ac846d237db1e103b292760326a1629aea1de1e28666856a3a2fa1d201452d83bfad6125fbef395587b6b1617c9bac3f1df532dfef7bc@172.27.0.4:30303
dashboard_1  | 2018-02-19 13:33:09.539 [API] [STA] Stats from: host2
host2        | 2018-02-19 13:33:12 UTC Public node URL: enode://2f5d9c44dee9a6b52c63ab01acd26472d070f95526328aff14779bca46a74196e264477d9058d837b44f02b595902eae0bbbf48afeefbfe0e1a3fae08b7a3b23@172.27.0.3:30303
monitor_1    |    _\/\\\\\\\\\\\\\/__\/\\\\///\\\/\\\/_\/\\\___________/\\\/___
dashboard_1  | 2018-02-19 13:33:09.546 [API] [STA] Stats from: host2
monitor_1    |     _\/\\\/////////____\/\\\__\///\\\/___\/\\\________/\\\//_____
dashboard_1  | 2018-02-19 13:33:09.728 [API] [CON] Connected host3
monitor_1    |      _\/\\\_____________\/\\\____\///_____\/\\\_____/\\\//________
dashboard_1  | 2018-02-19 13:33:09.743 [API] [STA] Stats from: host3
monitor_1    |       _\/\\\_____________\/\\\_____________\/\\\___/\\\/___________
dashboard_1  | 2018-02-19 13:33:09.749 [API] [STA] Stats from: host3
monitor_1    |        _\/\\\_____________\/\\\_____________\/\\\__/\\\\\\\\\\\\\\\_
dashboard_1  | 2018-02-19 13:33:11.979 [API] [CON] Connection with: M6kHAF_ ended: undefined
monitor_1    |         _\///______________\///______________\///__\///////////////__
dashboard_1  | 2018-02-19 13:33:12.770 [API] [CON] Connected host2
monitor_1    | 
dashboard_1  | 2018-02-19 13:33:12.792 [API] [BLK] Block error: Block undefined
monitor_1    | 
dashboard_1  | 2018-02-19 13:33:12.798 [API] [STA] Stats from: host2
monitor_1    |                        Getting started
monitor_1    | 
monitor_1    |                         Documentation
monitor_1    |                         http://pm2.io/
monitor_1    | 
monitor_1    |                       Start PM2 at boot
monitor_1    |                         $ pm2 startup
monitor_1    | 
monitor_1    |                      Daemonize Application
monitor_1    |                        $ pm2 start <app>
monitor_1    | 
monitor_1    |                      Monitoring/APM solution
monitor_1    |                     https://app.keymetrics.io/
monitor_1    | 
monitor_1    |                         -------------
monitor_1    | 
monitor_1    | [PM2] Spawning PM2 daemon with pm2_home=/home/ethnetintel/.pm2
monitor_1    | [PM2] PM2 Successfully daemonized
monitor_1    | [PM2][WARN] Applications host1, host2, host3 not running, starting...
monitor_1    | [PM2] App [host1] launched (1 instances)
monitor_1    | [PM2] App [host2] launched (1 instances)
monitor_1    | [PM2] App [host3] launched (1 instances)
monitor_1    | ┌──────────┬────┬──────┬─────┬────────┬─────────┬────────┬─────┬───────────┬──────────┐
monitor_1    | │ App name │ id │ mode │ pid │ status │ restart │ uptime │ cpu │ mem       │ watching │
monitor_1    | ├──────────┼────┼──────┼─────┼────────┼─────────┼────────┼─────┼───────────┼──────────┤
monitor_1    | │ host1    │ 0  │ fork │ 16  │ online │ 0       │ 0s     │ 68% │ 15.6 MB   │ enabled  │
monitor_1    | │ host2    │ 1  │ fork │ 22  │ online │ 0       │ 0s     │ 86% │ 18.9 MB   │ enabled  │
monitor_1    | │ host3    │ 2  │ fork │ 28  │ online │ 0       │ 0s     │ 51% │ 15.5 MB   │ enabled  │
monitor_1    | └──────────┴────┴──────┴─────┴────────┴─────────┴────────┴─────┴───────────┴──────────┘
monitor_1    |  Use `pm2 show <id|name>` to get more details about an app
dashboard_1  | 2018-02-19 13:33:19.199 [API] [STA] Stats from: host3
dashboard_1  | 2018-02-19 13:33:22.752 [API] [STA] Stats from: host2
host2        | 2018-02-19 13:33:41 UTC    1/25 peers      8 KiB chain    7 KiB db  0 bytes queue   10 KiB sync  RPC:  0 conn,  3 req/s,  75 µs
host3        | 2018-02-19 13:33:42 UTC    1/25 peers      8 KiB chain    7 KiB db  0 bytes queue   10 KiB sync  RPC:  0 conn,  3 req/s, 102 µs
host2        | 2018-02-19 13:34:11 UTC    1/25 peers      8 KiB chain    7 KiB db  0 bytes queue   10 KiB sync  RPC:  0 conn,  3 req/s, 105 µs
host3        | 2018-02-19 13:34:12 UTC    1/25 peers      8 KiB chain    7 KiB db  0 bytes queue   10 KiB sync  RPC:  0 conn,  3 req/s, 161 µs
ThomasBouquet95 commented 6 years ago

I cleaned it again and started it again (with the modification in config/spec/authority_round.toml) and it works!

However, I still have issues with ethstats: my nodes disconnect constantly (they turn grey/offline) and I don't see the block number increasing (except sometimes when I reload the page...). I think it is a problem with my nodes.

dashboard_1  | 2018-02-19 13:44:28.551 [API] [BLK] Block: 4 from: host3
dashboard_1  | 2018-02-19 13:44:28.575 [API] [BLK] Block: 4 from: host2
dashboard_1  | 2018-02-19 13:44:29.341 [API] [CON] Connected host1
dashboard_1  | 2018-02-19 13:44:29.363 [API] [BLK] Block: 4 from: host1
dashboard_1  | 2018-02-19 13:44:29.370 [API] [STA] Stats from: host1
dashboard_1  | 2018-02-19 13:44:29.390 [API] [HIS] Got history from: host1
dashboard_1  | 2018-02-19 13:44:29.555 [API] [CON] Connection with: M6kJkpz ended: undefined
host2        | 2018-02-19 13:44:30 UTC Imported #5 4090…ee9f (0 txs, 0.00 Mgas, 0.49 ms, 0.56 KiB)
host1        | 2018-02-19 13:44:30 UTC Imported #5 4090…ee9f (0 txs, 0.00 Mgas, 2.45 ms, 0.56 KiB)
host3        | 2018-02-19 13:44:30 UTC Imported #5 4090…ee9f (0 txs, 0.00 Mgas, 1.36 ms, 0.56 KiB)
dashboard_1  | 2018-02-19 13:44:30.332 [API] [BLK] Block: 5 from: host1
dashboard_1  | 2018-02-19 13:44:30.406 [API] [CON] Connected host3
dashboard_1  | 2018-02-19 13:44:30.423 [API] [BLK] Block: 5 from: host3
dashboard_1  | 2018-02-19 13:44:30.430 [API] [STA] Stats from: host3
dashboard_1  | 2018-02-19 13:44:30.445 [API] [HIS] Got history from: host3
dashboard_1  | 2018-02-19 13:44:30.561 [API] [BLK] Block: 5 from: host2
host3        | 2018-02-19 13:44:30 UTC    2/25 peers     11 KiB chain    9 KiB db  0 bytes queue   11 KiB sync  RPC:  0 conn, 11 req/s,  49 µs
host1        | 2018-02-19 13:44:31 UTC    2/25 peers     11 KiB chain    9 KiB db  0 bytes queue   10 KiB sync  RPC:  0 conn, 11 req/s,  51 µs
host2        | 2018-02-19 13:44:31 UTC    2/25 peers     10 KiB chain    9 KiB db  0 bytes queue   10 KiB sync  RPC:  0 conn,  2 req/s,  44 µs
host2        | 2018-02-19 13:44:32 UTC Imported #6 76c1…00ab (0 txs, 0.00 Mgas, 1.02 ms, 0.56 KiB)
host1        | 2018-02-19 13:44:32 UTC Imported #6 76c1…00ab (0 txs, 0.00 Mgas, 0.49 ms, 0.56 KiB)
host3        | 2018-02-19 13:44:32 UTC Imported #6 76c1…00ab (0 txs, 0.00 Mgas, 0.81 ms, 0.56 KiB)
dashboard_1  | 2018-02-19 13:44:32.344 [API] [CON] Connection with: M6kJmEK ended: undefined
dashboard_1  | 2018-02-19 13:44:32.400 [API] [BLK] Block: 6 from: host3
dashboard_1  | 2018-02-19 13:44:32.556 [API] [BLK] Block: 6 from: host2
dashboard_1  | 2018-02-19 13:44:33.340 [API] [CON] Connected host1
dashboard_1  | 2018-02-19 13:44:33.359 [API] [BLK] Block: 6 from: host1
dashboard_1  | 2018-02-19 13:44:33.368 [API] [STA] Stats from: host1
dashboard_1  | 2018-02-19 13:44:33.393 [API] [HIS] Got history from: host1
dashboard_1  | 2018-02-19 13:44:36.059 [API] [CON] Connection with: M6kJlgi ended: undefined
dashboard_1  | 2018-02-19 13:44:36.947 [API] [CON] Connected host2
dashboard_1  | 2018-02-19 13:44:36.963 [API] [BLK] Block: 6 from: host2
dashboard_1  | 2018-02-19 13:44:36.972 [API] [STA] Stats from: host2
dashboard_1  | 2018-02-19 13:44:36.989 [API] [HIS] Got history from: host2
dashboard_1  | 2018-02-19 13:44:39.337 [API] [CON] Connection with: M6kJnCo ended: undefined
dashboard_1  | 2018-02-19 13:44:40.171 [API] [CON] Connected host1
dashboard_1  | 2018-02-19 13:44:40.188 [API] [BLK] Block: 6 from: host1
dashboard_1  | 2018-02-19 13:44:40.198 [API] [STA] Stats from: host1
dashboard_1  | 2018-02-19 13:44:40.223 [API] [HIS] Got history from: host1
dashboard_1  | 2018-02-19 13:44:42.406 [API] [CON] Connection with: M6kJmUv ended: undefined
ddorgan commented 6 years ago

@ThomasBouquet95 Can you try with the latest version of parity? Is this still an issue?
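
For example, a sketch assuming the stack is still managed through docker-compose and the compose file points at a floating tag such as parity/parity:stable:

docker-compose pull                    # fetch the newest image for the configured tag
docker-compose up -d --force-recreate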

ddorgan commented 6 years ago

@ThomasBouquet95 any update on this?

MaxXor commented 6 years ago

Hello, I'm having the same issue as @ThomasBouquet95. I'm currently testing with 3 PoA nodes + ethstats. Ethstats lists all nodes correctly, but the last node constantly switches "off" and "on" on the website. I'm using the latest parity, v1.11.1-

ddorgan commented 6 years ago

@MaxXor can you provide the docker-compose logs, please?

Also, if you wipe the data directory and start again from scratch, does the issue go away?
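
A rough sketch of that reset, assuming ./clean.sh wipes the generated deployment and chain data as in the earlier steps in this thread (adjust the parity-deploy.sh flags to your setup):

docker-compose logs > compose.log      # capture the logs to share
docker-compose down
./clean.sh
./parity-deploy.sh --config aura --nodes 3 --ethstats
docker-compose up -d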

MaxXor commented 6 years ago

It's basically the same log as shown in the post from @ThomasBouquet95. No, it doesn't go away.