CityOfZion / neo-scan

Blockchain explorer for NEO
https://neoscan.io
MIT License

Neoscan in Privatenet #406

Closed vncoelho closed 5 years ago

vncoelho commented 5 years ago

Hi @adrienmo, after the removal of notifications our neoscan has not been syncing reliably. It gets stuck very often, while the neo-clients keep generating blocks normally.

I believe it is partly due to some of our configuration. Can you please check whether the way we are setting up the containers is correct?

  eco-neo-scan-api-running:
    image: "registry.gitlab.com/cityofzion/neo-scan/api:56268341-master"
    container_name: "eco-neo-scan-api-running"
    ports:
      - "4000:4000"
    #command: /bin/true #disable neo-scan
    environment:
      PORT: 4000
      HOST: localhost
      NEO_SEEDS: "http://eco-neo-csharp-node1-running:30333;http://eco-neo-csharp-noderpc1-running:30334;http://eco-neo-csharp-noderpc1-running:30337"
      DB_HOSTNAME: eco-neo-scan-postgresql-running
      DB_USERNAME: postgres
      DB_PASSWORD: postgres
      DB_DATABASE: neoscan_prodv
      REPLACE_OS_VARS: "true"
    depends_on:
      - eco-neo-scan-postgresql-running
      - eco-neo-scan-sync-running
      - eco-neo-csharp-node1-running
      - eco-neo-csharp-node2-running
      - eco-neo-csharp-noderpc1-running
    entrypoint: bash -c "sleep 10 && /start.sh"
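    # Note: the healthcheck below only opens a raw TCP socket to port 4000 via bash's
    # /dev/tcp, so it checks that the API port is listening, not that syncing is healthy.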
    healthcheck:
      interval: 30s
      retries: 3
      start_period: 20s
      test:
        - CMD
        - bash
        - '-c'
        - exec 6<>/dev/tcp/127.0.0.1/4000
      timeout: 10s
    networks:
      - neo_scan_internal
      - private_net

  eco-neo-scan-sync-running:
    container_name: "eco-neo-scan-sync-running"
    image: "registry.gitlab.com/cityofzion/neo-scan/sync:56268341-master"
    depends_on:
      - eco-neo-scan-postgresql-running
      - eco-neo-csharp-node1-running
      - eco-neo-csharp-node2-running
      - eco-neo-csharp-node3-running
      - eco-neo-csharp-node4-running
      - eco-neo-csharp-noderpc1-running
    environment:
      NEO_SEEDS: "http://eco-neo-csharp-node1-running:30333;http://eco-neo-csharp-noderpc1-running:30334;http://eco-neo-csharp-noderpc1-running:30337"
      DB_HOSTNAME: eco-neo-scan-postgresql-running
      DB_USERNAME: postgres
      DB_PASSWORD: postgres
      DB_DATABASE: neoscan_prodv
      REPLACE_OS_VARS: "true"
    entrypoint: bash -c "sleep 5 && /start.sh"
    networks:
      - neo_scan_internal
      - private_net

  eco-neo-scan-postgresql-running:
    image: postgres:10.5
    container_name: "eco-neo-scan-postgresql-running"
    expose:
      - 5432
    depends_on:
      - eco-neo-csharp-node1-running
      - eco-neo-csharp-node2-running
      - eco-neo-csharp-node3-running
      - eco-neo-csharp-node4-running
      - eco-neo-csharp-noderpc1-running
    environment:
      POSTGRES_DB: neoscan_prodv
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
    healthcheck:
      test:
        - CMD
        - bash
        - '-c'
        - exec 6<>/dev/tcp/127.0.0.1/5432
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s
    networks:
      - neo_scan_internal
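
For reference, a quick way to sanity-check that every entry in NEO_SEEDS actually answers JSON-RPC requests (a sketch, assuming it is run from a container attached to the same private_net network; getblockcount is the standard NEO RPC call):

# Ask each seed for its current block height; a node that hangs or returns
# nothing here is a likely reason for the sync getting stuck.
for seed in http://eco-neo-csharp-node1-running:30333 \
            http://eco-neo-csharp-noderpc1-running:30334 \
            http://eco-neo-csharp-noderpc1-running:30337; do
  curl -s -m 5 -X POST "$seed" \
    -H 'Content-Type: application/json' \
    -d '{"jsonrpc":"2.0","method":"getblockcount","params":[],"id":1}'
  echo "  <- $seed"
done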

adrienmo commented 5 years ago

Everything looks fine to me. Do you have some logs from the sync container when it gets stuck?

vncoelho commented 5 years ago

@adrienmo, thanks for the reply.

I don't actually know the log commands very well... haha. Can you recommend a tutorial, or just send me the commands?

adrienmo commented 5 years ago

docker logs --tail 200 -f eco-neo-scan-sync-running

This should output the last 200 lines of the log and display any new lines "live". Since you set container_name it should work as-is; otherwise you can always run docker ps to check the name of the running container!
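
If the exact name is in doubt, a filtered listing narrows it down (a sketch; the name filter below just matches the container names from the compose file above):

# Show only the neo-scan containers together with their status column.
docker ps --filter "name=eco-neo-scan" --format "table {{.Names}}\t{{.Status}}"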

vncoelho commented 5 years ago

Ahh, I got it, @adrienmo. So it really is just the container logs... haha. I thought it was inside another process in the container. I am going to investigate and report back asap; if you want, you can leave this closed in the meantime.

adrienmo commented 5 years ago

If there is an issue I expect it to show up in the container logs; if there is nothing there, I would probably need to connect to the node to debug!

vncoelho commented 5 years ago

2019-04-22 13:43:56.637 [info] [neoscan_sync@127.0.0.1] 0.4 blocks/s, 0.4 transactions/s, 4239.5 download avg time, 8148.0 insert avg time
2019-04-22 13:43:56.637 [info] [neoscan_sync@127.0.0.1] 0 ratio_insert, 0 ratio_download
2019-04-22 13:44:01.638 [info] [neoscan_sync@127.0.0.1] 0.6 blocks/s, 0.6 transactions/s, 7835.333333333333 download avg time, 5417.0 insert avg time
2019-04-22 13:44:01.638 [info] [neoscan_sync@127.0.0.1] 0 ratio_insert, 0 ratio_download
2019-04-22 13:44:06.640 [info] [neoscan_sync@127.0.0.1] 0.4 blocks/s, 0.4 transactions/s, 3772.5 download avg time, 2938.5 insert avg time
2019-04-22 13:44:06.640 [info] [neoscan_sync@127.0.0.1] 0 ratio_insert, 0 ratio_download
2019-04-22 13:44:11.640 [info] [neoscan_sync@127.0.0.1] 0.4 blocks/s, 0.4 transactions/s, 4144.5 download avg time, 3604.5 insert avg time
2019-04-22 13:44:11.640 [info] [neoscan_sync@127.0.0.1] 0 ratio_insert, 0 ratio_download
2019-04-22 13:44:16.641 [info] [neoscan_sync@127.0.0.1] 0.2 blocks/s, 0.2 transactions/s, 2290.0 download avg time, 2732.0 insert avg time
2019-04-22 13:44:16.641 [info] [neoscan_sync@127.0.0.1] 0 ratio_insert, 0 ratio_download
2019-04-22 13:45:14.559 [error] [neoscan_sync@127.0.0.1] error while downloading block {991, :exit, {:timeout, {Task.Supervised, :stream, [60000]}}}
2019-04-22 13:46:14.564 [error] [neoscan_sync@127.0.0.1] error while downloading block {991, :exit, {:timeout, {Task.Supervised, :stream, [60000]}}}
(...)
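
The timeout above seems to mean the sync worker could not download block 991 within 60 seconds. A quick way to check whether one of the RPC seeds can actually serve that block (a sketch, again assuming access to the private_net network; getblock with a block index and verbose flag is the standard NEO RPC call):

# Request block 991 directly from one of the seeds listed in NEO_SEEDS;
# a slow or missing response points at the node rather than at neoscan itself.
curl -s -m 60 -X POST http://eco-neo-csharp-noderpc1-running:30334 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"getblock","params":[991,1],"id":1}'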

vncoelho commented 5 years ago

@adrienmo, I forgot to mention: it usually happens in environments with low computational resources. When more resources are available it tends to happen a little less often.

adrienmo commented 5 years ago

After getting this error, does it resume or does it stay stuck?

vncoelho commented 5 years ago

Lost in the jungle.

jeroenLu commented 5 years ago

After getting this error, does it resume or does it stay stuck?

I got the same error while running my NEO private net and NEO scan. NEO Scan is stuck after the error.

khanhj commented 5 years ago

I got the same error.

adrienmo commented 5 years ago

@vncoelho can you retry with this version: 62150786-better-logging? I have added some logs, so if it fails we will have a stack trace.

vncoelho commented 5 years ago

Both API and Sync, right?
registry.gitlab.com/cityofzion/neo-scan/sync:62150786-better-logging
registry.gitlab.com/cityofzion/neo-scan/api:62150786-better-logging

https://neocompiler.io/ has been updated. As soon as we have the log we will post it here. Thanks,

adrienmo commented 5 years ago

Yes, correct, but actually only sync is useful.

vncoelho commented 5 years ago

@adrienmo, apart from the better logging, did it fix or modify anything else? The error has not appeared so far. But as previously said, it happens under specific conditions, sometimes more often when computational resources are lower.

adrienmo commented 5 years ago

@vncoelho no, it does not modify anything... It will just print out the stack trace if a block download fails (I suspect a parsing issue), but maybe this was the culprit: https://github.com/CityOfZion/neo-scan/pull/408 ?

vncoelho commented 5 years ago

Maybe it was these commits here: [screenshot of the commit list attached in the original comment].

But I just got stuck again; I will open another issue, because it is a different problem.

vncoelho commented 5 years ago

@adrienmo, I suspect that the problem has been fixed; neoscan has not been getting stuck since upgrading to better-logging.

adrienmo commented 5 years ago

@vncoelho OK, I will merge the branch and close the issue. On a side note, this may be interesting for you: I created a new version of the docker image that combines sync and api, so you only need one container instead of two. It is useful if you don't need to scale the API to several containers, which I guess is your case:

registry.gitlab.com/cityofzion/neo-scan/full:latest
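
A minimal sketch of running the combined image in place of the separate api and sync containers, assuming it accepts the same environment variables as those services above (the variables and network name are taken from the compose file in this thread, not from documentation of the full image):

docker run -d --name eco-neo-scan-full-running \
  --network private_net \
  -p 4000:4000 \
  -e PORT=4000 \
  -e HOST=localhost \
  -e NEO_SEEDS="http://eco-neo-csharp-node1-running:30333;http://eco-neo-csharp-noderpc1-running:30334;http://eco-neo-csharp-noderpc1-running:30337" \
  -e DB_HOSTNAME=eco-neo-scan-postgresql-running \
  -e DB_USERNAME=postgres \
  -e DB_PASSWORD=postgres \
  -e DB_DATABASE=neoscan_prodv \
  -e REPLACE_OS_VARS=true \
  registry.gitlab.com/cityofzion/neo-scan/full:latest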

vncoelho commented 5 years ago

Thanksssss, @adrienmo. I will switch to this image as soon as possible.

But in general we do not use the latest tag; we prefer to keep strict control over the version. Can you please share a link to these images? If it is not already there, maybe it could go in the README.

Thanks for everything, my friend, and congratulations on the great job you are all doing.

We hope to keep improving minor things over the coming months. NEO 3.0 will give us many opportunities; neoscan is already almost fully prepared for them, but we can take even more advantage of the possibilities.

adrienmo commented 5 years ago

@vncoelho the latest tag is this one: 63461477-master
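
So the combined image can be pinned the same way instead of using latest (assuming the full image is published under that CI tag as well):

docker pull registry.gitlab.com/cityofzion/neo-scan/full:63461477-master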

vncoelho commented 5 years ago

Updated to full and it is working as expected!

Thanks for keeping this image updated, @adrienmo.