tronprotocol / java-tron

Java implementation of the Tron whitepaper
GNU Lesser General Public License v3.0

Node not syncing post mandatory upgrade #5807

Closed: bmalepaty closed this issue 3 months ago

bmalepaty commented 5 months ago

I tried setting up a Tron node through a Docker container with this compose file:

version: "3.7"
services:
  node:
    image: tronprotocol/java-tron:GreatVoyage-v4.7.4
    restart: on-failure
    ports:

On starting this container, block syncing stopped at height 60348877. Can you please suggest next steps?

Lin-MayH commented 4 months ago

I have the same problem. How can it be solved?

forfreeday commented 4 months ago

@bmalepaty @Lin-MayH Do you have any more logs? You did not specify a configuration file in your startup command; you can refer to this command:

docker run -d --name="java-tron-4.7.4" \
-p 8090:8090 \
-p 18888:18888 \
-p 50051:50051 \
--restart always  \
tronprotocol/java-tron:GreatVoyage-v4.7.4 \
-c /java-tron/config/main_net_config.conf

bmalepaty commented 4 months ago

As suggested, I added the config file too. This is the new docker compose file:

version: "3.7"
services:
  node:
    image: tronprotocol/java-tron:GreatVoyage-v4.7.4
    restart: on-failure
    ports:
      - "7332:8090"
      - "18888:18888"
      - "50051:50051"
    volumes:
      - /TronDisk/output-directory/output-directory:/java-tron/output-directory
      - /TronDisk/logs:/logs
      - /TronDisk/main_net_config.conf:/java-tron/main_net_config.conf
    command:
      - "-c"
      - "/java-tron/main_net_config.conf"
    container_name: tron_fullnode

Node is up but not syncing past block 60348877.

Logs are empty: [image]
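A quick way to confirm whether the node is actually stuck is to poll the HTTP API twice and compare block numbers. A minimal sketch, assuming the host port mapping 7332:8090 from the compose file above and a local shell:

# Poll the current block number twice, 60s apart; identical numbers mean the node is stuck
curl -s http://127.0.0.1:7332/wallet/getnowblock | grep -o '"number":[0-9]*'
sleep 60
curl -s http://127.0.0.1:7332/wallet/getnowblock | grep -o '"number":[0-9]*'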

Lin-MayH commented 4 months ago

> @bmalepaty @Lin-MayH Do you have any more logs? You did not specify a configuration file in your startup command; you can refer to this command:
>
> docker run -d --name="java-tron-4.7.4" \
> -p 8090:8090 \
> -p 18888:18888 \
> -p 50051:50051 \
> --restart always  \
> tronprotocol/java-tron:GreatVoyage-v4.7.4 \
> -c /java-tron/config/main_net_config.conf

Peer /34.254.202.252:18888 connect time: 30s [447ms] last know block num: 0 needSyncFromPeer:true needSyncFromUs:false syncToFetchSize:4000 syncToFetchSizePeekNum:61076290 syncBlockRequestedSize:24 remainNum:49527 syncChainRequested:0 blockInProcess:576

Peer /44.208.138.167:18888 connect time: 25s [453ms] last know block num: 0 needSyncFromPeer:true needSyncFromUs:false syncToFetchSize:4000 syncToFetchSizePeekNum:61076290 syncBlockRequestedSize:30 remainNum:52355 syncChainRequested:0 blockInProcess:470

It keeps trying to sync block 61076290.
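For a fuller view of the sync state summarized in the peer lines above, the HTTP API also exposes /wallet/getnodeinfo. A minimal sketch, assuming the default HTTP port 8090 on localhost:

# beginSyncNum, block and solidityBlock in the response summarize local sync progress
curl -s http://127.0.0.1:8090/wallet/getnodeinfo | head -c 2000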

Lin-MayH commented 4 months ago


More logs:

19:40:26.913 INFO [peerClient-10] net Receive message from peer: /95.217.62.144:18888, type: BLOCK

at org.tron.core.net.service.sync.SyncService.handleSyncBlock(SyncService.java:269)
at org.tron.core.net.service.sync.SyncService.lambda$init$1(SyncService.java:88)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)

19:40:27.035 INFO [sync-handle-block] DB Pending tx size: 0.
19:40:27.035 ERROR [sync-handle-block] net Process block failed, Num:61076290,ID:0000000003a3f34237d3548b746165dcaf881b4f864d4dc7c55bdabe42630e03, reason: Validate TransferContract error, balance is not sufficient.
19:40:27.035 ERROR [sync-handle-block] net Process sync block Num:61076290,ID:0000000003a3f34237d3548b746165dcaf881b4f864d4dc7c55bdabe42630e03 failed, type: 10, bad block
19:40:27.036 INFO [sync-handle-block] net Send peer /159.138.92.82:35172 message type: P2P_DISCONNECT reason: BAD_BLOCK
19:40:27.036 INFO [sync-handle-block] net Send peer /15.235.160.230:18888 message type: P2P_DISCONNECT reason: BAD_BLOCK
19:40:27.036 INFO [sync-handle-block] net Send peer /95.217.62.144:18888 message type: P2P_DISCONNECT reason: BAD_BLOCK
19:40:27.036 INFO [peerClient-14] net Close channel:/15.235.160.230:18888
19:40:27.036 INFO [peerClient-14] net Peer stats: channels 2, activePeers 2, active 1, passive 1
19:40:27.036 INFO [peerClient-10] net Close channel:/95.217.62.144:18888
19:40:27.036 INFO [peerClient-10] net Peer stats: channels 1, activePeers 1, active 0, passive 1

forfreeday commented 4 months ago

@Lin-MayH Are you using LiteFullNode data? Which date's backup are you using?

19:40:27.035 ERROR [sync-handle-block] net Process block failed, Num:61076290,ID:0000000003a3f34237d3548b746165dcaf881b4f864d4dc7c55bdabe42630e03, reason: Validate TransferContract error, balance is not sufficient.
19:40:27.035 ERROR [sync-handle-block] net Process sync block Num:61076290,ID:0000000003a3f34237d3548b746165dcaf881b4f864d4dc7c55bdabe42630e03 failed, type: 10, bad block
forfreeday commented 4 months ago

@bmalepaty You can try using this command: docker logs -f -t --tail 1000 <container_id>

Lin-MayH commented 4 months ago

> @Lin-MayH Are you using LiteFullNode data? Which date's backup are you using?
>
> 19:40:27.035 ERROR [sync-handle-block] net Process block failed, Num:61076290,ID:0000000003a3f34237d3548b746165dcaf881b4f864d4dc7c55bdabe42630e03, reason: Validate TransferContract error, balance is not sufficient.
> 19:40:27.035 ERROR [sync-handle-block] net Process sync block Num:61076290,ID:0000000003a3f34237d3548b746165dcaf881b4f864d4dc7c55bdabe42630e03 failed, type: 10, bad block

Yes, I am using the LiteFullNode data from this link: http://3.219.199.168/backup20240423/LiteFullNode_output-directory.tgz

forfreeday commented 4 months ago

@Lin-MayH "balance is not sufficient" suggests the database may be corrupted; try replacing the database again.
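A minimal sketch of such a database replacement, assuming the container name and host paths from the compose file earlier in this thread and the LiteFullNode backup URL linked above; check the archive's top-level directory name before restarting:

# Stop the node, move the possibly-corrupted data aside, restore a snapshot, restart
docker stop tron_fullnode
mv /TronDisk/output-directory/output-directory /TronDisk/output-directory/output-directory.bad
wget http://3.219.199.168/backup20240423/LiteFullNode_output-directory.tgz
tar -xzf LiteFullNode_output-directory.tgz -C /TronDisk/output-directory/
docker start tron_fullnode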

Lin-MayH commented 4 months ago

> @Lin-MayH "balance is not sufficient" suggests the database may be corrupted; try replacing the database again.

Okay. Originally I was using version 4.7.3.1; I will just update to the latest version.

forfreeday commented 4 months ago

Can you provide a detailed description of your upgrade process? Was Docker forcibly shut down during the process, and does this issue still persist after upgrading to version 4.7.4?

adityaNirvana commented 4 months ago

I am also facing issues with the latest version while trying to sync a FullNode. I am running the jar directly, not with Docker.

Command: $ java -Xmx24g -XX:+UseConcMarkSweepGC -jar FullNode.jar -c main_net_config.conf

net {
  type = mainnet
  # type = testnet
}

storage {
  # Directory for storing persistent data
  db.engine = "LEVELDB",
  db.sync = false,
  db.directory = "database",
  index.directory = "index",
  transHistory.switch = "on",
  # You can customize these 14 databases' configs:

  # account, account-index, asset-issue, block, block-index,
  # block_KDB, peers, properties, recent-block, trans,
  # utxo, votes, witness, witness_schedule.

  # Otherwise, db configs will remain default and data will be stored in
  # the "output-directory" path, or the path set by "-d" ("--output-directory").
  # these settings can improve leveldb performance .... start
  # note: if this increases the process's open fds, check your ulimit if a 'too many open files' error occurs
  # see https://github.com/tronprotocol/tips/blob/master/tip-343.md for detail
  # if you find block sync has lower performance, you can try these settings
  #default = {
  #  maxOpenFiles = 100
  #}
  #defaultM = {
  #  maxOpenFiles = 500
  #}
  #defaultL = {
  #  maxOpenFiles = 1000
  #}
  # these settings can improve leveldb performance .... end
  # Attention: name is a required field that must be set !!!
  properties = [
    //    {
    //      name = "account",
    //      path = "storage_directory_test",
    //      createIfMissing = true,
    //      paranoidChecks = true,
    //      verifyChecksums = true,
    //      compressionType = 1,        // compressed with snappy
    //      blockSize = 4096,           // 4  KB =         4 * 1024 B
    //      writeBufferSize = 10485760, // 10 MB = 10 * 1024 * 1024 B
    //      cacheSize = 10485760,       // 10 MB = 10 * 1024 * 1024 B
    //      maxOpenFiles = 100
    //    },
    //    {
    //      name = "account-index",
    //      path = "storage_directory_test",
    //      createIfMissing = true,
    //      paranoidChecks = true,
    //      verifyChecksums = true,
    //      compressionType = 1,        // compressed with snappy
    //      blockSize = 4096,           // 4  KB =         4 * 1024 B
    //      writeBufferSize = 10485760, // 10 MB = 10 * 1024 * 1024 B
    //      cacheSize = 10485760,       // 10 MB = 10 * 1024 * 1024 B
    //      maxOpenFiles = 100
    //    },
  ]

  needToUpdateAsset = true

  //dbSettings is needed when using rocksdb as the storage implementation (db.engine="ROCKSDB").
  //we strongly recommend that you do not modify it unless you clearly understand every item's meaning.
  dbSettings = {
    levelNumber = 7
    //compactThreads = 32
    blocksize = 64  // n * KB
    maxBytesForLevelBase = 256  // n * MB
    maxBytesForLevelMultiplier = 10
    level0FileNumCompactionTrigger = 4
    targetFileSizeBase = 256  // n * MB
    targetFileSizeMultiplier = 1
  }

  //backup settings when using rocksdb as the storage implementation (db.engine="ROCKSDB").
  //if you want to use the backup plugin, please confirm that db.engine="ROCKSDB" is set above.
  backup = {
    enable = false  // indicate whether enable the backup plugin
    propPath = "prop.properties" // record which bak directory is valid
    bak1path = "bak1/database" // you must set two backup directories to prevent the application halting unexpectedly (e.g. kill -9).
    bak2path = "bak2/database"
    frequency = 10000   // indicate backup db once every 10000 blocks processed.
  }

  balance.history.lookup = false

  # checkpoint.version = 2
  # checkpoint.sync = true

  # the estimated number of block transactions (default 1000, min 100, max 10000).
  # so the total number of cached transactions is 65536 * txCache.estimatedTransactions
  # txCache.estimatedTransactions = 1000

  # data root setting, for check data, currently, only reward-vi is used.

#   merkleRoot = {
#   reward-vi = 9debcb9924055500aaae98cdee10501c5c39d4daa75800a996f4bdda73dbccd8 // main-net, Sha256Hash, hexString
#   }
}

node.discovery = {
  enable = true
  persist = true
}

# custom stop condition
#node.shutdown = {
#  BlockTime  = "54 59 08 * * ?" # if block header time in persistent db matched.
#  BlockHeight = 33350800 # if block header height in persistent db matched.
#  BlockCount = 12 # block sync count after node start.
#}

node.backup {
  # udp listen port, each member should have the same configuration
  port = 10001

  # my priority, each member should use different priority
  priority = 8

  # time interval to send keepAlive message, each member should have the same configuration
  keepAliveInterval = 3000

  # peer's ip list, can't contain mine
  members = [
    # "ip",
    # "ip"
  ]
}

crypto {
  engine = "eckey"
}
# prometheus metrics start
node.metrics = {
 prometheus{
   enable=true
   port="9527"
 }
}

# prometheus metrics end
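# With the prometheus listener above enabled, the metrics can be spot-checked
# locally, e.g.: curl -s http://127.0.0.1:9527/metrics | head
# (a sketch; host and port assume the defaults configured above)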

node {
  # trust node for solidity node
  # trustNode = "ip:port"
  trustNode = "127.0.0.1:50051"

  # expose extension api to public or not
  walletExtensionApi = true

  listen.port = 18888

  connection.timeout = 2

  fetchBlock.timeout = 200

  tcpNettyWorkThreadNum = 0

  udpNettyWorkThreadNum = 1

  # Number of validate sign thread, default availableProcessors / 2
  # validateSignThreadNum = 16

  maxConnections = 30

  minConnections = 8

  minActiveConnections = 3

  maxConnectionsWithSameIp = 2

  maxHttpConnectNumber = 50

  minParticipationRate = 15

  isOpenFullTcpDisconnect = false

  p2p {
    version = 11111
  }

  active = [
    # Active establish connection in any case
    # Sample entries:
    # "ip:port",
    # "ip:port"
  ]

  passive = [
    # Passive accept connection in any case
    # Sample entries:
    # "ip:port",
    # "ip:port"
  ]

  fastForward = [
    "100.26.245.209:18888",
    "15.188.6.125:18888"
  ]

  http {
    fullNodeEnable = true
    fullNodePort = 8090
    solidityEnable = true
    solidityPort = 8091
  }

  rpc {
    port = 50051
    solidityPort = 50061
    # Number of gRPC thread, default availableProcessors / 2
    # thread = 16

    # The maximum number of concurrent calls permitted for each incoming connection
    # maxConcurrentCallsPerConnection =

    # The HTTP/2 flow control window, default 1MB
    # flowControlWindow =

    # Connection being idle for longer than which will be gracefully terminated
    maxConnectionIdleInMillis = 60000

    # Connection lasting longer than which will be gracefully terminated
    # maxConnectionAgeInMillis =

    # The maximum message size allowed to be received on the server, default 4MB
    # maxMessageSize =

    # The maximum size of header list allowed to be received, default 8192
    # maxHeaderListSize =

    # Transactions can only be broadcast if the minimum number of effective connections is reached.
    minEffectiveConnection = 1

    # The switch of the reflection service, effective for all gRPC services
    # reflectionService = true
  }

  # number of solidity threads in the FullNode.
  # If accessing the solidity rpc and http interfaces times out, you could increase the number of threads;
  # the default value is the number of cpu cores of the machine.
  #solidity.threads = 8

  # Limits the maximum percentage (default 75%) of producing block interval
  # to provide sufficient time to perform other operations e.g. broadcast block
  # blockProducedTimeOut = 75

  # Limits the maximum number (default 700) of transaction from network layer
  # netMaxTrxPerSecond = 700

  # Whether to enable the node detection function, default false
  # nodeDetectEnable = false

  # use your ipv6 address for node discovery and tcp connection, default false
  # enableIpv6 = false

  # if your node's highest block num is lower than all your peers', try to acquire a new connection. default false
  # effectiveCheckEnable = false

  # Dynamic loading configuration function, disabled by default
  # dynamicConfig = {
    # enable = false
    # Configuration file change check interval, default is 600 seconds
    # checkInterval = 600
  # }

  dns {
    # dns urls to get nodes, url format tree://{pubkey}@{domain}, default empty
    treeUrls = [
      #"tree://AKMQMNAJJBL73LXWPXDI4I5ZWWIZ4AWO34DWQ636QOBBXNFXH3LQS@main.trondisco.net", //offical dns tree
    ]

    # enable or disable dns publish, default false
    # publish = false

    # dns domain to publish nodes, required if publish is true
    # dnsDomain = "nodes1.example.org"

    # dns private key used to publish, required if publish is true, hex string of length 64
    # dnsPrivate = "b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291"

    # known dns urls to publish if publish is true, url format tree://{pubkey}@{domain}, default empty
    # knownUrls = [
    #"tree://APFGGTFOBVE2ZNAB3CSMNNX6RRK3ODIRLP2AA5U4YFAA6MSYZUYTQ@nodes2.example.org",
    # ]

    # staticNodes = [
    # static nodes to published on dns
    # Sample entries:
    # "ip:port",
    # "ip:port"
    # ]

    # merge several nodes into a leaf of tree, should be 1~5
    # maxMergeSize = 5

    # data on dns is updated only when the percentage of changed nodes is bigger than the threshold
    # changeThreshold = 0.1

    # dns server to publish, required if publish is true, only aws or aliyun is supported
    # serverType = "aws"

    # access key id of aws or aliyun api, required if publish is true, string
    # accessKeyId = "your-key-id"

    # access key secret of aws or aliyun api, required if publish is true, string
    # accessKeySecret = "your-key-secret"

    # if publish is true and serverType is aliyun, it's the endpoint of the aliyun dns server, string
    # aliyunDnsEndpoint = "alidns.aliyuncs.com"

    # if publish is true and serverType is aws, it's region of aws api, such as "eu-south-1", string
    # awsRegion = "us-east-1"

    # if publish is true and server-type is aws, it's host zone id of aws's domain, string
    # awsHostZoneId = "your-host-zone-id"
  }

  # open the history query APIs (http & GRPC) when the node is a lite fullnode,
  # like {getBlockByNum, getBlockByID, getTransactionByID...}.
  # default: false.
  # note: the above APIs may return null even if blocks and transactions actually are on the blockchain
  # when opened on a lite fullnode. only open it if the consequences are clearly known
  # openHistoryQueryWhenLiteFN = false

  jsonrpc {
    # Note: If you turn on jsonrpc and run it for a while and then turn it off, you will not
    # be able to get the data from eth_getLogs for that period of time.

    httpFullNodeEnable = true
    httpFullNodePort = 8545
    httpSolidityEnable = true
    httpSolidityPort = 8555
    # httpPBFTEnable = true
    # httpPBFTPort = 8565
  }

  # Disabled api list, it will work for http, rpc and pbft, both fullnode and soliditynode,
  # but not jsonrpc.
  # Sample: The setting is case insensitive, GetNowBlock2 is equal to getnowblock2
  #
  # disabledApi = [
  #   "getaccount",
  #   "getnowblock2"
  # ]
}

## rate limiter config
rate.limiter = {
  # Every api could be set a specific rate limit strategy. Three strategies are supported: GlobalPreemptibleAdapter, IPQPSRateLimiterAdapter and QpsRateLimiterAdapter.
  # GlobalPreemptibleAdapter: permit is the number of preemptible resources; every client must acquire one resource
  #       before making the request and the resource is released automatically after the response is received. permit should be an Integer.
  # QpsRateLimiterAdapter: qps is the average request count per second supported by the server; it could be a Double or an Integer.
  # IPQPSRateLimiterAdapter: similar to QpsRateLimiterAdapter, but per IP; qps could be a Double or an Integer.
  # If none is set, the "default strategy" is used. The default strategy is based on QpsRateLimiterAdapter, with qps set to 10000.
  #
  # Sample entries:
  #
  http = [
    #  {
    #    component = "GetNowBlockServlet",
    #    strategy = "GlobalPreemptibleAdapter",
    #    paramString = "permit=1"
    #  },

    #  {
    #    component = "GetAccountServlet",
    #    strategy = "IPQPSRateLimiterAdapter",
    #    paramString = "qps=1"
    #  },

    #  {
    #    component = "ListWitnessesServlet",
    #    strategy = "QpsRateLimiterAdapter",
    #    paramString = "qps=1"
    #  }
  ],

  rpc = [
    #  {
    #    component = "protocol.Wallet/GetBlockByLatestNum2",
    #    strategy = "GlobalPreemptibleAdapter",
    #    paramString = "permit=1"
    #  },

    #  {
    #    component = "protocol.Wallet/GetAccount",
    #    strategy = "IPQPSRateLimiterAdapter",
    #    paramString = "qps=1"
    #  },

    #  {
    #    component = "protocol.Wallet/ListWitnesses",
    #    strategy = "QpsRateLimiterAdapter",
    #    paramString = "qps=1"
    #  },
  ]

  # global qps, default 50000
  # global.qps = 50000
  # IP-based global qps, default 10000
  # global.ip.qps = 10000
}

seed.node = {
  # List of the seed nodes
  # Seed nodes are stable full nodes
  # example:
  # ip.list = [
  #   "ip:port",
  #   "ip:port"
  # ]
  ip.list = [
    "3.225.171.164:18888",
    "52.53.189.99:18888",
    "18.196.99.16:18888",
    "34.253.187.192:18888",
    "18.133.82.227:18888",
    "35.180.51.163:18888",
    "54.252.224.209:18888",
    "18.231.27.82:18888",
    "52.15.93.92:18888",
    "34.220.77.106:18888",
    "15.207.144.3:18888",
    "13.124.62.58:18888",    
    "54.151.226.240:18888",
    "35.174.93.198:18888",
    "18.210.241.149:18888",
    "54.177.115.127:18888",
    "54.254.131.82:18888",
    "18.167.171.167:18888",
    "54.167.11.177:18888",
    "35.74.7.196:18888",
    "52.196.244.176:18888",
    "54.248.129.19:18888",
    "43.198.142.160:18888",
    "3.0.214.7:18888",
    "54.153.59.116:18888",
    "54.153.94.160:18888",
    "54.82.161.39:18888",
    "54.179.207.68:18888",
    "18.142.82.44:18888",
    "18.163.230.203:18888"
  ]
}

genesis.block = {
  # Reserve balance
  assets = [
    {
      accountName = "Zion"
      accountType = "AssetIssue"
      address = "TLLM21wteSPs4hKjbxgmH1L6poyMjeTbHm"
      balance = "99000000000000000"
    },
    {
      accountName = "Sun"
      accountType = "AssetIssue"
      address = "TXmVpin5vq5gdZsciyyjdZgKRUju4st1wM"
      balance = "0"
    },
    {
      accountName = "Blackhole"
      accountType = "AssetIssue"
      address = "TLsV52sRDL79HXGGm9yzwKibb6BeruhUzy"
      balance = "-9223372036854775808"
    }
  ]

  witnesses = [
    {
      address: THKJYuUmMKKARNf7s2VT51g5uPY6KEqnat,
      url = "http://GR1.com",
      voteCount = 100000026
    },
    {
      address: TVDmPWGYxgi5DNeW8hXrzrhY8Y6zgxPNg4,
      url = "http://GR2.com",
      voteCount = 100000025
    },
    {
      address: TWKZN1JJPFydd5rMgMCV5aZTSiwmoksSZv,
      url = "http://GR3.com",
      voteCount = 100000024
    },
    {
      address: TDarXEG2rAD57oa7JTK785Yb2Et32UzY32,
      url = "http://GR4.com",
      voteCount = 100000023
    },
    {
      address: TAmFfS4Tmm8yKeoqZN8x51ASwdQBdnVizt,
      url = "http://GR5.com",
      voteCount = 100000022
    },
    {
      address: TK6V5Pw2UWQWpySnZyCDZaAvu1y48oRgXN,
      url = "http://GR6.com",
      voteCount = 100000021
    },
    {
      address: TGqFJPFiEqdZx52ZR4QcKHz4Zr3QXA24VL,
      url = "http://GR7.com",
      voteCount = 100000020
    },
    {
      address: TC1ZCj9Ne3j5v3TLx5ZCDLD55MU9g3XqQW,
      url = "http://GR8.com",
      voteCount = 100000019
    },
    {
      address: TWm3id3mrQ42guf7c4oVpYExyTYnEGy3JL,
      url = "http://GR9.com",
      voteCount = 100000018
    },
    {
      address: TCvwc3FV3ssq2rD82rMmjhT4PVXYTsFcKV,
      url = "http://GR10.com",
      voteCount = 100000017
    },
    {
      address: TFuC2Qge4GxA2U9abKxk1pw3YZvGM5XRir,
      url = "http://GR11.com",
      voteCount = 100000016
    },
    {
      address: TNGoca1VHC6Y5Jd2B1VFpFEhizVk92Rz85,
      url = "http://GR12.com",
      voteCount = 100000015
    },
    {
      address: TLCjmH6SqGK8twZ9XrBDWpBbfyvEXihhNS,
      url = "http://GR13.com",
      voteCount = 100000014
    },
    {
      address: TEEzguTtCihbRPfjf1CvW8Euxz1kKuvtR9,
      url = "http://GR14.com",
      voteCount = 100000013
    },
    {
      address: TZHvwiw9cehbMxrtTbmAexm9oPo4eFFvLS,
      url = "http://GR15.com",
      voteCount = 100000012
    },
    {
      address: TGK6iAKgBmHeQyp5hn3imB71EDnFPkXiPR,
      url = "http://GR16.com",
      voteCount = 100000011
    },
    {
      address: TLaqfGrxZ3dykAFps7M2B4gETTX1yixPgN,
      url = "http://GR17.com",
      voteCount = 100000010
    },
    {
      address: TX3ZceVew6yLC5hWTXnjrUFtiFfUDGKGty,
      url = "http://GR18.com",
      voteCount = 100000009
    },
    {
      address: TYednHaV9zXpnPchSywVpnseQxY9Pxw4do,
      url = "http://GR19.com",
      voteCount = 100000008
    },
    {
      address: TCf5cqLffPccEY7hcsabiFnMfdipfyryvr,
      url = "http://GR20.com",
      voteCount = 100000007
    },
    {
      address: TAa14iLEKPAetX49mzaxZmH6saRxcX7dT5,
      url = "http://GR21.com",
      voteCount = 100000006
    },
    {
      address: TBYsHxDmFaRmfCF3jZNmgeJE8sDnTNKHbz,
      url = "http://GR22.com",
      voteCount = 100000005
    },
    {
      address: TEVAq8dmSQyTYK7uP1ZnZpa6MBVR83GsV6,
      url = "http://GR23.com",
      voteCount = 100000004
    },
    {
      address: TRKJzrZxN34YyB8aBqqPDt7g4fv6sieemz,
      url = "http://GR24.com",
      voteCount = 100000003
    },
    {
      address: TRMP6SKeFUt5NtMLzJv8kdpYuHRnEGjGfe,
      url = "http://GR25.com",
      voteCount = 100000002
    },
    {
      address: TDbNE1VajxjpgM5p7FyGNDASt3UVoFbiD3,
      url = "http://GR26.com",
      voteCount = 100000001
    },
    {
      address: TLTDZBcPoJ8tZ6TTEeEqEvwYFk2wgotSfD,
      url = "http://GR27.com",
      voteCount = 100000000
    }
  ]

  timestamp = "0" #2017-8-26 12:00:00

  parentHash = "0xe58f33f9baf9305dc6f82b9f1934ea8f0ade2defb951258d50167028c780351f"
}

// Optional. The default is empty.
// It is used when the witness account has set the witnessPermission.
// When it is not empty, the localWitnessAccountAddress represents the address of the witness account,
// and the localwitness is configured with the private key of the witnessPermissionAddress in the witness account.
// When it is empty, the localwitness is configured with the private key of the witness account.

//localWitnessAccountAddress =

localwitness = [
]

#localwitnesskeystore = [
#  "localwitnesskeystore.json"
#]

block = {
  needSyncCheck = true
  maintenanceTimeInterval = 21600000
  proposalExpireTime = 259200000 // 3 day: 259200000(ms)
}

# Transaction reference block, default is "solid"; configuring it to "head" may incur a TaPos error
# trx.reference.block = "solid" // head;solid;

# This property sets the number of milliseconds after creation at which a transaction expires; default value is 60000.
# trx.expiration.timeInMilliseconds = 60000

vm = {
  supportConstant = false
  maxEnergyLimitForConstant = 100000000
  minTimeRatio = 0.0
  maxTimeRatio = 20.0
  saveInternalTx = false

  # Indicates whether the node stores featured internal transactions, such as freeze, vote and so on
  # saveFeaturedInternalTx = false

  # In rare cases, transactions that will be within the specified maximum execution time (default 10(ms)) are re-executed and packaged
  # longRunningTime = 10

  # Indicates whether the node supports the estimate energy API.
  # estimateEnergy = false

  # Indicates the max retry count for executing a transaction when estimating energy.
  # estimateEnergyMaxRetry = 3
}

committee = {
  allowCreationOfContracts = 0  //mainnet:0 (reset by committee),test:1
  allowAdaptiveEnergy = 0  //mainnet:0 (reset by committee),test:1
}

event.subscribe = {
    native = {
      useNativeQueue = true // if true, use native message queue, else use event plugin.
      bindport = 5555 // bind port
      sendqueuelength = 1000 //max length of send queue
    }

    path = "" // absolute path of plugin
    server = "" // target server address to receive event triggers
    // dbname|username|password; if you want to create indexes for collections when the collections
    // do not exist, you can add a version and set it to 2, as dbname|username|password|version.
    // With version 2, if a collection does not exist, its index will be created automatically;
    // if a collection already exists, the index will not be created and you must create it manually.
    dbconfig = ""
    contractParse = true
    topics = [
        {
          triggerName = "block" // block trigger, the value can't be modified
          enable = false
          topic = "block" // plugin topic, the value could be modified
          solidified = false // if set true, just need solidified block, default is false
        },
        {
          triggerName = "transaction"
          enable = false
          topic = "transaction"
          solidified = false
          ethCompatible = false // if set true, add transactionIndex, cumulativeEnergyUsed, preCumulativeLogCount, logList, energyUnitPrice, default is false
        },
        {
          triggerName = "contractevent"
          enable = false
          topic = "contractevent"
        },
        {
          triggerName = "contractlog"
          enable = false
          topic = "contractlog"
          redundancy = false // if set true, contractevent will also be regarded as contractlog
        },
        {
          triggerName = "solidity" // solidity block trigger(just include solidity block number and timestamp), the value can't be modified
          enable = true            // the default value is true
          topic = "solidity"
        },
        {
          triggerName = "solidityevent"
          enable = false
          topic = "solidityevent"
        },
        {
          triggerName = "soliditylog"
          enable = false
          topic = "soliditylog"
          redundancy = false // if set true, solidityevent will also be regarded as soliditylog
        }
    ]

    filter = {
       fromblock = "" // the value could be "", "earliest" or a specified block number as the beginning of the queried range
       toblock = "" // the value could be "", "latest" or a specified block number as end of the queried range
       contractAddress = [
           "" // contract address you want to subscribe, if it's set to "", you will receive contract logs/events with any contract address.
       ]

       contractTopic = [
           "" // contract topic you want to subscribe, if it's set to "", you will receive contract logs/events with any contract topic.
       ]
    }
}

This is my conf file

Logs:

07:58:56.287 WARN  [peerClient-22] [net](PeerClient.java:72) Connect to peer /223.85.53.77:18888 fail, cause:connection timed out: /223.85.53.77:18888
07:58:56.288 INFO  [peerClient-22] [net](P2pChannelInitializer.java:45) Close channel:null
07:58:56.288 WARN  [peerClient-22] [net](ChannelManager.java:90) Notify Disconnect peer has no address.
07:58:56.334 INFO  [peerClient-26] [net](ChannelManager.java:214) Receive message from channel: /65.108.237.46:18888, [HelloMessage: from {
  address: "65.108.237.46"
  port: 18888
  nodeId: "\035\221\263\245D\b\203\233\\\337\251i\211\365!k\337@\n\303\366\351\035Q\003\330(WS-\354\323A8\360\314\003LK\243\276\v\254\262\bR\230\017\243\202:R\256\206~\257\341_\250PB\267\253\215"
}
network_id: 11111
code: 1
timestamp: 1714723136011
version: 1

07:58:56.334 INFO  [peerClient-26] [net](ChannelManager.java:148) Add peer /65.108.237.46:18888, total channels: 1
07:58:56.334 INFO  [peerClient-26] [net](HandshakeService.java:58) Handshake failed /65.108.237.46:18888, code: 1, reason: TOO_MANY_PEERS, networkId: 11111, version: 1
07:58:56.334 INFO  [peerClient-26] [net](ChannelManager.java:178) Try to close channel: /65.108.237.46:18888, reason: TOO_MANY_PEERS
07:58:56.334 INFO  [peerClient-26] [net](P2pChannelInitializer.java:45) Close channel:/65.108.237.46:18888
07:58:56.334 INFO  [peerClient-26] [net](ConnPoolService.java:261) Peer stats: channels 0, activePeers 0, active 0, passive 0
07:58:56.345 INFO  [peerClient-27] [net](ChannelManager.java:214) Receive message from channel: /5.9.68.240:18888, [HelloMessage: from {
  address: "5.9.68.240"
  port: 18888
  nodeId: "\030\374~\273]\337\034\340\325\214\2102`U\377`\323\323)\3328\"\2044\006\3568\372f\207]\235\226cPN\233\325\206\004TQ\226]}\0010\3643\350\266\300\217Vi@\314f)\342\233\317(\365"
}
network_id: 11111
code: 1
timestamp: 1714723136031
version: 1

07:58:56.345 INFO  [peerClient-27] [net](ChannelManager.java:148) Add peer /5.9.68.240:18888, total channels: 1
07:58:56.345 INFO  [peerClient-27] [net](HandshakeService.java:58) Handshake failed /5.9.68.240:18888, code: 1, reason: TOO_MANY_PEERS, networkId: 11111, version: 1
07:58:56.345 INFO  [peerClient-27] [net](ChannelManager.java:178) Try to close channel: /5.9.68.240:18888, reason: TOO_MANY_PEERS
07:58:56.345 INFO  [peerClient-27] [net](P2pChannelInitializer.java:45) Close channel:/5.9.68.240:18888
07:58:56.345 INFO  [peerClient-27] [net](ConnPoolService.java:261) Peer stats: channels 0, activePeers 0, active 0, passive 0
07:58:56.377 INFO  [peerClient-24] [net](ChannelManager.java:214) Receive message from channel: /188.187.190.7:18888, [HelloMessage: from {
  address: "188.187.190.7"
  port: 18888
  nodeId: "\362\r\320\304\251\275\271+\232\347\001T*\024\2725J\312\323\262Dc\305\234\206\023\0364]4a\305\006#U\247\342\037\255 <\347q\0235\226\342\277\276\216\354J\324\265\203\266?w\371\350<\236\272P"
}
network_id: 11111
code: 1
timestamp: 1714723135969
version: 1

07:58:56.377 INFO  [peerClient-24] [net](ChannelManager.java:148) Add peer /188.187.190.7:18888, total channels: 1
07:58:56.377 INFO  [peerClient-24] [net](HandshakeService.java:58) Handshake failed /188.187.190.7:18888, code: 1, reason: TOO_MANY_PEERS, networkId: 11111, version: 1
07:58:56.377 INFO  [peerClient-24] [net](ChannelManager.java:178) Try to close channel: /188.187.190.7:18888, reason: TOO_MANY_PEERS
07:58:56.377 INFO  [peerClient-24] [net](P2pChannelInitializer.java:45) Close channel:/188.187.190.7:18888
07:58:56.377 INFO  [peerClient-24] [net](ConnPoolService.java:261) Peer stats: channels 0, activePeers 0, active 0, passive 0
07:58:56.397 INFO  [peerClient-28] [net](P2pChannelInitializer.java:45) Close channel:/45.11.56.243:18888
07:58:56.397 INFO  [peerClient-28] [net](ConnPoolService.java:261) Peer stats: channels 0, activePeers 0, active 0, passive 0

Also, I can see this issue in my logs:

11:51:18.362 ERROR [peerWorker-11] [net](Channel.java:108) Close peer /136.175.9.27:51634, exception caught
java.lang.NullPointerException: null
    at org.tron.core.net.message.handshake.HelloMessage.<init>(HelloMessage.java:39)
    at org.tron.core.net.service.handshake.HandshakeService.sendHelloMessage(HandshakeService.java:130)
    at org.tron.core.net.service.handshake.HandshakeService.startHandshake(HandshakeService.java:33)
    at org.tron.core.net.P2pEventHandlerImpl.onConnect(P2pEventHandlerImpl.java:106)
    at org.tron.p2p.connection.business.handshake.HandshakeService.lambda$processMessage$0(HandshakeService.java:82)
    at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
    at org.tron.p2p.connection.business.handshake.HandshakeService.processMessage(HandshakeService.java:82)
    at org.tron.p2p.connection.ChannelManager.processMessage(ChannelManager.java:225)
    at org.tron.p2p.connection.socket.MessageHandler.decode(MessageHandler.java:51)
    at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:510)
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:449)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:327)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:299)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:93)
    at org.tron.p2p.stats.TrafficStats$TrafficStatHandler.channelRead(TrafficStats.java:36)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at java.base/java.lang.Thread.run(Thread.java:829)
11:51:18.362 INFO  [peerWorker-11] [net](P2pChannelInitializer.java:45) Close channel:/136.175.9.27:51634
11:51:18.362 INFO  [peerWorker-11] [net](ConnPoolService.java:261) Peer stats: channels 0, activePeers 0, active 0, passive 0

Please let me know if anyone has any leads here.

halibobo1205 commented 4 months ago

@adityaNirvana Try java -jar FullNode.jar -v. I guess you may be using a higher JDK version, such as JDK 11. Running java-tron requires a 64-bit Oracle JDK 1.8 to be installed; other JDK versions are not supported yet.
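A quick way to see which JVM is actually on the PATH, as a sketch:

# Expect a line like: java version "1.8.0_xxx" (or openjdk version "1.8.0_xxx")
java -version 2>&1 | head -n 1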

adityaNirvana commented 4 months ago

@halibobo1205 , thanks. It helped!

halibobo1205 commented 4 months ago

> @halibobo1205 , thanks. It helped!

@adityaNirvana Currently java-tron only supports JDK 8. To avoid this kind of situation again, do you think it is necessary to add runtime environment detection, so that the service exits if the detection fails?
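As a sketch of what such detection could look like from outside the JVM, here is a hypothetical launcher script (the script name is illustrative; the java flags match the command used earlier in this thread) that refuses to start FullNode.jar on anything but JDK 1.8:

#!/usr/bin/env bash
# start-fullnode.sh -- hypothetical guard: exit unless the active JVM is 1.8.x
ver=$(java -version 2>&1 | awk -F'"' '/version/ {print $2}')
case "$ver" in
  1.8.*)
    exec java -Xmx24g -XX:+UseConcMarkSweepGC -jar FullNode.jar -c main_net_config.conf
    ;;
  *)
    echo "Unsupported JDK version '$ver'; java-tron requires JDK 1.8" >&2
    exit 1
    ;;
esac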

uiayl commented 4 months ago

Java-tron version: 4.7.4
OS: Linux
Running: start.sh + config.conf directly, without Docker

I'm encountering syncing issues when using certain LiteFullNode data; blocks stop syncing at a certain height.

For data based on LiteFullNode0423 (http://3.219.199.168/backup20240423/LiteFullNode_output-directory.tgz), starting from 04/23, the node has synced successfully up till now. However, with the earlier 0418 and 0420 data (http://3.219.199.168/backup20240420/LiteFullNode_output-directory.tgz), syncing stopped at blocks 60930076 and 61022526 respectively. Their logs were different.

It looks like these issues are similar to Lin-MayH's above, but upgrading java-tron to the latest version 4.7.4 didn't help.

Now I'm using data based on LiteFullNode0423, but I still can't fix the syncing with the previous data. It makes me very worried that this kind of problem may happen again.

Logs

60930077

08:16:31.729 ERROR [sync-handle-block] [DB](Manager.java:1323) Validate TransferContract error, balance is not sufficient.
org.tron.core.exception.ContractValidateException: Validate TransferContract error, balance is not sufficient.
    at org.tron.core.actuator.TransferActuator.validate(TransferActuator.java:162)
    at org.tron.common.runtime.RuntimeImpl.execute(RuntimeImpl.java:63)
    at org.tron.core.db.TransactionTrace.exec(TransactionTrace.java:189)
    at org.tron.core.db.Manager.processTransaction(Manager.java:1455)
    at org.tron.core.db.Manager.processBlock(Manager.java:1745)
    at org.tron.core.db.Manager.applyBlock(Manager.java:1029)
    at org.tron.core.db.Manager.pushBlock(Manager.java:1315)
    at org.tron.core.net.TronNetDelegate.processBlock(TronNetDelegate.java:266)
    at org.tron.core.net.service.sync.SyncService.processSyncBlock(SyncService.java:304)
    at org.tron.core.net.service.sync.SyncService.lambda$handleSyncBlock$9(SyncService.java:290)
    at java.util.concurrent.ConcurrentHashMap.forEach(ConcurrentHashMap.java:1597)
    at org.tron.core.net.service.sync.SyncService.handleSyncBlock(SyncService.java:269)
    at org.tron.core.net.service.sync.SyncService.lambda$init$1(SyncService.java:88)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
08:16:31.731 INFO  [sync-handle-block] [DB](PendingManager.java:57) Pending tx size: 0.
08:16:31.731 ERROR [sync-handle-block] [net](TronNetDelegate.java:295) Process block failed, Num:60930077,ID:0000000003a1b81daef570903b07c6d24a9e2dfbbae599021e3cef77f5ad21fc, reason: Validate TransferContract error, balance is not sufficient.
08:16:31.731 ERROR [sync-handle-block] [net](SyncService.java:307) Process sync block Num:60930077,ID:0000000003a1b81daef570903b07c6d24a9e2dfbbae599021e3cef77f5ad21fc failed, type: 10, bad block

61022527

06:01:50.498 ERROR [sync-handle-block] [DB](Manager.java:1323) frozenBalance must be less than or equal to accountBalance
org.tron.core.exception.ContractValidateException: frozenBalance must be less than or equal to accountBalance
    at org.tron.core.actuator.FreezeBalanceV2Actuator.validate(FreezeBalanceV2Actuator.java:140)
    at org.tron.common.runtime.RuntimeImpl.execute(RuntimeImpl.java:63)
    at org.tron.core.db.TransactionTrace.exec(TransactionTrace.java:189)
    at org.tron.core.db.Manager.processTransaction(Manager.java:1455)
    at org.tron.core.db.Manager.processBlock(Manager.java:1745)
    at org.tron.core.db.Manager.applyBlock(Manager.java:1029)
    at org.tron.core.db.Manager.pushBlock(Manager.java:1315)
    at org.tron.core.net.TronNetDelegate.processBlock(TronNetDelegate.java:266)
    at org.tron.core.net.service.sync.SyncService.processSyncBlock(SyncService.java:304)
    at org.tron.core.net.service.sync.SyncService.lambda$handleSyncBlock$9(SyncService.java:290)
    at java.util.concurrent.ConcurrentHashMap.forEach(ConcurrentHashMap.java:1597)
    at org.tron.core.net.service.sync.SyncService.handleSyncBlock(SyncService.java:269)
    at org.tron.core.net.service.sync.SyncService.lambda$init$1(SyncService.java:88)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
06:01:50.498 INFO  [peerClient-34] [net](P2pEventHandlerImpl.java:168) Receive message from  peer: /54.255.110.243:18888, type: BLOCK
Num:61023713,ID:0000000003a325e17b142232450256ed5d79c5677e2d49f26adfda164bb3e0ca, trx size: 159

06:01:50.500 INFO  [sync-handle-block] [DB](PendingManager.java:57) Pending tx size: 0.
06:01:50.500 ERROR [sync-handle-block] [net](TronNetDelegate.java:295) Process block failed, Num:61022527,ID:0000000003a3213f447689202bb198ebbefb821151b56a75050332c5bf525fc7, reason: frozenBalance must be less than or equal to accountBalance
06:01:50.500 ERROR [sync-handle-block] [net](SyncService.java:307) Process sync block Num:61022527,ID:0000000003a3213f447689202bb198ebbefb821151b56a75050332c5bf525fc7 failed, type: 10, bad block

lxcmyf commented 4 months ago

@uiayl It is recommended to check the JDK version; currently, java-tron only supports JDK 1.8. If the database has been accidentally damaged during synchronization, upgrading the version is useless. In this case, the database needs to be replaced with the nearest backup taken before the damage occurred.

lxcmyf commented 4 months ago

@uiayl Has your problem been resolved? If not, please provide a detailed description of your synchronization steps.

vivian1912 commented 3 months ago

Thanks for all your contributions to java-tron. This issue will be closed as there has been no update for a long time. Please feel free to re-open it if you still see the issue, thanks.

surname0990 commented 2 months ago

On starting this container, block syncing stopped. I use the default config, main_net_config.conf.

I ran both LiteFullNode and FullNode, and the result is the same: it stops at a certain block and fails validation.

I installed snapshots for each node version in different directories (FullNode, LiteFullNode):

FullNode Snapshot - https://db-backup-frankurt.s3-eu-central-1.amazonaws.com/FullNode-62646271-4.7.5-output-directory.tgz
LiteFullNode Snapshot - http://3.219.199.168/backup20240619/LiteFullNode_output-directory.tgz

podman run -d --name="java-tron" -v /mnt/vol1/data_full:/java-tron/output-directory -v /mnt/volume_lon1_02/logs:/java-tron/logs -p 8090:8090 -p 18888:18888 -p 18888:18888/udp -p 50051:50051 docker.io/tronprotocol/java-tron:GreatVoyage-v4.7.5 -jvm "{-Xmx27g -Xms27g}"

$ java -version
openjdk version "1.8.0_412"
OpenJDK Runtime Environment (build 1.8.0_412-8u412-ga-1~22.04.1-b08)
OpenJDK 64-Bit Server VM (build 25.412-b08, mixed mode)

Error:

19:44:00.519 INFO [peerClient-14] net Receive message from peer: /185.18.53.58:18888, type: BLOCK_CHAIN_INVENTORY size: 2001, first blockId: Num:62729235,ID:0000000003bd2c13d4e80fa413b4253283047138120806d95b03066dcc68df6c, end blockId: Num:62731235,ID:0000000003bd33e36446ef6010b2aa78e4ee537f9cde39adca6063976c7ccf8c, remain_num: 111186
19:44:00.764 INFO [sync-fetch-block] net Send peer /185.18.53.58:18888 message type: FETCH_INV_DATA invType: BLOCK, size: 1, First hash: 0000000003bd2444e2e73fbb80b161ae85a9b5b42986287e1448dfaa60e4f94c
19:44:00.786 INFO [peerClient-14] net Receive message from peer: /185.18.53.58:18888, type: BLOCK Num:62727236,ID:0000000003bd2444e2e73fbb80b161ae85a9b5b42986287e1448dfaa60e4f94c, trx size: 177

19:44:01.684 INFO [sync-handle-block] DB Block num: 62727236, re-push-size: 0, pending-size: 0, block-tx-size: 177, verify-tx-size: 177
19:44:01.764 INFO [sync-fetch-block] net Send peer /185.18.53.58:18888 message type: FETCH_INV_DATA invType: BLOCK, size: 100, First hash: 0000000003bd24451714377105db30ad1665da8dae93e64a7ab1bfdebe2d4fc0, End hash: 0000000003bd24e156aecf8ccbd3526f5817868c65516ad6046637c092caaf74
19:44:01.766 WARN [sync-handle-block] actuator Balance is not sufficient. Account: TPP6C5f1Hf6VVzawRZL9oW8Ptjp2KfLrUD, balance: 2165510, amount: 1165726, fee: 1000000.
19:44:01.766 ERROR [sync-handle-block] DB Validate TransferContract error, balance is not sufficient.
org.tron.core.exception.ContractValidateException: Validate TransferContract error, balance is not sufficient.
    at org.tron.core.actuator.TransferActuator.validate(TransferActuator.java:162)
    at org.tron.common.runtime.RuntimeImpl.execute(RuntimeImpl.java:63)
    at org.tron.core.db.TransactionTrace.exec(TransactionTrace.java:189)
    at org.tron.core.db.Manager.processTransaction(Manager.java:1450)
    at org.tron.core.db.Manager.processBlock(Manager.java:1740)
    at org.tron.core.db.Manager.applyBlock(Manager.java:1024)
    at org.tron.core.db.Manager.pushBlock(Manager.java:1310)
    at org.tron.core.net.TronNetDelegate.processBlock(TronNetDelegate.java:260)
    at org.tron.core.net.service.sync.SyncService.processSyncBlock(SyncService.java:304)
    at org.tron.core.net.service.sync.SyncService.lambda$handleSyncBlock$9(SyncService.java:290)
    at java.util.concurrent.ConcurrentHashMap.forEach(ConcurrentHashMap.java:1597)
    at org.tron.core.net.service.sync.SyncService.handleSyncBlock(SyncService.java:269)
    at org.tron.core.net.service.sync.SyncService.lambda$init$1(SyncService.java:88)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
19:44:01.766 INFO [sync-handle-block] DB Pending tx size: 0.
19:44:01.766 ERROR [sync-handle-block] net Process block failed, Num:62727236,ID:0000000003bd2444e2e73fbb80b161ae85a9b5b42986287e1448dfaa60e4f94c, reason: Validate TransferContract error, balance is not sufficient.
19:44:01.766 ERROR [sync-handle-block] net Process sync block Num:62727236,ID:0000000003bd2444e2e73fbb80b161ae85a9b5b42986287e1448dfaa60e4f94c failed, type: 10, bad block
19:44:01.766 INFO [sync-handle-block] net Send peer /185.18.53.58:18888 message type: P2P_DISCONNECT reason: BAD_BLOCK