hyperledger-labs / minifabric

Do fabric network the right and easy way.
Apache License 2.0

minifab error when creating channel #77

Closed alirezaght closed 3 years ago

alirezaght commented 3 years ago

Hi, I get the following error when I try to bring up the network and create a channel with the "-e true" parameter. If I remove the "-e true" parameter, everything works fine.

minifab up -e true -o org1.example.com -c test


Minifab Execution Context:
    FABRIC_RELEASE=2.2.0
    CHANNEL_NAME=test
    PEER_DATABASE_TYPE=golevel
    CHAINCODE_LANGUAGE=go
    CHAINCODE_NAME=simple
    CHAINCODE_VERSION=1.0
    CHAINCODE_INIT_REQUIRED=true
    CHAINCODE_PARAMETERS="init","a","200","b","300"
    CHAINCODE_PRIVATE=false
    CHAINCODE_POLICY=
    TRANSIENT_DATA=
    BLOCK_NUMBER=newest
    EXPOSE_ENDPOINTS=true
    CURRENT_ORG=org1.example.com
    HOST_ADDRESSES=37.152.180.255
    WORKING_DIRECTORY: /home/ubuntu/hplatform
...
# Preparing for the following operations: *********************
  verify options, download images, generate certificates, start network, network status, channel create, channel join, anchor update, profile generation, cc install, cc approve, cc commit, cc initialize, discover
.................
# Running operation: ******************************************
  verify options
.
# Running operation: ******************************************
  download images
............
# Running operation: ******************************************
  generate certificates
...................................................................................
# Running operation: ******************************************
  start network
.......................
# Running operation: ******************************************
  network status
......
# Docker node status ******************************************
  bb2d1acb52_cli : Up 7 seconds
  ca.org1.example.com : Up 9 seconds
  orderer.example.com : Up 10 seconds
  peer2.org1.example.com : Up 11 seconds
  peer1.org1.example.com : Up 13 seconds

# Fabric network peer and orderer node health status **********
  peer1.org1.example.com "OK"
  peer2.org1.example.com "OK"
   orderer.example.com "OK"

# Running operation: ******************************************
  channel create
.....
# Run the channel creation script on cli container ************
  non-zero return code
  2020-09-23 08:14:05.500 UTC [common.tools.configtxgen] main -> INFO 001 Loading configuration
  2020-09-23 08:14:05.530 UTC [common.tools.configtxgen.localconfig] Load -> INFO 002 Loaded configuration: /vars/configtx.yaml
  2020-09-23 08:14:05.530 UTC [common.tools.configtxgen] doOutputChannelCreateTx -> INFO 003 Generating new channel configtx
  2020-09-23 08:14:05.533 UTC [common.tools.configtxgen] doOutputChannelCreateTx -> INFO 004 Writing new channel tx
  Error: failed to create deliver client for orderer: orderer client failed to connect to 37.152.180.255:7003: failed to create new connection: context deadline exceeded

# STATS *******************************************************
minifab: ok=180 failed=1

My spec.yaml file:

fabric:
  cas:
  - "ca.org1.example.com"
  peers:
  - "peer1.org1.example.com"
  - "peer2.org1.example.com"
  orderers:
  - "orderer.example.com"
  settings:
    ca:
      FABRIC_LOGGING_SPEC: INFO
    peer:
      FABRIC_LOGGING_SPEC: INFO
    orderer:
      FABRIC_LOGGING_SPEC: INFO
litong01 commented 3 years ago

Check the orderer logs to see if there are any errors. Also check whether your port 7003 is occupied by another app.
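
For example, something like this (assuming netstat is available; ss -nltp works too):

docker logs orderer.example.com 2>&1 | grep -i -E 'error|warn'   # scan the orderer output for problems
netstat -nltp | grep 7003                                        # is anything already listening on 7003?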

litong01 commented 3 years ago

When you use the -e true flag, minifabric by default exposes nodes on host ports starting at 7000, allocating one port per node (peers, orderers, etc.). If a port such as 7003 happens to already be occupied by another app, minifabric would not know. So you can try this option to use a different set of ports:

-e 7100

This flag tells minifabric to allocate host ports starting at 7100 (7100, 7101, 7102, and so on for the nodes), which might avoid the port conflict. You can give it a try after you do minifab cleanup, as shown below.
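
That is:

minifab cleanup                                  # tear down the current network and its volumes
minifab up -e 7100 -o org1.example.com -c test   # bring it back up with ports allocated from 7100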

alirezaght commented 3 years ago

No other application is using that port, as you can see here:

root@node-1:/home/ubuntu/hplatform# netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1228/sshd       
tcp6       0      0 :::22                   :::*                    LISTEN      1228/sshd  
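
A re-check while the network is up can confirm the host mappings; with Docker's default userland proxy, the published ports show up as docker-proxy listeners (a sketch, assuming netstat with -p is available):

netstat -nltp | grep -E ':710[0-3]'    # should list docker-proxy on 7100-7103 while the containers are running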

I tried with -e 7100:

root@node-1:/home/ubuntu/hplatform# minifab up -e 7100 -o org1.example.com -c test
Minifab Execution Context:
    FABRIC_RELEASE=2.2.0
    CHANNEL_NAME=test
    PEER_DATABASE_TYPE=golevel
    CHAINCODE_LANGUAGE=go
    CHAINCODE_NAME=simple
    CHAINCODE_VERSION=1.0
    CHAINCODE_INIT_REQUIRED=true
    CHAINCODE_PARAMETERS="init","a","200","b","300"
    CHAINCODE_PRIVATE=false
    CHAINCODE_POLICY=
    TRANSIENT_DATA=
    BLOCK_NUMBER=newest
    EXPOSE_ENDPOINTS=7100
    CURRENT_ORG=org1.example.com
    HOST_ADDRESSES=37.152.180.255
    WORKING_DIRECTORY: /home/ubuntu/hplatform
...
# Preparing for the following operations: *********************
  verify options, download images, generate certificates, start network, network status, channel create, channel join, anchor update, profile generation, cc install, cc approve, cc commit, cc initialize, discover
.................
# Running operation: ******************************************
  verify options
.
# Running operation: ******************************************
  download images
............
# Running operation: ******************************************
  generate certificates
...................................................................................
# Running operation: ******************************************
  start network
.......................
# Running operation: ******************************************
  network status
......
# Docker node status ******************************************
  bb2d1acb52_cli : Up 8 seconds
  ca.org1.example.com : Up 9 seconds
  orderer.example.com : Up 11 seconds
  peer2.org1.example.com : Up 12 seconds
  peer1.org1.example.com : Up 14 seconds

# Fabric network peer and orderer node health status **********
  peer1.org1.example.com "OK"
  peer2.org1.example.com "OK"
   orderer.example.com "OK"

# Running operation: ******************************************
  channel create
.....
# Run the channel creation script on cli container ************
  non-zero return code
  2020-09-23 13:43:16.647 UTC [common.tools.configtxgen] main -> INFO 001 Loading configuration
  2020-09-23 13:43:16.686 UTC [common.tools.configtxgen.localconfig] Load -> INFO 002 Loaded configuration: /vars/configtx.yaml
  2020-09-23 13:43:16.686 UTC [common.tools.configtxgen] doOutputChannelCreateTx -> INFO 003 Generating new channel configtx
  2020-09-23 13:43:16.691 UTC [common.tools.configtxgen] doOutputChannelCreateTx -> INFO 004 Writing new channel tx
  Error: failed to create deliver client for orderer: orderer client failed to connect to 37.152.180.255:7103: failed to create new connection: context deadline exceeded

# STATS *******************************************************
minifab: ok=180 failed=1    

real    1m48.915s
user    1m21.868s
sys 0m20.472s

Here is my docker ps result:

root@node-1:/home/ubuntu/hplatform# docker ps
CONTAINER ID        IMAGE                              COMMAND                  CREATED             STATUS              PORTS                                            NAMES
889a3fbe1a7d        hyperledger/fabric-tools:2.2.0     "/bin/sh"                7 minutes ago       Up 7 minutes                                                         bb2d1acb52_cli
859e4fe06882        hyperledger/fabric-ca:1.4          "/bin/sh -c 'fabric-…"   7 minutes ago       Up 7 minutes        0.0.0.0:7100->7054/tcp                           ca.org1.example.com
36f31f71f033        hyperledger/fabric-orderer:2.2.0   "orderer"                7 minutes ago       Up 7 minutes        0.0.0.0:7103->7050/tcp, 0.0.0.0:8103->7060/tcp   orderer.example.com
8226164b93be        hyperledger/fabric-peer:2.2.0      "peer node start"        7 minutes ago       Up 7 minutes        0.0.0.0:7102->7051/tcp, 0.0.0.0:8102->7061/tcp   peer2.org1.example.com
52b909327ab7        hyperledger/fabric-peer:2.2.0      "peer node start"        7 minutes ago       Up 7 minutes        0.0.0.0:7101->7051/tcp, 0.0.0.0:8101->7061/tcp   peer1.org1.example.com
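
Since the mappings themselves look fine, one quick way to narrow this down is to test whether the mapped orderer port answers on both addresses (a sketch, assuming nc is installed on the host):

nc -zv 127.0.0.1 7103           # via loopback: is the Docker port mapping itself working?
nc -zv 37.152.180.255 7103      # via the public address that the cli container dials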

This is the orderer log:

root@node-1:/home/ubuntu/hplatform# docker logs -f 36f31f71f033
2020-09-23 13:43:01.638 UTC [localconfig] completeInitialization -> INFO 001 Kafka.Version unset, setting to 0.10.2.0
2020-09-23 13:43:01.638 UTC [orderer.common.server] prettyPrintStruct -> INFO 002 Orderer config values:
    General.ListenAddress = "0.0.0.0"
    General.ListenPort = 7050
    General.TLS.Enabled = true
    General.TLS.PrivateKey = "/var/hyperledger/orderer/tls/server.key"
    General.TLS.Certificate = "/var/hyperledger/orderer/tls/server.crt"
    General.TLS.RootCAs = [/var/hyperledger/orderer/tls/ca.crt]
    General.TLS.ClientAuthRequired = false
    General.TLS.ClientRootCAs = []
    General.Cluster.ListenAddress = ""
    General.Cluster.ListenPort = 0
    General.Cluster.ServerCertificate = ""
    General.Cluster.ServerPrivateKey = ""
    General.Cluster.ClientCertificate = "/var/hyperledger/orderer/tls/server.crt"
    General.Cluster.ClientPrivateKey = "/var/hyperledger/orderer/tls/server.key"
    General.Cluster.RootCAs = [/var/hyperledger/orderer/tls/ca.crt]
    General.Cluster.DialTimeout = 5s
    General.Cluster.RPCTimeout = 7s
    General.Cluster.ReplicationBufferSize = 20971520
    General.Cluster.ReplicationPullTimeout = 5s
    General.Cluster.ReplicationRetryTimeout = 5s
    General.Cluster.ReplicationBackgroundRefreshInterval = 5m0s
    General.Cluster.ReplicationMaxRetries = 12
    General.Cluster.SendBufferSize = 10
    General.Cluster.CertExpirationWarningThreshold = 168h0m0s
    General.Cluster.TLSHandshakeTimeShift = 0s
    General.Keepalive.ServerMinInterval = 1m0s
    General.Keepalive.ServerInterval = 2h0m0s
    General.Keepalive.ServerTimeout = 20s
    General.ConnectionTimeout = 0s
    General.GenesisMethod = "file"
    General.GenesisFile = "/var/hyperledger/orderer/orderer.genesis.block"
    General.BootstrapMethod = "file"
    General.BootstrapFile = "/var/hyperledger/orderer/orderer.genesis.block"
    General.Profile.Enabled = false
    General.Profile.Address = "0.0.0.0:6060"
    General.LocalMSPDir = "/var/hyperledger/orderer/msp"
    General.LocalMSPID = "example-com"
    General.BCCSP.ProviderName = "SW"
    General.BCCSP.SwOpts.SecLevel = 256
    General.BCCSP.SwOpts.HashFamily = "SHA2"
    General.BCCSP.SwOpts.Ephemeral = true
    General.BCCSP.SwOpts.FileKeystore.KeyStorePath = ""
    General.BCCSP.SwOpts.DummyKeystore =
    General.BCCSP.SwOpts.InmemKeystore =
    General.Authentication.TimeWindow = 15m0s
    General.Authentication.NoExpirationChecks = false
    FileLedger.Location = "/var/hyperledger/production/orderer"
    FileLedger.Prefix = "hyperledger-fabric-ordererledger"
    Kafka.Retry.ShortInterval = 5s
    Kafka.Retry.ShortTotal = 10m0s
    Kafka.Retry.LongInterval = 5m0s
    Kafka.Retry.LongTotal = 12h0m0s
    Kafka.Retry.NetworkTimeouts.DialTimeout = 10s
    Kafka.Retry.NetworkTimeouts.ReadTimeout = 10s
    Kafka.Retry.NetworkTimeouts.WriteTimeout = 10s
    Kafka.Retry.Metadata.RetryMax = 3
    Kafka.Retry.Metadata.RetryBackoff = 250ms
    Kafka.Retry.Producer.RetryMax = 3
    Kafka.Retry.Producer.RetryBackoff = 100ms
    Kafka.Retry.Consumer.RetryBackoff = 2s
    Kafka.Verbose = false
    Kafka.Version = 0.10.2.0
    Kafka.TLS.Enabled = false
    Kafka.TLS.PrivateKey = ""
    Kafka.TLS.Certificate = ""
    Kafka.TLS.RootCAs = []
    Kafka.TLS.ClientAuthRequired = false
    Kafka.TLS.ClientRootCAs = []
    Kafka.SASLPlain.Enabled = false
    Kafka.SASLPlain.User = ""
    Kafka.SASLPlain.Password = ""
    Kafka.Topic.ReplicationFactor = 3
    Debug.BroadcastTraceDir = ""
    Debug.DeliverTraceDir = ""
    Consensus = map[SnapDir:/var/hyperledger/production/orderer/etcdraft/snapshot WALDir:/var/hyperledger/production/orderer/etcdraft/wal]
    Operations.ListenAddress = "orderer.example.com:7060"
    Operations.TLS.Enabled = false
    Operations.TLS.PrivateKey = ""
    Operations.TLS.Certificate = ""
    Operations.TLS.RootCAs = []
    Operations.TLS.ClientAuthRequired = false
    Operations.TLS.ClientRootCAs = []
    Metrics.Provider = "prometheus"
    Metrics.Statsd.Network = "udp"
    Metrics.Statsd.Address = "127.0.0.1:8125"
    Metrics.Statsd.WriteInterval = 30s
    Metrics.Statsd.Prefix = ""
    ChannelParticipation.Enabled = false
    ChannelParticipation.RemoveStorage = false
2020-09-23 13:43:01.659 UTC [orderer.common.server] initializeServerConfig -> INFO 003 Starting orderer with TLS enabled
2020-09-23 13:43:01.698 UTC [blkstorage] NewProvider -> INFO 004 Creating new file ledger directory at /var/hyperledger/production/orderer/chains
2020-09-23 13:43:01.711 UTC [orderer.common.server] Main -> INFO 005 Bootstrapping the system channel
2020-09-23 13:43:01.712 UTC [blkstorage] newBlockfileMgr -> INFO 006 Getting block information from block storage
2020-09-23 13:43:01.749 UTC [orderer.common.server] selectClusterBootBlock -> INFO 007 Cluster boot block is bootstrap (genesis) block; Blocks Header.Number system-channel=0, bootstrap=0
2020-09-23 13:43:01.761 UTC [orderer.common.server] Main -> INFO 008 Starting with system channel: systemchannel, consensus type: etcdraft
2020-09-23 13:43:01.761 UTC [orderer.common.server] Main -> INFO 009 Setting up cluster
2020-09-23 13:43:01.761 UTC [orderer.common.server] reuseListener -> INFO 00a Cluster listener is not configured, defaulting to use the general listener on port 7050
2020-09-23 13:43:01.780 UTC [orderer.common.cluster] loadVerifier -> INFO 00b Loaded verifier for channel systemchannel from config block at index 0
2020-09-23 13:43:01.802 UTC [orderer.consensus.etcdraft] HandleChain -> INFO 00c EvictionSuspicion not set, defaulting to 10m0s
2020-09-23 13:43:01.803 UTC [orderer.consensus.etcdraft] createOrReadWAL -> INFO 00d No WAL data found, creating new WAL at path '/var/hyperledger/production/orderer/etcdraft/wal/systemchannel' channel=systemchannel node=1
2020-09-23 13:43:01.820 UTC [orderer.commmon.multichannel] Initialize -> INFO 00e Starting system channel 'systemchannel' with genesis block hash 9e4bf1cc27d7f729198963a30e20215c6f3449a3ad62e67cfc2cb43c4ef7a0e2 and orderer type etcdraft
2020-09-23 13:43:01.820 UTC [orderer.consensus.etcdraft] Start -> INFO 00f Starting Raft node channel=systemchannel node=1
2020-09-23 13:43:01.820 UTC [orderer.common.cluster] Configure -> INFO 010 Entering, channel: systemchannel, nodes: []
2020-09-23 13:43:01.821 UTC [orderer.common.cluster] Configure -> INFO 011 Exiting
2020-09-23 13:43:01.821 UTC [orderer.consensus.etcdraft] start -> INFO 012 Starting raft node as part of a new channel channel=systemchannel node=1
2020-09-23 13:43:01.821 UTC [orderer.consensus.etcdraft] becomeFollower -> INFO 013 1 became follower at term 0 channel=systemchannel node=1
2020-09-23 13:43:01.821 UTC [orderer.consensus.etcdraft] newRaft -> INFO 014 newRaft 1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] channel=systemchannel node=1
2020-09-23 13:43:01.821 UTC [orderer.consensus.etcdraft] becomeFollower -> INFO 015 1 became follower at term 1 channel=systemchannel node=1
2020-09-23 13:43:01.822 UTC [orderer.consensus.etcdraft] run -> INFO 016 This node is picked to start campaign channel=systemchannel node=1
2020-09-23 13:43:01.825 UTC [orderer.common.server] Main -> INFO 017 Starting orderer:
 Version: 2.2.0
 Commit SHA: 5ea85bc
 Go version: go1.14.4
 OS/Arch: linux/amd64
2020-09-23 13:43:01.825 UTC [orderer.common.server] Main -> INFO 018 Beginning to serve requests
2020-09-23 13:43:01.829 UTC [orderer.consensus.etcdraft] apply -> INFO 019 Applied config change to add node 1, current nodes in channel: [1] channel=systemchannel node=1
2020-09-23 13:43:02.822 UTC [orderer.consensus.etcdraft] Step -> INFO 01a 1 is starting a new election at term 1 channel=systemchannel node=1
2020-09-23 13:43:02.823 UTC [orderer.consensus.etcdraft] becomePreCandidate -> INFO 01b 1 became pre-candidate at term 1 channel=systemchannel node=1
2020-09-23 13:43:02.823 UTC [orderer.consensus.etcdraft] poll -> INFO 01c 1 received MsgPreVoteResp from 1 at term 1 channel=systemchannel node=1
2020-09-23 13:43:02.823 UTC [orderer.consensus.etcdraft] becomeCandidate -> INFO 01d 1 became candidate at term 2 channel=systemchannel node=1
2020-09-23 13:43:02.823 UTC [orderer.consensus.etcdraft] poll -> INFO 01e 1 received MsgVoteResp from 1 at term 2 channel=systemchannel node=1
2020-09-23 13:43:02.823 UTC [orderer.consensus.etcdraft] becomeLeader -> INFO 01f 1 became leader at term 2 channel=systemchannel node=1
2020-09-23 13:43:02.823 UTC [orderer.consensus.etcdraft] run -> INFO 020 raft.node: 1 elected leader 1 at term 2 channel=systemchannel node=1
2020-09-23 13:43:02.829 UTC [orderer.consensus.etcdraft] run -> INFO 021 Raft leader changed: 0 -> 1 channel=systemchannel node=1
2020-09-23 13:43:02.829 UTC [orderer.consensus.etcdraft] run -> INFO 022 Start accepting requests as Raft leader at block [0] channel=systemchannel node=1
2020-09-23 13:43:02.829 UTC [orderer.consensus.etcdraft] run -> INFO 023 Leader 1 is present, quit campaign channel=systemchannel node=1

And this is the log of one of the peers:

root@node-1:/home/ubuntu/hplatform# docker logs -f 8226164b93be
2020-09-23 13:43:00.155 UTC [nodeCmd] serve -> INFO 001 Starting peer:
 Version: 2.2.0
 Commit SHA: 5ea85bc
 Go version: go1.14.4
 OS/Arch: linux/amd64
 Chaincode:
  Base Docker Label: org.hyperledger.fabric
  Docker Namespace: hyperledger
2020-09-23 13:43:00.156 UTC [peer] getLocalAddress -> INFO 002 Auto-detected peer address: 172.29.0.3:7051
2020-09-23 13:43:00.157 UTC [peer] getLocalAddress -> INFO 003 Returning peer2.org1.example.com:7051
2020-09-23 13:43:00.170 UTC [nodeCmd] initGrpcSemaphores -> INFO 004 concurrency limit for endorser service is 2500
2020-09-23 13:43:00.170 UTC [nodeCmd] initGrpcSemaphores -> INFO 005 concurrency limit for deliver service is 2500
2020-09-23 13:43:00.170 UTC [nodeCmd] serve -> INFO 006 Starting peer with TLS enabled
2020-09-23 13:43:00.216 UTC [ledgermgmt] NewLedgerMgr -> INFO 007 Initializing LedgerMgr
2020-09-23 13:43:00.284 UTC [leveldbhelper] openDBAndCheckFormat -> INFO 008 DB is empty Setting db format as 2.0
2020-09-23 13:43:00.292 UTC [blkstorage] NewProvider -> INFO 009 Creating new file ledger directory at /var/hyperledger/production/ledgersData/chains/chains
2020-09-23 13:43:00.348 UTC [leveldbhelper] openDBAndCheckFormat -> INFO 00a DB is empty Setting db format as 2.0
2020-09-23 13:43:00.418 UTC [leveldbhelper] openDBAndCheckFormat -> INFO 00b DB is empty Setting db format as 2.0
2020-09-23 13:43:00.430 UTC [ledgermgmt] NewLedgerMgr -> INFO 00c Initialized LedgerMgr
2020-09-23 13:43:00.443 UTC [gossip.service] New -> INFO 00d Initialize gossip with endpoint peer2.org1.example.com:7051
2020-09-23 13:43:00.446 UTC [gossip.gossip] New -> INFO 00e Creating gossip service with self membership of Endpoint: 37.152.180.255:7102, InternalEndpoint: peer2.org1.example.com:7051, PKI-ID: 9b5d8192ef96fd0d59908bfcfc2fe9b21f90cba245007718f2a7eabba4c2382d, Metadata: 
2020-09-23 13:43:00.447 UTC [gossip.gossip] start -> INFO 00f Gossip instance peer2.org1.example.com:7051 started
2020-09-23 13:43:00.451 UTC [lifecycle] InitializeLocalChaincodes -> INFO 010 Initialized lifecycle cache with 0 already installed chaincodes
2020-09-23 13:43:00.452 UTC [nodeCmd] computeChaincodeEndpoint -> INFO 011 Entering computeChaincodeEndpoint with peerHostname: peer2.org1.example.com
2020-09-23 13:43:00.452 UTC [nodeCmd] computeChaincodeEndpoint -> INFO 012 Exit with ccEndpoint: peer2.org1.example.com:7052
2020-09-23 13:43:00.461 UTC [sccapi] DeploySysCC -> INFO 013 deploying system chaincode 'lscc'
2020-09-23 13:43:00.462 UTC [sccapi] DeploySysCC -> INFO 014 deploying system chaincode 'cscc'
2020-09-23 13:43:00.462 UTC [sccapi] DeploySysCC -> INFO 015 deploying system chaincode 'qscc'
2020-09-23 13:43:00.462 UTC [sccapi] DeploySysCC -> INFO 016 deploying system chaincode '_lifecycle'
2020-09-23 13:43:00.463 UTC [nodeCmd] serve -> INFO 017 Deployed system chaincodes
2020-09-23 13:43:00.463 UTC [discovery] NewService -> INFO 018 Created with config TLS: true, authCacheMaxSize: 1000, authCachePurgeRatio: 0.750000
2020-09-23 13:43:00.463 UTC [nodeCmd] registerDiscoveryService -> INFO 019 Discovery service activated
2020-09-23 13:43:00.464 UTC [nodeCmd] serve -> INFO 01a Starting peer with ID=[peer2.org1.example.com], network ID=[dev], address=[peer2.org1.example.com:7051]
2020-09-23 13:43:00.464 UTC [nodeCmd] serve -> INFO 01b Started peer with ID=[peer2.org1.example.com], network ID=[dev], address=[peer2.org1.example.com:7051]
2020-09-23 13:43:00.464 UTC [kvledger] LoadPreResetHeight -> INFO 01c Loading prereset height from path [/var/hyperledger/production/ledgersData/chains]
2020-09-23 13:43:00.465 UTC [blkstorage] preResetHtFiles -> INFO 01d No active channels passed
2020-09-23 13:43:03.450 UTC [gossip.discovery] func1 -> WARN 01e Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:43:31.453 UTC [gossip.discovery] func1 -> WARN 01f Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:43:59.458 UTC [gossip.discovery] func1 -> WARN 020 Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:44:27.464 UTC [gossip.discovery] func1 -> WARN 021 Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:44:55.469 UTC [gossip.discovery] func1 -> WARN 022 Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:45:23.471 UTC [gossip.discovery] func1 -> WARN 023 Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:45:51.473 UTC [gossip.discovery] func1 -> WARN 024 Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:46:19.476 UTC [gossip.discovery] func1 -> WARN 025 Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:46:47.478 UTC [gossip.discovery] func1 -> WARN 026 Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:47:15.480 UTC [gossip.discovery] func1 -> WARN 027 Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:47:43.483 UTC [gossip.discovery] func1 -> WARN 028 Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:48:11.485 UTC [gossip.discovery] func1 -> WARN 029 Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:48:39.487 UTC [gossip.discovery] func1 -> WARN 02a Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:49:07.490 UTC [gossip.discovery] func1 -> WARN 02b Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:49:35.492 UTC [gossip.discovery] func1 -> WARN 02c Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:50:03.495 UTC [gossip.discovery] func1 -> WARN 02d Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:50:31.497 UTC [gossip.discovery] func1 -> WARN 02e Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:50:59.501 UTC [gossip.discovery] func1 -> WARN 02f Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:51:27.505 UTC [gossip.discovery] func1 -> WARN 030 Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:51:55.507 UTC [gossip.discovery] func1 -> WARN 031 Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:52:23.511 UTC [gossip.discovery] func1 -> WARN 032 Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:52:51.515 UTC [gossip.discovery] func1 -> WARN 033 Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:53:19.518 UTC [gossip.discovery] func1 -> WARN 034 Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:53:47.520 UTC [gossip.discovery] func1 -> WARN 035 Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:54:15.523 UTC [gossip.discovery] func1 -> WARN 036 Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:54:43.525 UTC [gossip.discovery] func1 -> WARN 037 Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 13:55:11.528 UTC [gossip.discovery] func1 -> WARN 038 Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
alirezaght commented 3 years ago

I changed FABRIC_LOGGING_SPEC to DEBUG in spec.yaml (snippet below); everything in the orderer is OK and there were no errors. The log of one of my peers follows the snippet:
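
For reference, the change is just the logging level under settings in spec.yaml, e.g.:

  settings:
    ca:
      FABRIC_LOGGING_SPEC: DEBUG
    peer:
      FABRIC_LOGGING_SPEC: DEBUG
    orderer:
      FABRIC_LOGGING_SPEC: DEBUG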

2020-09-23 14:03:54.289 UTC [bccsp] GetDefault -> DEBU 001 Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
2020-09-23 14:03:54.309 UTC [bccsp] GetDefault -> DEBU 002 Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
2020-09-23 14:03:54.316 UTC [bccsp_sw] openKeyStore -> DEBU 003 KeyStore opened at [/etc/hyperledger/fabric/msp/keystore]...done
2020-09-23 14:03:54.316 UTC [msp] getPemMaterialFromDir -> DEBU 004 Reading directory /etc/hyperledger/fabric/msp/signcerts
2020-09-23 14:03:54.316 UTC [msp] getPemMaterialFromDir -> DEBU 005 Inspecting file /etc/hyperledger/fabric/msp/signcerts/peer2.org1.example.com-cert.pem
2020-09-23 14:03:54.316 UTC [msp] getPemMaterialFromDir -> DEBU 006 Reading directory /etc/hyperledger/fabric/msp/cacerts
2020-09-23 14:03:54.316 UTC [msp] getPemMaterialFromDir -> DEBU 007 Inspecting file /etc/hyperledger/fabric/msp/cacerts/ca.org1.example.com-cert.pem
2020-09-23 14:03:54.317 UTC [msp] getPemMaterialFromDir -> DEBU 008 Reading directory /etc/hyperledger/fabric/msp/admincerts
2020-09-23 14:03:54.317 UTC [msp] getPemMaterialFromDir -> DEBU 009 Inspecting file /etc/hyperledger/fabric/msp/admincerts/Admin@org1.example.com-cert.pem
2020-09-23 14:03:54.317 UTC [msp] getPemMaterialFromDir -> DEBU 00a Reading directory /etc/hyperledger/fabric/msp/intermediatecerts
2020-09-23 14:03:54.317 UTC [msp] getMspConfig -> DEBU 00b Intermediate certs folder not found at [/etc/hyperledger/fabric/msp/intermediatecerts]. Skipping. [stat /etc/hyperledger/fabric/msp/intermediatecerts: no such file or directory]
2020-09-23 14:03:54.317 UTC [msp] getPemMaterialFromDir -> DEBU 00c Reading directory /etc/hyperledger/fabric/msp/tlscacerts
2020-09-23 14:03:54.317 UTC [msp] getPemMaterialFromDir -> DEBU 00d Inspecting file /etc/hyperledger/fabric/msp/tlscacerts/tlsca.org1.example.com-cert.pem
2020-09-23 14:03:54.318 UTC [msp] getPemMaterialFromDir -> DEBU 00e Reading directory /etc/hyperledger/fabric/msp/tlsintermediatecerts
2020-09-23 14:03:54.318 UTC [msp] getMspConfig -> DEBU 00f TLS intermediate certs folder not found at [/etc/hyperledger/fabric/msp/tlsintermediatecerts]. Skipping. [stat /etc/hyperledger/fabric/msp/tlsintermediatecerts: no such file or directory]
2020-09-23 14:03:54.318 UTC [msp] getPemMaterialFromDir -> DEBU 010 Reading directory /etc/hyperledger/fabric/msp/crls
2020-09-23 14:03:54.318 UTC [msp] getMspConfig -> DEBU 011 crls folder not found at [/etc/hyperledger/fabric/msp/crls]. Skipping. [stat /etc/hyperledger/fabric/msp/crls: no such file or directory]
2020-09-23 14:03:54.318 UTC [msp] getMspConfig -> DEBU 012 Loading NodeOUs
2020-09-23 14:03:54.319 UTC [msp] newBccspMsp -> DEBU 013 Creating BCCSP-based MSP instance
2020-09-23 14:03:54.319 UTC [msp] New -> DEBU 014 Creating Cache-MSP instance
2020-09-23 14:03:54.319 UTC [msp] loadLocalMSP -> DEBU 015 Created new local MSP
2020-09-23 14:03:54.319 UTC [msp] Setup -> DEBU 016 Setting up MSP instance org1-example-com
2020-09-23 14:03:54.320 UTC [msp.identity] newIdentity -> DEBU 017 Creating identity instance for cert -----BEGIN CERTIFICATE-----
MIICeTCCAh+gAwIBAgIUMP/cRxbEV6J49XWCLdVVUh9r4bowCgYIKoZIzj0EAwIw
cTELMAkGA1UEBhMCVVMxFzAVBgNVBAgMDk5vcnRoIENhcm9saW5hMRAwDgYDVQQH
DAdSYWxlaWdoMRkwFwYDVQQKDBBvcmcxLmV4YW1wbGUuY29tMRwwGgYDVQQDDBNj
YS5vcmcxLmV4YW1wbGUuY29tMB4XDTIwMDkyMzE0MDMwOFoXDTMwMDkyMTE0MDMw
OFowcTELMAkGA1UEBhMCVVMxFzAVBgNVBAgMDk5vcnRoIENhcm9saW5hMRAwDgYD
VQQHDAdSYWxlaWdoMRkwFwYDVQQKDBBvcmcxLmV4YW1wbGUuY29tMRwwGgYDVQQD
DBNjYS5vcmcxLmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE
4q8fWzv106MIInDsVp8uEN745OTU3JrCS11w0wgzAMQjQ87fcSoD83SCJ7S8f4KX
6i1VVTJunwxviAdX3xsUV6OBlDCBkTAdBgNVHQ4EFgQUpThL8IlY8+0NfI+d458N
VCvovAowHwYDVR0jBBgwFoAUpThL8IlY8+0NfI+d458NVCvovAowDwYDVR0TAQH/
BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAaYwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsG
AQUFBwMCMA8GA1UdEQQIMAaHBCWYtP8wCgYIKoZIzj0EAwIDSAAwRQIgOEizWcbl
L4+c8pOBoBxBvJWBt2NdQ3rjaAG00BG0UNkCIQCChO7KwXcgwTKfU1mGI+wi7R4h
F2nvGJg+djUvkLvbdg==
-----END CERTIFICATE-----
2020-09-23 14:03:54.321 UTC [msp.identity] newIdentity -> DEBU 018 Creating identity instance for cert -----BEGIN CERTIFICATE-----
MIICRTCCAeugAwIBAgIUKork/27a5x8uKVpaCcokCkpigpYwCgYIKoZIzj0EAwIw
cTELMAkGA1UEBhMCVVMxFzAVBgNVBAgMDk5vcnRoIENhcm9saW5hMRAwDgYDVQQH
DAdSYWxlaWdoMRkwFwYDVQQKDBBvcmcxLmV4YW1wbGUuY29tMRwwGgYDVQQDDBNj
YS5vcmcxLmV4YW1wbGUuY29tMB4XDTIwMDkyMzE0MDMzM1oXDTMwMDYyMzE0MDMz
M1owaDELMAkGA1UEBhMCVVMxFzAVBgNVBAgMDk5vcnRoIENhcm9saW5hMRAwDgYD
VQQHDAdSYWxlaWdoMQ0wCwYDVQQLDARwZWVyMR8wHQYDVQQDDBZwZWVyMi5vcmcx
LmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAExIa5mEOant0j
f2wvDHwTRtjQdeq4n7Grlur0nQnqZpQXunsmIE72LuH7MzyUoC0wPAdfCVvn6XGR
De/TbbIK+aNqMGgwDgYDVR0PAQH/BAQDAgeAMAwGA1UdEwEB/wQCMAAwJwYDVR0R
BCAwHocEJZi0/4IWcGVlcjIub3JnMS5leGFtcGxlLmNvbTAfBgNVHSMEGDAWgBSl
OEvwiVjz7Q18j53jnw1UK+i8CjAKBggqhkjOPQQDAgNIADBFAiASEsxNB0LM0Fud
0AShTKcjVmCXlmrut6EmltIhj9MSnQIhAPgdOk/vmcESlGM1H+EQbVUvxF6ODPcm
ySC6xzTn/zHy
-----END CERTIFICATE-----
2020-09-23 14:03:54.322 UTC [msp.identity] newIdentity -> DEBU 019 Creating identity instance for cert -----BEGIN CERTIFICATE-----
MIICRDCCAeugAwIBAgIUKork/27a5x8uKVpaCcokCkpigpYwCgYIKoZIzj0EAwIw
cTELMAkGA1UEBhMCVVMxFzAVBgNVBAgMDk5vcnRoIENhcm9saW5hMRAwDgYDVQQH
DAdSYWxlaWdoMRkwFwYDVQQKDBBvcmcxLmV4YW1wbGUuY29tMRwwGgYDVQQDDBNj
YS5vcmcxLmV4YW1wbGUuY29tMB4XDTIwMDkyMzE0MDMzM1oXDTMwMDYyMzE0MDMz
M1owaDELMAkGA1UEBhMCVVMxFzAVBgNVBAgMDk5vcnRoIENhcm9saW5hMRAwDgYD
VQQHDAdSYWxlaWdoMQ0wCwYDVQQLDARwZWVyMR8wHQYDVQQDDBZwZWVyMi5vcmcx
LmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAExIa5mEOant0j
f2wvDHwTRtjQdeq4n7Grlur0nQnqZpQXunsmIE72LuH7MzyUoC0wPAdfCVvn6XGR
De/TbbIK+aNqMGgwDgYDVR0PAQH/BAQDAgeAMAwGA1UdEwEB/wQCMAAwJwYDVR0R
BCAwHocEJZi0/4IWcGVlcjIub3JnMS5leGFtcGxlLmNvbTAfBgNVHSMEGDAWgBSl
OEvwiVjz7Q18j53jnw1UK+i8CjAKBggqhkjOPQQDAgNHADBEAiASEsxNB0LM0Fud
0AShTKcjVmCXlmrut6EmltIhj9MSnQIgB+LFrxBmPu5rnMrgHu+Sqo0inB+aIHe7
0v8DjhRj818=
-----END CERTIFICATE-----
2020-09-23 14:03:54.322 UTC [msp] setupSigningIdentity -> DEBU 01a Signing identity expires at 2030-06-23 14:03:33 +0000 UTC
2020-09-23 14:03:54.331 UTC [msp.identity] newIdentity -> DEBU 01b Creating identity instance for cert -----BEGIN CERTIFICATE-----
MIICRjCCAeygAwIBAgIUKork/27a5x8uKVpaCcokCkpigpQwCgYIKoZIzj0EAwIw
cTELMAkGA1UEBhMCVVMxFzAVBgNVBAgMDk5vcnRoIENhcm9saW5hMRAwDgYDVQQH
DAdSYWxlaWdoMRkwFwYDVQQKDBBvcmcxLmV4YW1wbGUuY29tMRwwGgYDVQQDDBNj
YS5vcmcxLmV4YW1wbGUuY29tMB4XDTIwMDkyMzE0MDMxNVoXDTMwMDYyMzE0MDMx
NVowaTELMAkGA1UEBhMCVVMxFzAVBgNVBAgMDk5vcnRoIENhcm9saW5hMRAwDgYD
VQQHDAdSYWxlaWdoMQ4wDAYDVQQLDAVhZG1pbjEfMB0GA1UEAwwWQWRtaW5Ab3Jn
MS5leGFtcGxlLmNvbTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABEqqbh93YUr3
mUGdlcrMAUDYlZiZXzofvSzejz7J3riOn+Hwa0OLnScCOq5ACpx4DOPIUvJHXuen
IAQGOTtCcrejajBoMA4GA1UdDwEB/wQEAwIHgDAMBgNVHRMBAf8EAjAAMCcGA1Ud
EQQgMB6HBCWYtP+CFkFkbWluQG9yZzEuZXhhbXBsZS5jb20wHwYDVR0jBBgwFoAU
pThL8IlY8+0NfI+d458NVCvovAowCgYIKoZIzj0EAwIDSAAwRQIhAPG8jdczAyiW
vy5tZJT+7sm0kows26ESLoblj6QhgOaiAiBgYrhYrgWasCiTJn0+J9WUc1Ec+mh9
SzEKRquCSpbDNA==
-----END CERTIFICATE-----
2020-09-23 14:03:54.332 UTC [msp] hasOURole -> DEBU 01c MSP org1-example-com checking if the identity is a client
2020-09-23 14:03:54.332 UTC [msp] getCertificationChain -> DEBU 01d MSP org1-example-com getting certification chain
2020-09-23 14:03:54.332 UTC [msp] hasOURole -> DEBU 01e MSP org1-example-com checking if the identity is a client
2020-09-23 14:03:54.332 UTC [msp] getCertificationChain -> DEBU 01f MSP org1-example-com getting certification chain
2020-09-23 14:03:54.333 UTC [nodeCmd] serve -> INFO 020 Starting peer:
 Version: 2.2.0
 Commit SHA: 5ea85bc
 Go version: go1.14.4
 OS/Arch: linux/amd64
 Chaincode:
  Base Docker Label: org.hyperledger.fabric
  Docker Namespace: hyperledger
2020-09-23 14:03:54.335 UTC [peer] getLocalAddress -> INFO 021 Auto-detected peer address: 172.30.0.3:7051
2020-09-23 14:03:54.336 UTC [peer] getLocalAddress -> INFO 022 Returning peer2.org1.example.com:7051
2020-09-23 14:03:54.348 UTC [msp] GetDefaultSigningIdentity -> DEBU 023 Obtaining default signing identity
2020-09-23 14:03:54.348 UTC [msp.identity] Sign -> DEBU 024 Sign: plaintext: 00000000000000000000000000000000...00000000000000000000000000000000 
2020-09-23 14:03:54.349 UTC [msp.identity] Sign -> DEBU 025 Sign: digest: 66687AADF862BD776C8FC18B8E9F8E20089714856EE233B3902A591D0D5F2925 
2020-09-23 14:03:54.362 UTC [nodeCmd] initGrpcSemaphores -> INFO 026 concurrency limit for endorser service is 2500
2020-09-23 14:03:54.362 UTC [nodeCmd] initGrpcSemaphores -> INFO 027 concurrency limit for deliver service is 2500
2020-09-23 14:03:54.362 UTC [nodeCmd] serve -> INFO 028 Starting peer with TLS enabled
2020-09-23 14:03:54.364 UTC [kvledger.util] CreateDirIfMissing -> DEBU 029 CreateDirIfMissing [/var/hyperledger/production/transientstore/]
2020-09-23 14:03:54.365 UTC [kvledger.util] logDirStatus -> DEBU 02a Before creating dir - [/var/hyperledger/production/transientstore/] does not exist
2020-09-23 14:03:54.365 UTC [kvledger.util] logDirStatus -> DEBU 02b After creating dir - [/var/hyperledger/production/transientstore/] exists
2020-09-23 14:03:54.447 UTC [leveldbhelper] openDBAndCheckFormat -> DEBU 02c Checking for db format at path [/var/hyperledger/production/transientstore]
2020-09-23 14:03:54.447 UTC [leveldbhelper] openDBAndCheckFormat -> DEBU 02d format is latest, nothing to do
2020-09-23 14:03:54.451 UTC [msp] GetDefaultSigningIdentity -> DEBU 02e Obtaining default signing identity
2020-09-23 14:03:54.454 UTC [ledgermgmt] NewLedgerMgr -> INFO 02f Initializing LedgerMgr
2020-09-23 14:03:54.454 UTC [kvledger.util] CreateDirIfMissing -> DEBU 030 CreateDirIfMissing [/var/hyperledger/production/ledgersData/fileLock/]
2020-09-23 14:03:54.454 UTC [kvledger.util] logDirStatus -> DEBU 031 Before creating dir - [/var/hyperledger/production/ledgersData/fileLock/] does not exist
2020-09-23 14:03:54.454 UTC [kvledger.util] logDirStatus -> DEBU 032 After creating dir - [/var/hyperledger/production/ledgersData/fileLock/] exists
2020-09-23 14:03:54.483 UTC [kvledger.util] CreateDirIfMissing -> DEBU 033 CreateDirIfMissing [/var/hyperledger/production/ledgersData/ledgerProvider/]
2020-09-23 14:03:54.483 UTC [kvledger.util] logDirStatus -> DEBU 034 Before creating dir - [/var/hyperledger/production/ledgersData/ledgerProvider/] does not exist
2020-09-23 14:03:54.484 UTC [kvledger.util] logDirStatus -> DEBU 035 After creating dir - [/var/hyperledger/production/ledgersData/ledgerProvider/] exists
2020-09-23 14:03:54.530 UTC [kvledger.util] CreateDirIfMissing -> DEBU 036 CreateDirIfMissing [/var/hyperledger/production/ledgersData/chains/index/]
2020-09-23 14:03:54.530 UTC [kvledger.util] logDirStatus -> DEBU 037 Before creating dir - [/var/hyperledger/production/ledgersData/chains/index/] does not exist
2020-09-23 14:03:54.531 UTC [kvledger.util] logDirStatus -> DEBU 038 After creating dir - [/var/hyperledger/production/ledgersData/chains/index/] exists
2020-09-23 14:03:54.564 UTC [leveldbhelper] openDBAndCheckFormat -> INFO 039 DB is empty Setting db format as 2.0
2020-09-23 14:03:54.576 UTC [blkstorage] NewProvider -> INFO 03a Creating new file ledger directory at /var/hyperledger/production/ledgersData/chains/chains
2020-09-23 14:03:54.577 UTC [kvledger.util] CreateDirIfMissing -> DEBU 03b CreateDirIfMissing [/var/hyperledger/production/ledgersData/pvtdataStore/]
2020-09-23 14:03:54.577 UTC [kvledger.util] logDirStatus -> DEBU 03c Before creating dir - [/var/hyperledger/production/ledgersData/pvtdataStore/] does not exist
2020-09-23 14:03:54.577 UTC [kvledger.util] logDirStatus -> DEBU 03d After creating dir - [/var/hyperledger/production/ledgersData/pvtdataStore/] exists
2020-09-23 14:03:54.612 UTC [leveldbhelper] openDBAndCheckFormat -> DEBU 03e Checking for db format at path [/var/hyperledger/production/ledgersData/pvtdataStore]
2020-09-23 14:03:54.613 UTC [leveldbhelper] openDBAndCheckFormat -> DEBU 03f format is latest, nothing to do
2020-09-23 14:03:54.613 UTC [history] NewDBProvider -> DEBU 040 constructing HistoryDBProvider dbPath=/var/hyperledger/production/ledgersData/historyLeveldb
2020-09-23 14:03:54.613 UTC [kvledger.util] CreateDirIfMissing -> DEBU 041 CreateDirIfMissing [/var/hyperledger/production/ledgersData/historyLeveldb/]
2020-09-23 14:03:54.614 UTC [kvledger.util] logDirStatus -> DEBU 042 Before creating dir - [/var/hyperledger/production/ledgersData/historyLeveldb/] does not exist
2020-09-23 14:03:54.614 UTC [kvledger.util] logDirStatus -> DEBU 043 After creating dir - [/var/hyperledger/production/ledgersData/historyLeveldb/] exists
2020-09-23 14:03:54.650 UTC [leveldbhelper] openDBAndCheckFormat -> INFO 044 DB is empty Setting db format as 2.0
2020-09-23 14:03:54.662 UTC [confighistory] newDBProvider -> DEBU 045 Opening db for config history: db path = /var/hyperledger/production/ledgersData/configHistory
2020-09-23 14:03:54.663 UTC [kvledger.util] CreateDirIfMissing -> DEBU 046 CreateDirIfMissing [/var/hyperledger/production/ledgersData/configHistory/]
2020-09-23 14:03:54.663 UTC [kvledger.util] logDirStatus -> DEBU 047 Before creating dir - [/var/hyperledger/production/ledgersData/configHistory/] does not exist
2020-09-23 14:03:54.663 UTC [kvledger.util] logDirStatus -> DEBU 048 After creating dir - [/var/hyperledger/production/ledgersData/configHistory/] exists
2020-09-23 14:03:54.697 UTC [leveldbhelper] openDBAndCheckFormat -> DEBU 049 Checking for db format at path [/var/hyperledger/production/ledgersData/configHistory]
2020-09-23 14:03:54.697 UTC [leveldbhelper] openDBAndCheckFormat -> DEBU 04a format is latest, nothing to do
2020-09-23 14:03:54.697 UTC [kvledger.util] CreateDirIfMissing -> DEBU 04b CreateDirIfMissing [/var/hyperledger/production/ledgersData/bookkeeper/]
2020-09-23 14:03:54.698 UTC [kvledger.util] logDirStatus -> DEBU 04c Before creating dir - [/var/hyperledger/production/ledgersData/bookkeeper/] does not exist
2020-09-23 14:03:54.698 UTC [kvledger.util] logDirStatus -> DEBU 04d After creating dir - [/var/hyperledger/production/ledgersData/bookkeeper/] exists
2020-09-23 14:03:54.736 UTC [leveldbhelper] openDBAndCheckFormat -> DEBU 04e Checking for db format at path [/var/hyperledger/production/ledgersData/bookkeeper]
2020-09-23 14:03:54.737 UTC [leveldbhelper] openDBAndCheckFormat -> DEBU 04f format is latest, nothing to do
2020-09-23 14:03:54.737 UTC [stateleveldb] NewVersionedDBProvider -> DEBU 050 constructing VersionedDBProvider dbPath=/var/hyperledger/production/ledgersData/stateLeveldb
2020-09-23 14:03:54.737 UTC [kvledger.util] CreateDirIfMissing -> DEBU 051 CreateDirIfMissing [/var/hyperledger/production/ledgersData/stateLeveldb/]
2020-09-23 14:03:54.738 UTC [kvledger.util] logDirStatus -> DEBU 052 Before creating dir - [/var/hyperledger/production/ledgersData/stateLeveldb/] does not exist
2020-09-23 14:03:54.738 UTC [kvledger.util] logDirStatus -> DEBU 053 After creating dir - [/var/hyperledger/production/ledgersData/stateLeveldb/] exists
2020-09-23 14:03:54.771 UTC [leveldbhelper] openDBAndCheckFormat -> INFO 054 DB is empty Setting db format as 2.0
2020-09-23 14:03:54.784 UTC [kvledger] recoverUnderConstructionLedger -> DEBU 055 Recovering under construction ledger
2020-09-23 14:03:54.785 UTC [kvledger] recoverUnderConstructionLedger -> DEBU 056 No under construction ledger found. Quitting recovery
2020-09-23 14:03:54.793 UTC [ledgermgmt] NewLedgerMgr -> INFO 057 Initialized LedgerMgr
2020-09-23 14:03:54.806 UTC [gossip.service] New -> INFO 058 Initialize gossip with endpoint peer2.org1.example.com:7051
2020-09-23 14:03:54.806 UTC [msp] DeserializeIdentity -> DEBU 059 Obtaining identity
2020-09-23 14:03:54.807 UTC [msp.identity] newIdentity -> DEBU 05a Creating identity instance for cert -----BEGIN CERTIFICATE-----
MIICRDCCAeugAwIBAgIUKork/27a5x8uKVpaCcokCkpigpYwCgYIKoZIzj0EAwIw
cTELMAkGA1UEBhMCVVMxFzAVBgNVBAgMDk5vcnRoIENhcm9saW5hMRAwDgYDVQQH
DAdSYWxlaWdoMRkwFwYDVQQKDBBvcmcxLmV4YW1wbGUuY29tMRwwGgYDVQQDDBNj
YS5vcmcxLmV4YW1wbGUuY29tMB4XDTIwMDkyMzE0MDMzM1oXDTMwMDYyMzE0MDMz
M1owaDELMAkGA1UEBhMCVVMxFzAVBgNVBAgMDk5vcnRoIENhcm9saW5hMRAwDgYD
VQQHDAdSYWxlaWdoMQ0wCwYDVQQLDARwZWVyMR8wHQYDVQQDDBZwZWVyMi5vcmcx
LmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAExIa5mEOant0j
f2wvDHwTRtjQdeq4n7Grlur0nQnqZpQXunsmIE72LuH7MzyUoC0wPAdfCVvn6XGR
De/TbbIK+aNqMGgwDgYDVR0PAQH/BAQDAgeAMAwGA1UdEwEB/wQCMAAwJwYDVR0R
BCAwHocEJZi0/4IWcGVlcjIub3JnMS5leGFtcGxlLmNvbTAfBgNVHSMEGDAWgBSl
OEvwiVjz7Q18j53jnw1UK+i8CjAKBggqhkjOPQQDAgNHADBEAiASEsxNB0LM0Fud
0AShTKcjVmCXlmrut6EmltIhj9MSnQIgB+LFrxBmPu5rnMrgHu+Sqo0inB+aIHe7
0v8DjhRj818=
-----END CERTIFICATE-----
2020-09-23 14:03:54.808 UTC [msp] Validate -> DEBU 05b MSP org1-example-com validating identity
2020-09-23 14:03:54.808 UTC [msp] getCertificationChain -> DEBU 05c MSP org1-example-com getting certification chain
2020-09-23 14:03:54.809 UTC [gossip.discovery] periodicalSendAlive -> DEBU 05d Sleeping 5s
2020-09-23 14:03:54.809 UTC [gossip.discovery] periodicalReconnectToDead -> DEBU 05e Sleeping 25s
2020-09-23 14:03:54.810 UTC [gossip.gossip] New -> INFO 05f Creating gossip service with self membership of Endpoint: 37.152.180.255:7102, InternalEndpoint: peer2.org1.example.com:7051, PKI-ID: 2bef510422e927a1687fe8444e048a60a8b07eff8393cff6f473bb7f85ddaff2, Metadata: 
2020-09-23 14:03:54.811 UTC [msp.identity] Sign -> DEBU 060 Sign: plaintext: 1801AA0189070A202BEF510422E927A1...455254494649434154452D2D2D2D2D0A 
2020-09-23 14:03:54.811 UTC [msp.identity] Sign -> DEBU 061 Sign: digest: 5CDBF4568BCF2485BC56053697D3E5D14FA947720881363190E58D9BF5D84B13 
2020-09-23 14:03:54.811 UTC [gossip.pull] Add -> DEBU 062 Added +�Q"�'�h�DN�`��~������s��ݯ�, total items: 1
2020-09-23 14:03:54.812 UTC [lifecycle] InitializeLocalChaincodes -> INFO 063 Initialized lifecycle cache with 0 already installed chaincodes
2020-09-23 14:03:54.813 UTC [nodeCmd] computeChaincodeEndpoint -> INFO 064 Entering computeChaincodeEndpoint with peerHostname: peer2.org1.example.com
2020-09-23 14:03:54.813 UTC [nodeCmd] computeChaincodeEndpoint -> INFO 065 Exit with ccEndpoint: peer2.org1.example.com:7052
2020-09-23 14:03:54.816 UTC [gossip.gossip] start -> INFO 066 Gossip instance peer2.org1.example.com:7051 started
2020-09-23 14:03:54.816 UTC [gossip.gossip] syncDiscovery -> DEBU 067 Entering discovery sync with interval 4s
2020-09-23 14:03:54.816 UTC [msp.identity] Sign -> DEBU 068 Sign: plaintext: 18012AAE070A370A1333372E3135322E...455254494649434154452D2D2D2D2D0A 
2020-09-23 14:03:54.817 UTC [msp.identity] Sign -> DEBU 069 Sign: digest: 9E52CC8CC21B22CFD2055347D5514087EBCEA3BEBB8AAAD8CDDC5557DFB38C13 
2020-09-23 14:03:54.817 UTC [msp.identity] Sign -> DEBU 06a Sign: plaintext: 0A1B70656572322E6F7267312E6578616D706C652E636F6D3A37303531 
2020-09-23 14:03:54.817 UTC [msp.identity] Sign -> DEBU 06b Sign: digest: C0477A909C942B81280F52131D4D72B70D1235D7FE4BAEEBD447599D86A3BCB3 
2020-09-23 14:03:54.818 UTC [gossip.discovery] Connect -> DEBU 06c Entering Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata: 
2020-09-23 14:03:54.818 UTC [gossip.discovery] Connect -> DEBU 06d Exiting
2020-09-23 14:03:54.820 UTC [grpc] Infof -> DEBU 06e parsed scheme: ""
2020-09-23 14:03:54.822 UTC [grpc] Infof -> DEBU 06f scheme "" not registered, fallback to default scheme
2020-09-23 14:03:54.823 UTC [grpc] Infof -> DEBU 070 ccResolverWrapper: sending update to cc: {[{37.152.180.255:7101  <nil> 0 <nil>}] <nil> <nil>}
2020-09-23 14:03:54.823 UTC [grpc] Infof -> DEBU 072 ClientConn switching balancer to "pick_first"
2020-09-23 14:03:54.823 UTC [grpc] Infof -> DEBU 073 Channel switches to new LB policy "pick_first"
2020-09-23 14:03:54.823 UTC [grpc] Infof -> DEBU 074 Subchannel Connectivity change to CONNECTING
2020-09-23 14:03:54.824 UTC [grpc] Infof -> DEBU 075 Subchannel picks a new address "37.152.180.255:7101" to connect
2020-09-23 14:03:54.823 UTC [nodeCmd] serve -> DEBU 071 Running peer
2020-09-23 14:03:54.828 UTC [sccapi] DeploySysCC -> INFO 076 deploying system chaincode 'lscc'
2020-09-23 14:03:54.828 UTC [grpc] UpdateSubConnState -> DEBU 077 pickfirstBalancer: HandleSubConnStateChange: 0xc0004e6b80, {CONNECTING <nil>}
2020-09-23 14:03:54.828 UTC [grpc] Infof -> DEBU 079 Channel Connectivity change to CONNECTING
2020-09-23 14:03:54.828 UTC [sccapi] func2 -> DEBU 078 chaincode started for lscc.syscc
2020-09-23 14:03:54.828 UTC [sccapi] func1 -> DEBU 07a starting chaincode-support stream for  lscc.syscc
2020-09-23 14:03:54.829 UTC [chaincode] handleMessage -> DEBU 07b [] Fabric side handling ChaincodeMessage of type: REGISTER in state created
2020-09-23 14:03:54.829 UTC [chaincode] HandleRegister -> DEBU 07c Received REGISTER in state created
2020-09-23 14:03:54.829 UTC [chaincode] Register -> DEBU 07d registered handler complete for chaincode lscc.syscc
2020-09-23 14:03:54.829 UTC [chaincode] HandleRegister -> DEBU 07e Got REGISTER for chaincodeID = lscc.syscc, sending back REGISTERED
2020-09-23 14:03:54.830 UTC [chaincode] HandleRegister -> DEBU 07f Changed state to established for lscc.syscc
2020-09-23 14:03:54.830 UTC [chaincode] sendReady -> DEBU 080 sending READY for chaincode lscc.syscc
2020-09-23 14:03:54.830 UTC [chaincode] sendReady -> DEBU 081 Changed to state ready for chaincode lscc.syscc
2020-09-23 14:03:54.830 UTC [sccapi] DeploySysCC -> INFO 082 deploying system chaincode 'cscc'
2020-09-23 14:03:54.831 UTC [sccapi] func2 -> DEBU 083 chaincode started for cscc.syscc
2020-09-23 14:03:54.831 UTC [sccapi] func1 -> DEBU 084 starting chaincode-support stream for  cscc.syscc
2020-09-23 14:03:54.831 UTC [chaincode] handleMessage -> DEBU 085 [] Fabric side handling ChaincodeMessage of type: REGISTER in state created
2020-09-23 14:03:54.831 UTC [chaincode] HandleRegister -> DEBU 086 Received REGISTER in state created
2020-09-23 14:03:54.831 UTC [chaincode] Register -> DEBU 087 registered handler complete for chaincode cscc.syscc
2020-09-23 14:03:54.831 UTC [chaincode] HandleRegister -> DEBU 088 Got REGISTER for chaincodeID = cscc.syscc, sending back REGISTERED
2020-09-23 14:03:54.831 UTC [chaincode] HandleRegister -> DEBU 089 Changed state to established for cscc.syscc
2020-09-23 14:03:54.831 UTC [chaincode] sendReady -> DEBU 08a sending READY for chaincode cscc.syscc
2020-09-23 14:03:54.831 UTC [chaincode] sendReady -> DEBU 08b Changed to state ready for chaincode cscc.syscc
2020-09-23 14:03:54.832 UTC [sccapi] DeploySysCC -> INFO 08c deploying system chaincode 'qscc'
2020-09-23 14:03:54.832 UTC [sccapi] func2 -> DEBU 08d chaincode started for qscc.syscc
2020-09-23 14:03:54.832 UTC [sccapi] func1 -> DEBU 08e starting chaincode-support stream for  qscc.syscc
2020-09-23 14:03:54.832 UTC [chaincode] handleMessage -> DEBU 08f [] Fabric side handling ChaincodeMessage of type: REGISTER in state created
2020-09-23 14:03:54.832 UTC [chaincode] HandleRegister -> DEBU 090 Received REGISTER in state created
2020-09-23 14:03:54.833 UTC [chaincode] Register -> DEBU 091 registered handler complete for chaincode qscc.syscc
2020-09-23 14:03:54.833 UTC [chaincode] HandleRegister -> DEBU 092 Got REGISTER for chaincodeID = qscc.syscc, sending back REGISTERED
2020-09-23 14:03:54.833 UTC [chaincode] HandleRegister -> DEBU 093 Changed state to established for qscc.syscc
2020-09-23 14:03:54.833 UTC [chaincode] sendReady -> DEBU 094 sending READY for chaincode qscc.syscc
2020-09-23 14:03:54.833 UTC [chaincode] sendReady -> DEBU 095 Changed to state ready for chaincode qscc.syscc
2020-09-23 14:03:54.834 UTC [sccapi] DeploySysCC -> INFO 096 deploying system chaincode '_lifecycle'
2020-09-23 14:03:54.834 UTC [sccapi] func2 -> DEBU 097 chaincode started for _lifecycle.syscc
2020-09-23 14:03:54.834 UTC [sccapi] func1 -> DEBU 098 starting chaincode-support stream for  _lifecycle.syscc
2020-09-23 14:03:54.834 UTC [chaincode] handleMessage -> DEBU 099 [] Fabric side handling ChaincodeMessage of type: REGISTER in state created
2020-09-23 14:03:54.834 UTC [chaincode] HandleRegister -> DEBU 09a Received REGISTER in state created
2020-09-23 14:03:54.835 UTC [chaincode] Register -> DEBU 09b registered handler complete for chaincode _lifecycle.syscc
2020-09-23 14:03:54.835 UTC [chaincode] HandleRegister -> DEBU 09c Got REGISTER for chaincodeID = _lifecycle.syscc, sending back REGISTERED
2020-09-23 14:03:54.835 UTC [chaincode] HandleRegister -> DEBU 09d Changed state to established for _lifecycle.syscc
2020-09-23 14:03:54.835 UTC [chaincode] sendReady -> DEBU 09e sending READY for chaincode _lifecycle.syscc
2020-09-23 14:03:54.836 UTC [chaincode] sendReady -> DEBU 09f Changed to state ready for chaincode _lifecycle.syscc
2020-09-23 14:03:54.836 UTC [nodeCmd] serve -> INFO 0a0 Deployed system chaincodes
2020-09-23 14:03:54.836 UTC [discovery] NewService -> INFO 0a1 Created with config TLS: true, authCacheMaxSize: 1000, authCachePurgeRatio: 0.750000
2020-09-23 14:03:54.836 UTC [nodeCmd] registerDiscoveryService -> INFO 0a2 Discovery service activated
2020-09-23 14:03:54.837 UTC [nodeCmd] serve -> INFO 0a3 Starting peer with ID=[peer2.org1.example.com], network ID=[dev], address=[peer2.org1.example.com:7051]
2020-09-23 14:03:54.837 UTC [nodeCmd] serve -> INFO 0a4 Started peer with ID=[peer2.org1.example.com], network ID=[dev], address=[peer2.org1.example.com:7051]
2020-09-23 14:03:54.838 UTC [kvledger] LoadPreResetHeight -> INFO 0a5 Loading prereset height from path [/var/hyperledger/production/ledgersData/chains]
2020-09-23 14:03:54.838 UTC [blkstorage] LoadPreResetHeight -> DEBU 0a6 Loading Pre-reset heights
2020-09-23 14:03:54.838 UTC [blkstorage] preResetHtFiles -> INFO 0a7 No active channels passed
2020-09-23 14:03:57.820 UTC [grpc] Infof -> DEBU 0a8 Channel Connectivity change to SHUTDOWN
2020-09-23 14:03:57.820 UTC [grpc] Infof -> DEBU 0a9 Subchannel Connectivity change to SHUTDOWN
2020-09-23 14:03:57.821 UTC [grpc] Warningf -> DEBU 0aa grpc: addrConn.createTransport failed to connect to {37.152.180.255:7101  <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 37.152.180.255:7101: operation was canceled". Reconnecting...
2020-09-23 14:03:57.821 UTC [gossip.discovery] func1 -> WARN 0ab Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
2020-09-23 14:03:58.818 UTC [msp.identity] Sign -> DEBU 0ac Sign: plaintext: 18012AAE070A370A1333372E3135322E...455254494649434154452D2D2D2D2D0A 
2020-09-23 14:03:58.819 UTC [msp.identity] Sign -> DEBU 0ad Sign: digest: B969CF3FFCBFEA03CCD6FE6A2309C7C7677ABA95CADCA47AEC20CA2E7C449B9A 
2020-09-23 14:03:58.819 UTC [msp.identity] Sign -> DEBU 0ae Sign: plaintext: 0A1B70656572322E6F7267312E6578616D706C652E636F6D3A37303531 
2020-09-23 14:03:58.819 UTC [msp.identity] Sign -> DEBU 0af Sign: digest: C0477A909C942B81280F52131D4D72B70D1235D7FE4BAEEBD447599D86A3BCB3 
2020-09-23 14:03:59.810 UTC [msp.identity] Sign -> DEBU 0b0 Sign: plaintext: 18012AAE070A370A1333372E3135322E...455254494649434154452D2D2D2D2D0A 
2020-09-23 14:03:59.810 UTC [msp.identity] Sign -> DEBU 0b1 Sign: digest: 6B9D5AA15920829D91632A6428FC9B74EE0AA5196D927E00BC20E89518F452DE 
2020-09-23 14:03:59.811 UTC [msp.identity] Sign -> DEBU 0b2 Sign: plaintext: 0A1B70656572322E6F7267312E6578616D706C652E636F6D3A37303531 
2020-09-23 14:03:59.811 UTC [msp.identity] Sign -> DEBU 0b3 Sign: digest: C0477A909C942B81280F52131D4D72B70D1235D7FE4BAEEBD447599D86A3BCB3 
2020-09-23 14:03:59.811 UTC [gossip.discovery] periodicalSendAlive -> DEBU 0b4 Sleeping 5s
2020-09-23 14:04:02.820 UTC [msp.identity] Sign -> DEBU 0b5 Sign: plaintext: 18012AAE070A370A1333372E3135322E...455254494649434154452D2D2D2D2D0A 
2020-09-23 14:04:02.821 UTC [msp.identity] Sign -> DEBU 0b6 Sign: digest: 698271C52546F8987495D851068334766A5B3C07F97C24DA82BB54A0B0C9C45F 
2020-09-23 14:04:02.822 UTC [msp.identity] Sign -> DEBU 0b7 Sign: plaintext: 0A1B70656572322E6F7267312E6578616D706C652E636F6D3A37303531 
2020-09-23 14:04:02.822 UTC [msp.identity] Sign -> DEBU 0b8 Sign: digest: C0477A909C942B81280F52131D4D72B70D1235D7FE4BAEEBD447599D86A3BCB3 
2020-09-23 14:04:04.812 UTC [msp.identity] Sign -> DEBU 0b9 Sign: plaintext: 18012A470A370A1333372E3135322E31...AFF2120C089A8DD5DBA0D4DB9B161006 
2020-09-23 14:04:04.812 UTC [msp.identity] Sign -> DEBU 0ba Sign: digest: EB7E0D71BFDCA0487ACEE5A2DD5F2F5F041BD6D403FA432F38F1E14641635BD5 
2020-09-23 14:04:04.813 UTC [msp.identity] Sign -> DEBU 0bb Sign: plaintext: 0A1B70656572322E6F7267312E6578616D706C652E636F6D3A37303531 
2020-09-23 14:04:04.813 UTC [msp.identity] Sign -> DEBU 0bc Sign: digest: C0477A909C942B81280F52131D4D72B70D1235D7FE4BAEEBD447599D86A3BCB3 
2020-09-23 14:04:04.813 UTC [gossip.discovery] periodicalSendAlive -> DEBU 0bd Sleeping 5s
2020-09-23 14:04:06.823 UTC [msp.identity] Sign -> DEBU 0be Sign: plaintext: 18012A470A370A1333372E3135322E31...AFF2120C089A8DD5DBA0D4DB9B161007 
2020-09-23 14:04:06.823 UTC [msp.identity] Sign -> DEBU 0bf Sign: digest: D465E318F3985FA9047648CE413662CC929E6A9BB95F412A5F239918291EB031 
2020-09-23 14:04:06.824 UTC [msp.identity] Sign -> DEBU 0c0 Sign: plaintext: 0A1B70656572322E6F7267312E6578616D706C652E636F6D3A37303531 
2020-09-23 14:04:06.824 UTC [msp.identity] Sign -> DEBU 0c1 Sign: digest: C0477A909C942B81280F52131D4D72B70D1235D7FE4BAEEBD447599D86A3BCB3 
2020-09-23 14:04:09.813 UTC [msp.identity] Sign -> DEBU 0c2 Sign: plaintext: 18012A470A370A1333372E3135322E31...AFF2120C089A8DD5DBA0D4DB9B161008 
2020-09-23 14:04:09.816 UTC [msp.identity] Sign -> DEBU 0c3 Sign: digest: 1093F0F5B1FAED8D2585AE708693AC3FB0DEF3E7142805328EA5399B06D4D8ED 
2020-09-23 14:04:09.816 UTC [msp.identity] Sign -> DEBU 0c4 Sign: plaintext: 0A1B70656572322E6F7267312E6578616D706C652E636F6D3A37303531 
2020-09-23 14:04:09.816 UTC [msp.identity] Sign -> DEBU 0c5 Sign: digest: C0477A909C942B81280F52131D4D72B70D1235D7FE4BAEEBD447599D86A3BCB3 
2020-09-23 14:04:09.816 UTC [gossip.discovery] periodicalSendAlive -> DEBU 0c6 Sleeping 5s
2020-09-23 14:04:10.825 UTC [msp.identity] Sign -> DEBU 0c7 Sign: plaintext: 18012A470A370A1333372E3135322E31...AFF2120C089A8DD5DBA0D4DB9B161009 
2020-09-23 14:04:10.825 UTC [msp.identity] Sign -> DEBU 0c8 Sign: digest: E9B7F04F3383B1F719DBE2A40E141A90CB8D1AA96236D0A725B1283095D50039 
2020-09-23 14:04:10.825 UTC [msp.identity] Sign -> DEBU 0c9 Sign: plaintext: 0A1B70656572322E6F7267312E6578616D706C652E636F6D3A37303531 
2020-09-23 14:04:10.826 UTC [msp.identity] Sign -> DEBU 0ca Sign: digest: C0477A909C942B81280F52131D4D72B70D1235D7FE4BAEEBD447599D86A3BCB3 
2020-09-23 14:04:14.817 UTC [msp.identity] Sign -> DEBU 0cb Sign: plaintext: 18012A470A370A1333372E3135322E31...AFF2120C089A8DD5DBA0D4DB9B16100A 
2020-09-23 14:04:14.817 UTC [msp.identity] Sign -> DEBU 0cc Sign: digest: D27D708DC170CECFBDF0C647F234EAA3C26F43197BB72B488021C1B11363AC0D 
2020-09-23 14:04:14.818 UTC [msp.identity] Sign -> DEBU 0cd Sign: plaintext: 0A1B70656572322E6F7267312E6578616D706C652E636F6D3A37303531 
2020-09-23 14:04:14.818 UTC [msp.identity] Sign -> DEBU 0ce Sign: digest: C0477A909C942B81280F52131D4D72B70D1235D7FE4BAEEBD447599D86A3BCB3 
2020-09-23 14:04:14.818 UTC [gossip.discovery] periodicalSendAlive -> DEBU 0cf Sleeping 5s
2020-09-23 14:04:14.826 UTC [msp.identity] Sign -> DEBU 0d0 Sign: plaintext: 18012A470A370A1333372E3135322E31...AFF2120C089A8DD5DBA0D4DB9B16100B 
2020-09-23 14:04:14.826 UTC [msp.identity] Sign -> DEBU 0d1 Sign: digest: 5ECB3E4A5A977FFABA3180DBBF2B25F4791198C5CAA9FE64283CAB04DD298586 
2020-09-23 14:04:14.826 UTC [msp.identity] Sign -> DEBU 0d2 Sign: plaintext: 0A1B70656572322E6F7267312E6578616D706C652E636F6D3A37303531 
2020-09-23 14:04:14.826 UTC [msp.identity] Sign -> DEBU 0d3 Sign: digest: C0477A909C942B81280F52131D4D72B70D1235D7FE4BAEEBD447599D86A3BCB3 
2020-09-23 14:04:18.832 UTC [msp.identity] Sign -> DEBU 0d4 Sign: plaintext: 18012A470A370A1333372E3135322E31...AFF2120C089A8DD5DBA0D4DB9B16100C 
2020-09-23 14:04:18.832 UTC [msp.identity] Sign -> DEBU 0d5 Sign: digest: A59745063BA63D7161F554E4F9FF507246ABB1B10A9A863A3067FAC6C9181CB8 
2020-09-23 14:04:18.833 UTC [msp.identity] Sign -> DEBU 0d6 Sign: plaintext: 0A1B70656572322E6F7267312E6578616D706C652E636F6D3A37303531 
2020-09-23 14:04:18.833 UTC [msp.identity] Sign -> DEBU 0d7 Sign: digest: C0477A909C942B81280F52131D4D72B70D1235D7FE4BAEEBD447599D86A3BCB3 
2020-09-23 14:04:19.810 UTC [gossip.discovery] periodicalReconnectToDead -> DEBU 0d8 Sleeping 25s
2020-09-23 14:04:19.819 UTC [msp.identity] Sign -> DEBU 0d9 Sign: plaintext: 18012A470A370A1333372E3135322E31...AFF2120C089A8DD5DBA0D4DB9B16100D 
2020-09-23 14:04:19.819 UTC [msp.identity] Sign -> DEBU 0da Sign: digest: D2A0C08E9DF91D8AC4F79353A870F8FEA2C1C02220419923299320E5D630EDF0 
2020-09-23 14:04:19.819 UTC [msp.identity] Sign -> DEBU 0db Sign: plaintext: 0A1B70656572322E6F7267312E6578616D706C652E636F6D3A37303531 
2020-09-23 14:04:19.820 UTC [msp.identity] Sign -> DEBU 0dc Sign: digest: C0477A909C942B81280F52131D4D72B70D1235D7FE4BAEEBD447599D86A3BCB3 
2020-09-23 14:04:19.820 UTC [gossip.discovery] periodicalSendAlive -> DEBU 0dd Sleeping 5s
2020-09-23 14:04:22.824 UTC [grpc] Infof -> DEBU 0de parsed scheme: ""
2020-09-23 14:04:22.824 UTC [grpc] Infof -> DEBU 0df scheme "" not registered, fallback to default scheme
2020-09-23 14:04:22.825 UTC [grpc] Infof -> DEBU 0e0 ccResolverWrapper: sending update to cc: {[{37.152.180.255:7101  <nil> 0 <nil>}] <nil> <nil>}
2020-09-23 14:04:22.825 UTC [grpc] Infof -> DEBU 0e1 ClientConn switching balancer to "pick_first"
2020-09-23 14:04:22.825 UTC [grpc] Infof -> DEBU 0e2 Channel switches to new LB policy "pick_first"
2020-09-23 14:04:22.825 UTC [grpc] Infof -> DEBU 0e3 Subchannel Connectivity change to CONNECTING
2020-09-23 14:04:22.826 UTC [grpc] Infof -> DEBU 0e4 Subchannel picks a new address "37.152.180.255:7101" to connect
2020-09-23 14:04:22.826 UTC [grpc] UpdateSubConnState -> DEBU 0e5 pickfirstBalancer: HandleSubConnStateChange: 0xc002ac4cf0, {CONNECTING <nil>}
2020-09-23 14:04:22.827 UTC [grpc] Infof -> DEBU 0e6 Channel Connectivity change to CONNECTING
2020-09-23 14:04:22.834 UTC [msp.identity] Sign -> DEBU 0e7 Sign: plaintext: 18012A470A370A1333372E3135322E31...AFF2120C089A8DD5DBA0D4DB9B16100E 
2020-09-23 14:04:22.834 UTC [msp.identity] Sign -> DEBU 0e8 Sign: digest: 658ED5816A5829605D9EA49CE1FAA7DD734D62E5FC9E3767D0DB43322518EB35 
2020-09-23 14:04:22.835 UTC [msp.identity] Sign -> DEBU 0e9 Sign: plaintext: 0A1B70656572322E6F7267312E6578616D706C652E636F6D3A37303531 
2020-09-23 14:04:22.835 UTC [msp.identity] Sign -> DEBU 0ea Sign: digest: C0477A909C942B81280F52131D4D72B70D1235D7FE4BAEEBD447599D86A3BCB3 
2020-09-23 14:04:24.820 UTC [msp.identity] Sign -> DEBU 0eb Sign: plaintext: 18012A470A370A1333372E3135322E31...AFF2120C089A8DD5DBA0D4DB9B16100F 
2020-09-23 14:04:24.821 UTC [msp.identity] Sign -> DEBU 0ec Sign: digest: A22AEC885AFB08930D238B4DDF00DFA163F5EA7860EF9F91F959BBC54EEDD5AA 
2020-09-23 14:04:24.821 UTC [msp.identity] Sign -> DEBU 0ed Sign: plaintext: 0A1B70656572322E6F7267312E6578616D706C652E636F6D3A37303531 
2020-09-23 14:04:24.821 UTC [msp.identity] Sign -> DEBU 0ee Sign: digest: C0477A909C942B81280F52131D4D72B70D1235D7FE4BAEEBD447599D86A3BCB3 
2020-09-23 14:04:24.822 UTC [gossip.discovery] periodicalSendAlive -> DEBU 0ef Sleeping 5s
2020-09-23 14:04:25.824 UTC [grpc] Infof -> DEBU 0f0 Channel Connectivity change to SHUTDOWN
2020-09-23 14:04:25.824 UTC [grpc] Infof -> DEBU 0f1 Subchannel Connectivity change to SHUTDOWN
2020-09-23 14:04:25.824 UTC [gossip.discovery] func1 -> WARN 0f2 Could not connect to Endpoint: 37.152.180.255:7101, InternalEndpoint: 37.152.180.255:7101, PKI-ID: <nil>, Metadata:  : context deadline exceeded
litong01 commented 3 years ago

@alirezaght It is very strange; I tested with your spec and things worked as they should. I wonder if you are using an old version of minifabric. Can you please try to update it? Do this:

docker pull hfrd/minifab:latest

Then go to the directory where your spec.yaml file is and run the following commands (giving -e a port number instead of true makes the exposed endpoints start from that port):

minifab cleanup
minifab up -e 7100 -o org1.example.com -c test
litong01 commented 3 years ago

@alirezaght I've tested multiple times with your spec; everything worked without any issue.

alirezaght commented 3 years ago

I did pull the latest image but no luck:

root@node-2:~# docker pull hfrd/minifab:latest
latest: Pulling from hfrd/minifab
Digest: sha256:bc3bcb1d24e1969f60f6554b2b04e6792926042c4a30db3132985489e78d48f4
Status: Image is up to date for hfrd/minifab:latest
docker.io/hfrd/minifab:latest

This is the version of my Docker:

root@node-2:/home/ubuntu/hplatform# docker version
Client: Docker Engine - Community
 Version:           19.03.13
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        4484c46d9d
 Built:             Wed Sep 16 17:02:59 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.13
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       4484c46d9d
  Built:            Wed Sep 16 17:01:30 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.3.7
  GitCommit:        8fba4e9a7d01810a393d5d25a3621dc101981175
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

This is the version of my Ubuntu:

root@node-2:~# lsb_release  -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.5 LTS
Release:    16.04
Codename:   xenial
root@node-2:~# uname -a
Linux node-2 4.4.0-135-generic #161-Ubuntu SMP Mon Aug 27 10:45:01 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Here is the content of the minifab bash script:

#!/bin/bash
# Create the vars directory for generated artifacts if it does not exist yet
[ ! -d "$(pwd)/vars" ] && mkdir vars
# Collect the host's IPv4 addresses (skipping *.1 addresses such as the
# loopback and the docker bridge gateway) into a comma-separated list
ADDRS=$(ifconfig|grep 'inet '|grep -v '\.1 '|tr -s ' '|awk '{$1=$1};1'|cut -d ' ' -f 2|cut -d '/' -f 1|paste -sd "," -|sed s/addr://g)
if [ -f "$(pwd)/spec.yaml" ]; then
  # A spec.yaml exists here: mount it into the minifab container along with
  # the docker socket and the vars directory
  docker run --rm --name minifab -v /var/run/docker.sock:/var/run/docker.sock -v $(pwd)/vars:/home/vars \
  -v $(pwd)/spec.yaml:/home/spec.yaml -e "ADDRS=$ADDRS" hfrd/minifab:latest /home/main.sh "$@"
else
  # No spec.yaml found: run minifab with its built-in default network spec
  docker run --rm --name minifab -v /var/run/docker.sock:/var/run/docker.sock -v $(pwd)/vars:/home/vars \
  -e "ADDRS=$ADDRS" hfrd/minifab:latest /home/main.sh "$@"
fi
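
As far as I can tell, the ADDRS pipeline just flattens the host's IPv4 addresses into a comma-separated list (and strips the addr: prefix printed by the older ifconfig format); on my machine it comes out as something like:

ADDRS=37.152.180.255

which is what shows up as HOST_ADDRESSES in the execution context.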
alirezaght commented 3 years ago

I even tried to delete all of the docker images with this command:

docker rmi --force $(docker images -a -q)

but the problem still happens. I don't know what I'm missing :(

litong01 commented 3 years ago

@alirezaght Not really sure what the issue is. I started up a 16.04 VirtualBox VM and everything ran perfectly using your spec; see the output below. I wonder if your machine is running on Windows, or in some kind of nested virtual machine? The symptom is that your node cannot connect, which looks like a network problem.
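
Since the warning in your peer log shows the connection to 37.152.180.255:7101 timing out, one quick sanity check (nothing minifab-specific, just a generic connectivity probe) is to try that same endpoint from a throwaway container:

docker run --rm alpine sh -c 'nc -z -w 5 37.152.180.255 7101 && echo reachable || echo blocked'

If it prints reachable, your containers can reach the host; if it prints blocked, something between the containers and the host (routing, NAT, or a firewall) is dropping the traffic.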

ubuntu@u1604:~/mywork$ minifab up -e true -o org1.example.com -c test
Minifab Execution Context:
    FABRIC_RELEASE=2.2.0
    CHANNEL_NAME=test
    PEER_DATABASE_TYPE=golevel
    CHAINCODE_LANGUAGE=go
    CHAINCODE_NAME=simple
    CHAINCODE_VERSION=1.0
    CHAINCODE_INIT_REQUIRED=true
    CHAINCODE_PARAMETERS="init","a","200","b","300"
    CHAINCODE_PRIVATE=false
    CHAINCODE_POLICY=
    TRANSIENT_DATA=
    BLOCK_NUMBER=newest
    EXPOSE_ENDPOINTS=true
    CURRENT_ORG=org1.example.com
    HOST_ADDRESSES=192.168.56.8,10.0.3.15
    WORKING_DIRECTORY: /home/ubuntu/mywork
...
# Preparing for the following operations: *********************
  verify options, download images, generate certificates, start network, network status, channel create, channel join, anchor update, profile generation, cc install, cc approve, cc commit, cc initialize, discover
.......
# Running operation: ******************************************
  verify options
.
# Running operation: ******************************************
  download images
............
# Running operation: ******************************************
  generate certificates
...................................................................................
# Running operation: ******************************************
  start network
.......................
# Running operation: ******************************************
  network status
......
# Docker node status ******************************************
  d679af657c_cli : Up 6 seconds
  ca.org1.example.com : Up 6 seconds
  orderer.example.com : Up 7 seconds
  peer2.org1.example.com : Up 8 seconds
  peer1.org1.example.com : Up 8 seconds

# Fabric network peer and orderer node health status **********
  peer1.org1.example.com "OK"
  peer2.org1.example.com "OK"
   orderer.example.com "OK"

# Running operation: ******************************************
  channel create
......
# Running operation: ******************************************
  channel join
...............
# Running operation: ******************************************
  anchor update
.......
# Running operation: ******************************************
  profile generation
.........................
# Running operation: ******************************************
  cc install
.................
# Running operation: ******************************************
  cc approve
......
# Running operation: ******************************************
  cc commit
.............
# Running operation: ******************************************
  cc initialize
......
# Running operation: ******************************************
  discover
......................

# STATS *******************************************************
minifab: ok=294 failed=0

real    1m52.300s
user    0m45.288s
sys 0m9.400s
alirezaght commented 3 years ago

@litong01 Oh my god, I can't believe I didn't think of the firewall. I was using ufw, and some of its rules were preventing the docker containers from connecting to the host. I appreciate your help so much.
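
In case anyone else hits this: one way to let the containers back in is to open ufw for the Docker bridge subnet, or just for the exposed Fabric ports. For example (172.17.0.0/16 is Docker's default bridge subnet; adjust the subnet and the rule to your own setup):

sudo ufw allow from 172.17.0.0/16
sudo ufw reload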

litong01 commented 3 years ago

@alirezaght Not a problem. Please close this issue. Thanks.