hyperledger-archives / fabric

THIS IS A READ-ONLY historic repository. Current development is at https://gerrit.hyperledger.org/r/#/admin/projects/fabric . Pull requests are not accepted.

Peer Incurs Panic when Starting Peer with Incompatible Consensus Algorithm (pbft with 1 peer) #1853

Open · bmos299 opened this issue 8 years ago

bmos299 commented 8 years ago

I am using commit ee024d5. I started a single peer with the pbft consensus plugin, which caused a Go panic. This is a misconfiguration, but I did not expect a panic. Here are the log and backtrace.

19:06:29.366 [crypto] func1 -> INFO 006 Registering validator [test_vp0] with name [test_vp0]...done!
19:06:29.366 [crypto] func1 -> INFO 007 Initializing validator [test_vp0]...
19:06:29.492 [crypto] func1 -> INFO 008 Initializing validator [test_vp0]...done!
19:06:29.492 [chaincode] NewChaincodeSupport -> INFO 009 Chaincode support using peerAddress: 172.17.0.1:30001
19:06:30.043 [state] loadConfig -> INFO 00a Loading configurations...
19:06:30.043 [state] loadConfig -> INFO 00b Configurations loaded. stateImplName=[buckettree], stateImplConfigs=map[numBuckets:%!s(int=1000003) maxGroupingAtEachLevel:%!s(int=5) bucketCacheSize:%!s(int=100)], deltaHistorySize=[500]
19:06:30.043 [state] NewState -> INFO 00c Initializing state implementation [buckettree]
19:06:30.043 [buckettree] initConfig -> INFO 00d configs passed during initialization = map[string]interface {}{"numBuckets":1000003, "maxGroupingAtEachLevel":5, "bucketCacheSize":100}
19:06:30.043 [buckettree] initConfig -> INFO 00e Initializing bucket tree state implemetation with configurations &{maxGroupingAtEachLevel:5 lowestLevel:9 levelToNumBucketsMap:map[8:200001 0:1 9:1000003 6:8001 3:65 1:3 4:321 2:13 5:1601 7:40001] hashFunc:0xa95190}
19:06:30.043 [buckettree] newBucketCache -> INFO 00f Constructing bucket-cache with max bucket cache size = [100] MBs
19:06:30.043 [buckettree] loadAllBucketNodesFromDB -> INFO 010 Loaded buckets data in cache. Total buckets in DB = [0]. Total cache size:=0
19:06:30.043 [genesis] func1 -> INFO 011 Creating genesis block.
19:06:30.043 [genesis] loadConfigs -> INFO 012 Loading configurations...
19:06:30.043 [genesis] loadConfigs -> INFO 013 Configurations loaded: genesis=map[chaincodes:], mode=[], deploySystemChaincodeEnabled=[false]
19:06:30.043 [genesis] func1 -> INFO 014 No genesis block chaincodes defined.
19:06:30.043 [genesis] 1 -> INFO 015 Adding 0 system chaincodes to the genesis block.
19:06:30.044 [consensus/controller] NewConsenter -> INFO 016 Creating consensus plugin pbft
panic: need at least 4 enough replicas to tolerate 1 byzantine faults, but only 1 replicas configured

goroutine 1 [running]:
panic(0xbc9cc0, 0xc8200eab50)
    /opt/go/src/runtime/panic.go:464 +0x3e6
github.com/hyperledger/fabric/consensus/obcpbft.newPbftCore(0x0, 0xc8201452b0, 0x7fe2f74eb208, 0xc8205329c0, 0x7fe2f74eb1e0, 0xc8200ea720, 0x0)
    /opt/gopath/src/github.com/hyperledger/fabric/consensus/obcpbft/pbft-core.go:227 +0x405
github.com/hyperledger/fabric/consensus/obcpbft.newObcBatch(0x0, 0xc8201452b0, 0x7fe2f74eb078, 0xc820215c80, 0x1)
    /opt/gopath/src/github.com/hyperledger/fabric/consensus/obcpbft/obc-batch.go:96 +0x4b0
github.com/hyperledger/fabric/consensus/obcpbft.New(0x7fe2f74eb078, 0xc820215c80, 0x0, 0x0)
    /opt/gopath/src/github.com/hyperledger/fabric/consensus/obcpbft/obc-pbft.go:60 +0x156
github.com/hyperledger/fabric/consensus/obcpbft.GetPlugin(0x7fe2f74eb078, 0xc820215c80, 0x0, 0x0)
    /opt/gopath/src/github.com/hyperledger/fabric/consensus/obcpbft/obc-pbft.go:45 +0x48
github.com/hyperledger/fabric/consensus/controller.NewConsenter(0x7fe2f74eb078, 0xc820215c80, 0x0, 0x0)
    /opt/gopath/src/github.com/hyperledger/fabric/consensus/controller/controller.go:43 +0x1bc
github.com/hyperledger/fabric/consensus/helper.GetEngine.func1()
    /opt/gopath/src/github.com/hyperledger/fabric/consensus/helper/engine.go:116 +0xda
sync.(*Once).Do(0x155bd60, 0xc8202c5550)
    /opt/go/src/sync/once.go:44 +0xe4
github.com/hyperledger/fabric/consensus/helper.GetEngine(0x7fe2f74eac48, 0xc820165e00, 0x0, 0x0, 0x0, 0x0)
    /opt/gopath/src/github.com/hyperledger/fabric/consensus/helper/engine.go:129 +0x7d
github.com/hyperledger/fabric/core/peer.NewPeerWithEngine(0xc8202c59b8, 0x1009508, 0x7fe2f74eac18, 0xc82021ac80, 0xc820165e00, 0x0, 0x0)
    /opt/gopath/src/github.com/hyperledger/fabric/core/peer/peer.go:264 +0x3ed
main.serve(0x155bcd8, 0x0, 0x0, 0x0, 0x0)
    /opt/gopath/src/github.com/hyperledger/fabric/peer/main.go:480 +0x122d
main.glob.func3(0x151e300, 0x155bcd8, 0x0, 0x0, 0x0, 0x0)
    /opt/gopath/src/github.com/hyperledger/fabric/peer/main.go:96 +0x41
github.com/hyperledger/fabric/vendor/github.com/spf13/cobra.(*Command).execute(0x151e300, 0x155bcd8, 0x0, 0x0, 0x0, 0x0)
    /opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/spf13/cobra/command.go:497 +0x62c
github.com/hyperledger/fabric/vendor/github.com/spf13/cobra.(*Command).Execute(0x151df80, 0x0, 0x0)
    /opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/spf13/cobra/command.go:584 +0x46a
main.main()
    /opt/gopath/src/github.com/hyperledger/fabric/peer/main.go:316 +0x19ea
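The panic message at the end of the log and the top frame of the trace (newPbftCore in consensus/obcpbft/pbft-core.go) both reflect the standard PBFT sizing rule: tolerating f Byzantine faults requires at least N = 3f + 1 replicas, so with f = 1 a single peer cannot form a valid network. A minimal sketch of that arithmetic; the function names below are illustrative, not the actual fabric code:

package main

import "fmt"

// requiredReplicas returns the minimum PBFT network size needed to
// tolerate f Byzantine faults (the classic N >= 3f+1 bound).
func requiredReplicas(f int) int { return 3*f + 1 }

// maxFaults returns how many Byzantine faults a network of n replicas
// can actually tolerate: f = floor((n-1)/3).
func maxFaults(n int) int { return (n - 1) / 3 }

func main() {
    n, f := 1, 1 // the reported setup: one peer, PBFT asked to tolerate one fault
    fmt.Printf("need at least %d replicas to tolerate %d faults; %d replica(s) tolerate %d\n",
        requiredReplicas(f), f, n, maxFaults(n))
    // prints: need at least 4 replicas to tolerate 1 faults; 1 replica(s) tolerate 0
}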

tuand27613 commented 8 years ago

Linking to #752, where we're trying to determine error handling on a fabric-wide basis.

In this case, given that we don't have a way to recover and that the invalid values might indicate a greater problem, we stop in a way that garners the most attention.
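For context on the trade-off discussed here, a hedged sketch of what surfacing the misconfiguration as an error instead of a panic might look like; the type and field names below are assumptions for illustration, not the actual fabric API:

package consensussketch

import "fmt"

// pbftSizing holds the two values the startup check compares.
// These names are illustrative; the real configuration lives in the
// obcpbft config files.
type pbftSizing struct {
    N int // replicas configured
    F int // Byzantine faults the network should tolerate
}

// validate returns an error when the network is too small (N < 3F+1),
// leaving the decision to halt, or not, to the caller.
func (s pbftSizing) validate() error {
    if need := 3*s.F + 1; s.N < need {
        return fmt.Errorf("pbft misconfigured: need at least %d replicas to tolerate %d byzantine faults, but only %d configured",
            need, s.F, s.N)
    }
    return nil
}

Whether the peer should then exit cleanly or refuse to start consensus at all is exactly the policy question being tracked in #752.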