tech-bureau / catapult-service-bootstrap

Starter project to get developers up and running with a Catapult Service
Apache License 2.0

docker_generate-raw-addresses_1 exited with code 127 #36

Closed. gologo13 closed this issue 4 years ago

gologo13 commented 4 years ago

Hi, I'm following this tutorial now. I ran ./cmds/start-all but could not find the addresses.yaml file in the build/generated-addresses/ directory. It turns out docker_generate-raw-addresses_1 exited with an abnormal error code. Would you please take a look at this issue? Thanks!

Environment

OS: macOS Catalina 10.15.1
Docker: version 19.03.5, build 633a0ea
docker-compose: 1.24.1, build 4667896b
node.js: v10.15.0
catapult-service-bootstrap: tags/0.9.1.1-public-test

Log

$ ./cmds/start-all                                                                                                                                 (git)-[tags/0.9.1.1-public-test] 
Pulling rest-gateway ... done
Starting docker_db_1                     ... done
Starting docker_api-node-broker-0_1      ... done
Starting docker_store-addresses_1        ... done
Starting docker_setup-network_1          ... done
Starting docker_generate-raw-addresses_1 ... done
Starting docker_init-db_1                ... done
Starting docker_peer-node-1-nemgen_1     ... done
Starting docker_api-node-0-nemgen_1      ... done
Starting docker_peer-node-0-nemgen_1     ... done
Starting docker_peer-node-1_1            ... done
Starting docker_rest-gateway_1           ... done
Starting docker_api-node-0_1             ... done
Starting docker_peer-node-0_1            ... done
Attaching to docker_db_1, docker_generate-raw-addresses_1, docker_setup-network_1, docker_api-node-broker-0_1, docker_store-addresses_1, docker_peer-node-1-nemgen_1, docker_init-db_1, docker_api-node-0-nemgen_1, docker_peer-node-0-nemgen_1, docker_rest-gateway_1, docker_peer-node-1_1, docker_peer-node-0_1, docker_api-node-0_1
db_1                      | 2019-12-10T22:48:04.262+0000 I  CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
generate-raw-addresses_1  | /bin-mount/generate-raw-addresses-if-needed: line 7: /usr/catapult/bin/catapult.tools.address: No such file or directory
db_1                      | 2019-12-10T22:48:04.265+0000 I  CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/dbdata 64-bit host=3e0c61a7d7b2
db_1                      | 2019-12-10T22:48:04.265+0000 I  CONTROL  [initandlisten] db version v4.2.1
db_1                      | 2019-12-10T22:48:04.265+0000 I  CONTROL  [initandlisten] git version: edf6d45851c0b9ee15548f0f847df141764a317e
db_1                      | 2019-12-10T22:48:04.265+0000 I  CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.1.1  11 Sep 2018
db_1                      | 2019-12-10T22:48:04.265+0000 I  CONTROL  [initandlisten] allocator: tcmalloc
db_1                      | 2019-12-10T22:48:04.265+0000 I  CONTROL  [initandlisten] modules: none
db_1                      | 2019-12-10T22:48:04.265+0000 I  CONTROL  [initandlisten] build environment:
db_1                      | 2019-12-10T22:48:04.265+0000 I  CONTROL  [initandlisten]     distmod: ubuntu1804
db_1                      | 2019-12-10T22:48:04.265+0000 I  CONTROL  [initandlisten]     distarch: x86_64
db_1                      | 2019-12-10T22:48:04.265+0000 I  CONTROL  [initandlisten]     target_arch: x86_64
db_1                      | 2019-12-10T22:48:04.265+0000 I  CONTROL  [initandlisten] options: { net: { bindIp: "db" }, storage: { dbPath: "/dbdata" } }
db_1                      | 2019-12-10T22:48:04.274+0000 W  STORAGE  [initandlisten] Detected unclean shutdown - /dbdata/mongod.lock is not empty.
db_1                      | 2019-12-10T22:48:04.283+0000 I  STORAGE  [initandlisten] Detected data files in /dbdata created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
db_1                      | 2019-12-10T22:48:04.290+0000 W  STORAGE  [initandlisten] Recovering data from the last clean checkpoint.
db_1                      | 2019-12-10T22:48:04.291+0000 I  STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=989M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
store-addresses_1         | /ruby/lib/catapult/addresses.rb:59:in `break_into_sections': Not enough addresses (RuntimeError)
store-addresses_1         |     from /ruby/lib/catapult/addresses.rb:29:in `parse'
docker_generate-raw-addresses_1 exited with code 127
store-addresses_1         |     from /ruby/bin/store-addresses-if-needed.rb:20:in `<main>'
docker_store-addresses_1 exited with code 1
docker_init-db_1 exited with code 1
db_1                      | 2019-12-10T22:48:11.406+0000 I  STORAGE  [initandlisten] WiredTiger message [1576018091:406060][1:0x7fc986c82b00], txn-recover: Recovering log 1 through 2
db_1                      | 2019-12-10T22:48:11.651+0000 I  STORAGE  [initandlisten] WiredTiger message [1576018091:651562][1:0x7fc986c82b00], txn-recover: Recovering log 2 through 2
db_1                      | 2019-12-10T22:48:11.881+0000 I  STORAGE  [initandlisten] WiredTiger message [1576018091:881919][1:0x7fc986c82b00], txn-recover: Main recovery loop: starting at 1/20864 to 2/256
db_1                      | 2019-12-10T22:48:11.900+0000 I  STORAGE  [initandlisten] WiredTiger message [1576018091:900488][1:0x7fc986c82b00], txn-recover: Recovering log 1 through 2
db_1                      | 2019-12-10T22:48:12.206+0000 I  STORAGE  [initandlisten] WiredTiger message [1576018092:206919][1:0x7fc986c82b00], txn-recover: Recovering log 2 through 2
db_1                      | 2019-12-10T22:48:12.248+0000 I  STORAGE  [initandlisten] WiredTiger message [1576018092:248688][1:0x7fc986c82b00], txn-recover: Set global recovery timestamp: (0,0)
db_1                      | 2019-12-10T22:48:12.265+0000 I  RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
db_1                      | 2019-12-10T22:48:12.278+0000 I  STORAGE  [initandlisten] Timestamp monitor starting
db_1                      | 2019-12-10T22:48:12.281+0000 I  CONTROL  [initandlisten] 
db_1                      | 2019-12-10T22:48:12.281+0000 I  CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
db_1                      | 2019-12-10T22:48:12.282+0000 I  CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
db_1                      | 2019-12-10T22:48:12.282+0000 I  CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
db_1                      | 2019-12-10T22:48:12.282+0000 I  CONTROL  [initandlisten] 
db_1                      | 2019-12-10T22:48:12.289+0000 I  SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>
db_1                      | 2019-12-10T22:48:12.297+0000 I  STORAGE  [initandlisten] Flow Control is enabled on this deployment.
db_1                      | 2019-12-10T22:48:12.297+0000 I  SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>
db_1                      | 2019-12-10T22:48:12.298+0000 I  SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>
db_1                      | 2019-12-10T22:48:12.306+0000 I  SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>
db_1                      | 2019-12-10T22:48:12.307+0000 I  FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/dbdata/diagnostic.data'
db_1                      | 2019-12-10T22:48:12.311+0000 I  SHARDING [LogicalSessionCacheRefresh] Marking collection config.system.sessions as collection version: <unsharded>
db_1                      | 2019-12-10T22:48:12.312+0000 I  SHARDING [LogicalSessionCacheReap] Marking collection config.transactions as collection version: <unsharded>
db_1                      | 2019-12-10T22:48:12.312+0000 I  NETWORK  [initandlisten] Listening on /tmp/mongodb-27017.sock
db_1                      | 2019-12-10T22:48:12.312+0000 I  NETWORK  [initandlisten] Listening on 172.27.0.3
db_1                      | 2019-12-10T22:48:12.313+0000 I  NETWORK  [initandlisten] waiting for connections on port 27017
db_1                      | 2019-12-10T22:48:13.011+0000 I  FTDC     [ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost. OK
db_1                      | 2019-12-10T22:48:13.052+0000 I  SHARDING [ftdc] Marking collection local.oplog.rs as collection version: <unsharded>
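For context on the key line above from generate-raw-addresses_1: exit code 127 is the shell's "command not found" status, which is consistent with /usr/catapult/bin/catapult.tools.address being absent from a stale image. A minimal sketch reproducing that status (the path is taken from the log; any nonexistent path behaves the same):

```shell
# Running a path that does not exist makes the shell exit with 127,
# the POSIX "command not found" status -- matching the log above.
sh -c '/usr/catapult/bin/catapult.tools.address' 2>/dev/null
echo "exit code: $?"   # prints "exit code: 127" when the binary is missing
```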
gologo13 commented 4 years ago
NOTE: after releases with docker image updates, or when switching between versions, it is typical to need to build new images when starting the docker services. This can be done by passing the -b flag to any of the commands, which will pass it on to docker-compose. Should you run into weird behavior, it sometimes helps (or is required) to clear out old images; this can be done with the docker system prune -a command, which will purge all container references and force a download and build on the next run.

I didn't notice this NOTE. Yes, passing the -b option resolved this issue.
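For anyone hitting the same failure, the recovery steps from the NOTE amount to the commands below (run from the repository root; per the NOTE, -b is forwarded to docker-compose to rebuild images):

```shell
# Rebuild the service images before starting, so the containers pick up
# binaries matching the checked-out version:
./cmds/start-all -b

# If stale images still cause odd behavior, purge cached images and
# containers, then start again; images are re-downloaded/rebuilt on next run:
docker system prune -a
./cmds/start-all -b
```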