feedhenry / fh-openshift-templates

FeedHenry MBaaS OpenShift Templates
http://feedhenry.org
Apache License 2.0

deployed failed #17

Closed vanloswang closed 6 years ago

vanloswang commented 7 years ago

I deployed using fh-mbaas-template-1node.json on OpenShift Origin 1.4 alpha.1.

Steps:

  1. oc create -n openshift -f fh-mbaas-template-1node.json
  2. oc new-project live-mbaas
  3. oc edit ns live-mbaas, and set the node-selector
  4. oc create serviceaccount nagios
  5. oc policy add-role-to-user admin -z nagios
  6. log in to the web console and create an application from the fh-mbaas template in the live-mbaas project, entering only the MBAAS_ROUTER_DNS parameter and letting the others be generated (a CLI equivalent is sketched below)
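
For reference, the same flow can be run entirely from the CLI (a sketch: the node-selector label is a placeholder, and the oc new-app call is assumed here as the CLI equivalent of the web-console step):

oc create -n openshift -f fh-mbaas-template-1node.json
oc new-project live-mbaas
# set the node-selector via annotation instead of oc edit; "type=compute" is a placeholder label
oc annotate ns live-mbaas openshift.io/node-selector="type=compute" --overwrite
oc create serviceaccount nagios
oc policy add-role-to-user admin -z nagios
# supply only MBAAS_ROUTER_DNS; the other parameters are generated
oc new-app --template=fh-mbaas -p MBAAS_ROUTER_DNS=<your-router-dns>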

The logs of the mongodb pod are as follows:

=> Waiting for container IP address ... 10.128.0.33:27017
=>  Waiting for MongoDB daemon up
note: noprealloc may hurt performance in many applications
2016-12-18T15:03:10.521+0000 I CONTROL  [initandlisten] MongoDB starting : pid=22 port=27017 dbpath=/var/lib/mongodb/data 64-bit host=mongodb-1-1-u0eif
2016-12-18T15:03:10.521+0000 I CONTROL  [initandlisten] db version v3.2.6
2016-12-18T15:03:10.521+0000 I CONTROL  [initandlisten] git version: 05552b562c7a0b3143a729aaa0838e558dc49b25
2016-12-18T15:03:10.521+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
2016-12-18T15:03:10.521+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2016-12-18T15:03:10.521+0000 I CONTROL  [initandlisten] modules: none
2016-12-18T15:03:10.521+0000 I CONTROL  [initandlisten] build environment:
2016-12-18T15:03:10.521+0000 I CONTROL  [initandlisten]     distarch: x86_64
2016-12-18T15:03:10.522+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2016-12-18T15:03:10.522+0000 I CONTROL  [initandlisten] options: { config: "/etc/mongod.conf", net: { http: { enabled: false }, port: 27017 }, processManagement: { pidFilePath: "/var/lib/mongodb/mongodb.pid" }, replication: { oplogSizeMB: 64 }, storage: { dbPath: "/var/lib/mongodb/data", mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { quiet: true } }
2016-12-18T15:03:10.604+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=1G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2016-12-18T15:03:10.723+0000 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
2016-12-18T15:03:10.723+0000 I CONTROL  [initandlisten] 
2016-12-18T15:03:10.723+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-12-18T15:03:10.723+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-12-18T15:03:10.723+0000 I CONTROL  [initandlisten] 
2016-12-18T15:03:10.723+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-12-18T15:03:10.723+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-12-18T15:03:10.723+0000 I CONTROL  [initandlisten] 
2016-12-18T15:03:10.725+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/var/lib/mongodb/data/diagnostic.data'
2016-12-18T15:03:10.725+0000 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2016-12-18T15:03:10.794+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
=>  Waiting for MongoDB daemon up
=> MongoDB daemon is up
MongoDB shell version: 3.2.6
connecting to: localhost:27017/admin
Successfully added user: {
    "user" : "admin",
    "roles" : [
        "dbAdminAnyDatabase",
        "userAdminAnyDatabase",
        "readWriteAnyDatabase",
        "clusterAdmin"
    ]
}
=> setting up RHMAP databases
=> setting up fh-mbaas db .. 
MongoDB shell version: 3.2.6
connecting to: localhost:27017/admin
Successfully added user: { "user" : "mbaas-user", "roles" : [ "readWrite" ] }
=> setting up fh-reporting db.. 
MongoDB shell version: 3.2.6
connecting to: localhost:27017/admin
Successfully added user: { "user" : "reporting-user", "roles" : [ "readWrite" ] }
note: noprealloc may hurt performance in many applications
killing process with pid: 22
2016-12-18T15:03:11.961+0000 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2016-12-18T15:03:11.961+0000 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
2016-12-18T15:03:11.961+0000 I CONTROL  [signalProcessingThread] now exiting
2016-12-18T15:03:11.961+0000 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
2016-12-18T15:03:11.961+0000 I NETWORK  [signalProcessingThread] closing listening socket: 5
2016-12-18T15:03:11.961+0000 I NETWORK  [signalProcessingThread] closing listening socket: 6
2016-12-18T15:03:11.961+0000 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
2016-12-18T15:03:11.961+0000 I NETWORK  [signalProcessingThread] shutdown: going to flush diaglog...
2016-12-18T15:03:11.961+0000 I NETWORK  [signalProcessingThread] shutdown: going to close sockets...
2016-12-18T15:03:11.961+0000 I STORAGE  [signalProcessingThread] WiredTigerKVEngine shutting down
2016-12-18T15:03:12.015+0000 I STORAGE  [signalProcessingThread] shutdown: removing fs lock...
2016-12-18T15:03:12.015+0000 I CONTROL  [signalProcessingThread] dbexit:  rc: 0
=>  Waiting for MongoDB daemon down
=> MongoDB daemon is down
note: noprealloc may hurt performance in many applications
2016-12-18T15:03:13.029+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/var/lib/mongodb/data 64-bit host=mongodb-1-1-u0eif
2016-12-18T15:03:13.029+0000 I CONTROL  [initandlisten] db version v3.2.6
2016-12-18T15:03:13.029+0000 I CONTROL  [initandlisten] git version: 05552b562c7a0b3143a729aaa0838e558dc49b25
2016-12-18T15:03:13.029+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
2016-12-18T15:03:13.029+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2016-12-18T15:03:13.029+0000 I CONTROL  [initandlisten] modules: none
2016-12-18T15:03:13.029+0000 I CONTROL  [initandlisten] build environment:
2016-12-18T15:03:13.029+0000 I CONTROL  [initandlisten]     distarch: x86_64
2016-12-18T15:03:13.029+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2016-12-18T15:03:13.029+0000 I CONTROL  [initandlisten] options: { config: "/etc/mongod.conf", net: { http: { enabled: false }, port: 27017 }, processManagement: { pidFilePath: "/var/lib/mongodb/mongodb.pid" }, replication: { oplogSizeMB: 64 }, security: { authorization: "enabled" }, storage: { dbPath: "/var/lib/mongodb/data", mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { quiet: true } }
2016-12-18T15:03:13.059+0000 I -        [initandlisten] Detected data files in /var/lib/mongodb/data created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2016-12-18T15:03:13.059+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=1G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2016-12-18T15:03:13.309+0000 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
2016-12-18T15:03:13.309+0000 I CONTROL  [initandlisten] 
2016-12-18T15:03:13.309+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-12-18T15:03:13.309+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-12-18T15:03:13.309+0000 I CONTROL  [initandlisten] 
2016-12-18T15:03:13.309+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-12-18T15:03:13.309+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-12-18T15:03:13.309+0000 I CONTROL  [initandlisten] 
2016-12-18T15:03:13.310+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/var/lib/mongodb/data/diagnostic.data'
2016-12-18T15:03:13.310+0000 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2016-12-18T15:03:13.311+0000 I NETWORK  [initandlisten] waiting for connections on port 27017

The logs of the fh-mbaas pod are as follows:

starting single master process
{"name":"mbaas","hostname":"fh-mbaas-1-e58qc","pid":1,"level":50,"msg":"AMQP not enabled. Please check conf.json file.","time":"2016-12-18T15:19:12.757Z","v":0}
{"name":"mbaas","hostname":"fh-mbaas-1-e58qc","pid":1,"level":40,"msg":"Skipping amqp setup for deploy status listerner","time":"2016-12-18T15:19:12.758Z","v":0}
{"name":"mbaas","hostname":"fh-mbaas-1-e58qc","pid":1,"level":40,"msg":"Skipping amqp setup for migration status listerner","time":"2016-12-18T15:19:12.758Z","v":0}
{"name":"mbaas","hostname":"fh-mbaas-1-e58qc","pid":1,"level":50,"msg":"AMQP not enabled. Please check conf.json file.","time":"2016-12-18T15:19:12.762Z","v":0}
{"name":"mbaas","hostname":"fh-mbaas-1-e58qc","pid":1,"level":40,"msg":"Skipping amqp setup for deploy status listerner","time":"2016-12-18T15:19:12.762Z","v":0}
{"name":"mbaas","hostname":"fh-mbaas-1-e58qc","pid":1,"level":40,"msg":"Skipping amqp setup for migration status listerner","time":"2016-12-18T15:19:12.762Z","v":0}
{"name":"mbaas","hostname":"fh-mbaas-1-e58qc","pid":1,"level":30,"msg":"Logger created","time":"2016-12-18T15:19:12.766Z","v":0}
{"name":"mbaas","hostname":"fh-mbaas-1-e58qc","pid":1,"level":30,"msg":"Logger created","time":"2016-12-18T15:19:12.783Z","v":0}
{"name":"mbaas","hostname":"fh-mbaas-1-e58qc","pid":1,"level":50,"msg":"Mongo connection lost. Socket closed","time":"2016-12-18T15:19:52.538Z","v":0}
{"name":"mbaas","hostname":"fh-mbaas-1-e58qc","pid":1,"level":50,"msg":"FATAL: UncaughtException, please report: [Error: Mongo close even emitted]","time":"2016-12-18T15:19:52.539Z","v":0}
Sun Dec 18 2016 15:19:52 GMT+0000 (UTC) FATAL: UncaughtException, please report: [Error: Mongo close even emitted]
Trace: Error: Mongo close even emitted
    at NativeConnection.<anonymous> (/opt/app-root/src/fh-mbaas.js:314:13)
    at emitNone (events.js:67:13)
    at NativeConnection.emit (events.js:166:7)
    at NativeConnection.Object.defineProperty.set (/opt/app-root/src/node_modules/mongoose/lib/connection.js:117:12)
    at /opt/app-root/src/node_modules/mongoose/lib/connection.js:472:24
    at /opt/app-root/src/node_modules/mongoose/lib/drivers/node-mongodb-native/connection.js:69:21
    at /opt/app-root/src/node_modules/mongodb/lib/db.js:231:14
    at null.<anonymous> (/opt/app-root/src/node_modules/mongodb/lib/server.js:240:9)
    at g (events.js:260:16)
    at emitTwo (events.js:87:13)
    at emit (events.js:172:7)
    at null.<anonymous> (/opt/app-root/src/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/server.js:218:12)
    at g (events.js:260:16)
    at emitTwo (events.js:87:13)
    at emit (events.js:172:7)
    at null.<anonymous> (/opt/app-root/src/node_modules/mongodb/node_modules/mongodb-core/lib/connection/pool.js:119:12)
    at g (events.js:260:16)
    at emitTwo (events.js:87:13)
    at emit (events.js:172:7)
    at Socket.<anonymous> (/opt/app-root/src/node_modules/mongodb/node_modules/mongodb-core/lib/connection/connection.js:151:49)
    at Socket.g (events.js:260:16)
    at emitOne (events.js:77:13)
    at Socket.emit (events.js:169:7)
    at connectErrorNT (net.js:996:8)
    at /opt/app-root/src/node_modules/fh-forms/node_modules/fh-logger/node_modules/continuation-local-storage/node_modules/async-listener/glue.js:188:31
    at nextTickCallbackWith2Args (node.js:442:9)
    at process._tickCallback (node.js:356:17)
    at process.<anonymous> (/opt/app-root/src/fh-mbaas.js:329:15)
    at emitOne (events.js:77:13)
    at process.emit (events.js:169:7)
    at process._fatalException (node.js:224:26)
    at process._asyncFatalException [as _fatalException] (/opt/app-root/src/node_modules/fh-forms/node_modules/fh-logger/node_modules/continuation-local-storage/node_modules/async-listener/glue.js:211:34)
{"name":"mbaas","hostname":"fh-mbaas-1-e58qc","pid":1,"level":50,"msg":"'Error: Mongo close even emitted\\n    at NativeConnection.<anonymous> (/opt/app-root/src/fh-mbaas.js:314:13)\\n    at emitNone (events.js:67:13)\\n    at NativeConnection.emit (events.js:166:7)\\n    at NativeConnection.Object.defineProperty.set (/opt/app-root/src/node_modules/mongoose/lib/connection.js:117:12)\\n    at /opt/app-root/src/node_modules/mongoose/lib/connection.js:472:24\\n    at /opt/app-root/src/node_modules/mongoose/lib/drivers/node-mongodb-native/connection.js:69:21\\n    at /opt/app-root/src/node_modules/mongodb/lib/db.js:231:14\\n    at null.<anonymous> (/opt/app-root/src/node_modules/mongodb/lib/server.js:240:9)\\n    at g (events.js:260:16)\\n    at emitTwo (events.js:87:13)\\n    at emit (events.js:172:7)\\n    at null.<anonymous> (/opt/app-root/src/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/server.js:218:12)\\n    at g (events.js:260:16)\\n    at emitTwo (events.js:87:13)\\n    at emit (events.js:172:7)\\n    at null.<anonymous> (/opt/app-root/src/node_modules/mongodb/node_modules/mongodb-core/lib/connection/pool.js:119:12)\\n    at g (events.js:260:16)\\n    at emitTwo (events.js:87:13)\\n    at emit (events.js:172:7)\\n    at Socket.<anonymous> (/opt/app-root/src/node_modules/mongodb/node_modules/mongodb-core/lib/connection/connection.js:151:49)\\n    at Socket.g (events.js:260:16)\\n    at emitOne (events.js:77:13)\\n    at Socket.emit (events.js:169:7)\\n    at connectErrorNT (net.js:996:8)\\n    at /opt/app-root/src/node_modules/fh-forms/node_modules/fh-logger/node_modules/continuation-local-storage/node_modules/async-listener/glue.js:188:31\\n    at nextTickCallbackWith2Args (node.js:442:9)\\n    at process._tickCallback (node.js:356:17)'","time":"2016-12-18T15:19:52.541Z","v":0}

The logs of the metrics pod are as follows:

Starting single master process
Initialise database tearUp
Failed to connect to db: MongoError: getaddrinfo ENOTFOUND mongodb-1 mongodb-1:27017 . Attempt: 1
Failed to connect to db: MongoError: getaddrinfo ENOTFOUND mongodb-1 mongodb-1:27017 . Attempt: 2
Failed to connect to db: MongoError: getaddrinfo ENOTFOUND mongodb-1 mongodb-1:27017 . Attempt: 3
Failed to connect to db: MongoError: getaddrinfo ENOTFOUND mongodb-1 mongodb-1:27017 . Attempt: 4
Failed to connect to db: MongoError: getaddrinfo ENOTFOUND mongodb-1 mongodb-1:27017 . Attempt: 5
Failed to connect to db: MongoError: getaddrinfo ENOTFOUND mongodb-1 mongodb-1:27017 . Attempt: 6
Failed to connect to db: MongoError: getaddrinfo ENOTFOUND mongodb-1 mongodb-1:27017 . Attempt: 7
Failed to connect to db: MongoError: getaddrinfo ENOTFOUND mongodb-1 mongodb-1:27017 . Attempt: 8
Failed to connect to db: MongoError: getaddrinfo ENOTFOUND mongodb-1 mongodb-1:27017 . Attempt: 9
Failed to connect to db: MongoError: getaddrinfo ENOTFOUND mongodb-1 mongodb-1:27017 . Attempt: 10
Error connecting to database: "Can not connect to MongoDB after 10 attempts."
Stopping fh-metrics...
fh-metrics stopped
Cluster worker #undefinedis exiting, attempting clean server shutdown
Shutting down MetricsServer
Stopping MetricsServer...
Stopping Messaging
Tear down complete. Database closed gracefully. Killing worker #undefined
{"name":"metrics","hostname":"fh-metrics-1-v2a63","pid":1,"level":30,"msg":"Killing master process","time":"2016-12-18T15:05:48.187Z","v":0}
fh-metrics stopped
Cluster worker #undefinedis exiting, attempting clean server shutdown
Shutting down MetricsServer
Stopping MetricsServer...
Stopping Messaging
Tear down complete. Database closed gracefully. Killing worker #undefined
{"name":"metrics","hostname":"fh-metrics-1-v2a63","pid":1,"level":30,"msg":"Killing master process","time":"2016-12-18T15:05:48.189Z","v":0}

The logs of the messaging pod are as follows:

Starting single master process
{"name":"messaging","hostname":"fh-messaging-1-mh12q","pid":1,"level":30,"msg":"tearUp db","time":"2016-12-18T15:03:09.896Z","v":0}
{"name":"messaging","hostname":"fh-messaging-1-mh12q","pid":1,"level":30,"message-whitelist":{"userlogin":true,"useractivate":true,"appbuild":true,"appcreate":true,"apicalled":true,"fhact":true,"fhweb":true,"appinit":true},"msg":"","time":"2016-12-18T15:03:09.909Z","v":0}
Failed to connect to db: MongoError: getaddrinfo ENOTFOUND mongodb-1 mongodb-1:27017 . Attempt: 1
Failed to connect to db: MongoError: getaddrinfo ENOTFOUND mongodb-1 mongodb-1:27017 . Attempt: 2
Failed to connect to db: MongoError: getaddrinfo ENOTFOUND mongodb-1 mongodb-1:27017 . Attempt: 3
Failed to connect to db: MongoError: getaddrinfo ENOTFOUND mongodb-1 mongodb-1:27017 . Attempt: 4
Failed to connect to db: MongoError: getaddrinfo ENOTFOUND mongodb-1 mongodb-1:27017 . Attempt: 5
Failed to connect to db: MongoError: getaddrinfo ENOTFOUND mongodb-1 mongodb-1:27017 . Attempt: 6
Failed to connect to db: MongoError: getaddrinfo ENOTFOUND mongodb-1 mongodb-1:27017 . Attempt: 7
Failed to connect to db: MongoError: getaddrinfo ENOTFOUND mongodb-1 mongodb-1:27017 . Attempt: 8
Failed to connect to db: MongoError: getaddrinfo ENOTFOUND mongodb-1 mongodb-1:27017 . Attempt: 9
Failed to connect to db: MongoError: getaddrinfo ENOTFOUND mongodb-1 mongodb-1:27017 . Attempt: 10
{"name":"messaging","hostname":"fh-messaging-1-mh12q","pid":1,"level":50,"msg":"Error connecting to database: \"Can not connect to MongoDB after 10 attempts.\"","time":"2016-12-18T15:05:52.145Z","v":0}
{"name":"messaging","hostname":"fh-messaging-1-mh12q","pid":1,"level":40,"msg":"Stopping fh-messaging...","time":"2016-12-18T15:05:52.145Z","v":0}
{"name":"messaging","hostname":"fh-messaging-1-mh12q","pid":1,"level":30,"msg":"Cluster worker #undefinedis exiting, attempting clean server shutdown","time":"2016-12-18T15:05:52.146Z","v":0}
{"name":"messaging","hostname":"fh-messaging-1-mh12q","pid":1,"level":30,"msg":"Shutting down MessageServer","time":"2016-12-18T15:05:52.146Z","v":0}
{"name":"messaging","hostname":"fh-messaging-1-mh12q","pid":1,"level":30,"msg":"Stopping MessageServer...","time":"2016-12-18T15:05:52.146Z","v":0}
{"name":"messaging","hostname":"fh-messaging-1-mh12q","pid":1,"level":50,"msg":"Caught Server shutdown error: TypeError: Cannot read property 'tearDown' of undefined","time":"2016-12-18T15:05:52.147Z","v":0}
{"name":"messaging","hostname":"fh-messaging-1-mh12q","pid":1,"level":30,"msg":"Killing master process","time":"2016-12-18T15:05:52.147Z","v":0}
{"name":"messaging","hostname":"fh-messaging-1-mh12q","pid":1,"level":30,"msg":"Cluster worker #undefinedis exiting, attempting clean server shutdown","time":"2016-12-18T15:05:52.147Z","v":0}
{"name":"messaging","hostname":"fh-messaging-1-mh12q","pid":1,"level":30,"msg":"Shutting down MessageServer","time":"2016-12-18T15:05:52.148Z","v":0}
{"name":"messaging","hostname":"fh-messaging-1-mh12q","pid":1,"level":30,"msg":"Stopping MessageServer...","time":"2016-12-18T15:05:52.148Z","v":0}
{"name":"messaging","hostname":"fh-messaging-1-mh12q","pid":1,"level":50,"msg":"Caught Server shutdown error: TypeError: Cannot read property 'tearDown' of undefined","time":"2016-12-18T15:05:52.149Z","v":0}
{"name":"messaging","hostname":"fh-messaging-1-mh12q","pid":1,"level":30,"msg":"Killing master process","time":"2016-12-18T15:05:52.149Z","v":0}

All of them fail to communicate with MongoDB, so how can I deploy an all-in-one environment?

grdryn commented 7 years ago

@vanloswang Hi, I'm sorry that I didn't see this until now. Have you managed to get it running since you had this issue? If not, I'll try to help you as best I can here. Also, feel free to join the #feedhenry IRC channel on FreeNode. I'm grdryn there. :)

Here are a couple of things that you could try when troubleshooting:

If you oc rsh into the mongodb container (or connect to the terminal in the OpenShift web console), can you get a mongo shell?

You can rsh into the container like this (your Pod name will have a different suffix):

$ oc rsh mongodb-1-1-nt026 
sh-4.2$

Once you are inside the pod, you can open a MongoDB connection as the admin user as follows:

sh-4.2$ mongo admin -u admin -p ${MONGODB_ADMIN_PASSWORD} 
MongoDB shell version: 3.2.6
connecting to: admin
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
    http://docs.mongodb.org/
Questions? Try the support group
    http://groups.google.com/group/mongodb-user
Server has startup warnings: 
2017-02-07T14:56:57.262+0000 I CONTROL  [initandlisten] 
2017-02-07T14:56:57.262+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2017-02-07T14:56:57.262+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2017-02-07T14:56:57.262+0000 I CONTROL  [initandlisten] 
2017-02-07T14:56:57.262+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2017-02-07T14:56:57.262+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2017-02-07T14:56:57.262+0000 I CONTROL  [initandlisten] 
> 

If that works, then at least the admin user was created successfully. While this connection is open, you can see if the databases got created correctly:

> show databases
admin         0.000GB
fh-mbaas      0.000GB
fh-reporting  0.000GB
local         0.000GB
> 

The fh-mbaas and fh-reporting databases should be created there (the fh-reporting database is used by both fh-messaging and fh-metrics, and fh-statsd doesn't have a database).
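
While still connected as admin, you could also confirm that the application users exist (a quick sketch; depending on how the template creates them, the users may be defined in the admin database or in each application database):

> use admin
> db.getUsers()
> use fh-mbaas
> db.getUsers()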

If those databases are there, you can exit the mongo shell and try to connect to each of them as its respective user:

sh-4.2$ mongo ${MONGODB_FHMBAAS_DATABASE} -u ${MONGODB_FHMBAAS_USER} -p ${MONGODB_FHMBAAS_PASSWORD}
MongoDB shell version: 3.2.6
connecting to: fh-mbaas
> 

grdryn commented 7 years ago

@vanloswang Looking at the output from your services again, it looks like there might be a networking issue in your OpenShift setup, as they can't seem to resolve mongodb-1:27017. Either that, or the MongoDB container crashed (with the 1node template that you're using, if MongoDB crashes you may need to start again, as it doesn't persist the DB data; consider using the fh-mbaas-template-1node-persistent.json template instead, but note that you'll need a PV for MongoDB and one for Nagios with that template).
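
To check the networking side, you could verify that the mongodb-1 Service exists and resolves from inside one of the failing pods (a sketch; mongodb-1 is assumed to be the Service name from the template, your pod name suffix will differ, and getent may not be available in every image):

$ oc get svc mongodb-1
$ oc get endpoints mongodb-1
$ oc rsh fh-messaging-1-mh12q
sh-4.2$ getent hosts mongodb-1

If the Service is missing or its endpoints list is empty, the getaddrinfo ENOTFOUND errors above would be expected. And if you do move to the persistent template, the PV for MongoDB could look roughly like this (a minimal hostPath sketch; the name, size, and path are assumptions, so match them to the PVC that the template actually creates, and add a similar PV for Nagios):

$ oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/mongodb
EOF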