Closed: deepu105 closed this issue 8 years ago.
Strange, I don't have this error. It seems that logstash couldn't find the file logstash.conf located in log-monitoring/log-config.
Yes, in the config I see the below. Shouldn't this be pointing to log-monitoring/log-config/logstash.conf instead of /config-dir/logstash.conf?
@wmarques @cbornet @gmarziou @pascalgrimaud @PierreBesson does anyone have a quick hack or suggestion for this? I really wanted to demo the Kibana dashboard tomorrow.
```yaml
elk-logstash:
  image: logstash:2.2.2
  volumes:
    - ./log-monitoring/log-config/:/config-dir
  command: logstash -f /config-dir/logstash.conf
  ports:
    - "5000:5000/udp"
```
I'll try to change the paths for now and see.
It gives the same error even if I change the path:
```
Error: No config files found: /log-monitoring/log-config/logstash.conf
Can you make sure this path is a logstash config file?
You may be interested in the '--configtest' flag which you can
use to validate logstash's configuration before you choose
to restart a running system.
```
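As the message itself suggests, the config can be sanity-checked with the --configtest flag before restarting; a hedged sketch, assuming the elk_logstash_1 container name that appears later in the thread:

```sh
# Validate the logstash config inside the running container
# (find your actual container name with `docker ps`)
docker exec elk_logstash_1 logstash --configtest -f /config-dir/logstash.conf
```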
The Kibana dashboard also seems to fail without this, which is understandable.
Hi @deepu105,
/config-dir/logstash.conf is the path inside the container, so the path is correct. The problem is that it seems it can't get the file from the mounted volume.
@deepu105, a quick hack would be this:
```sh
docker ps    # to get the container name
docker cp log-monitoring/log-config/logstash.conf container-name:/config-dir
docker-compose restart elk-logstash
```
Be careful about mounting volumes on Windows. If I remember well, we can't mount everything on Windows: the folder must be under C:\Users\xxxx\yyyy...
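One way to confirm whether the volume actually mounted is to list the directory from inside the container; a hedged sketch, again assuming the elk_logstash_1 container name:

```sh
# If the mount worked, logstash.conf should show up here
docker exec elk_logstash_1 ls -l /config-dir
```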
This one mounts, I guess; only the DB volumes don't. Anyway, I was able to copy the config as @PierreBesson said and restarted the logstash container. It seems to be up, it shows the below log, and my Kibana dashboard looks like below.
```
{:timestamp=>"2016-03-15T10:06:05.201000+0000", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}
{:timestamp=>"2016-03-15T10:06:05.223000+0000", :message=>"UDP listener died", :exception=>#<IOError: closed stream>, :backtrace=>["org/jruby/RubyIO.java:3682:in `select'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-2.0.3/lib/logstash/inputs/udp.rb:77:in `udp_listener'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-2.0.3/lib/logstash/inputs/udp.rb:50:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.2-java/lib/logstash/pipeline.rb:331:in `inputworker'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.2-java/lib/logstash/pipeline.rb:325:in `start_input'"], :level=>:warn}
```
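Side note: a crude way to check that the UDP input is reachable at all is to fire a test event at it from the host; a hedged sketch using bash's /dev/udp pseudo-device (192.168.99.100 is the docker-machine IP that appears in the Kibana logs below):

```sh
# Requires bash; sends one JSON log event to the logstash UDP input
echo '{"message":"udp smoke test","level":"INFO"}' > /dev/udp/192.168.99.100/5000
```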
The elasticsearch container shows the below trace:
```
[2016-03-15 10:08:14,878][INFO ][rest.suppressed ] /logstash-*/_mapping/field/* Params: {ignore_unavailable=false, allow_no_indices=false, index=logstash-*, include_defaults=true, fields=*, _=1458036494690}
[logstash-*] IndexNotFoundException[no such index]
at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:659)
at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:133)
at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:77)
at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsAction.doExecute(TransportGetFieldMappingsAction.java:57)
at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsAction.doExecute(TransportGetFieldMappingsAction.java:40)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:70)
at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:58)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:351)
at org.elasticsearch.client.FilterClient.doExecute(FilterClient.java:52)
at org.elasticsearch.rest.BaseRestHandler$HeadersAndContextCopyClient.doExecute(BaseRestHandler.java:83)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:351)
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1187)
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.getFieldMappings(AbstractClient.java:1402)
at org.elasticsearch.rest.action.admin.indices.mapping.get.RestGetFieldMappingAction.handleRequest(RestGetFieldMappingAction.java:66)
at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:54)
at org.elasticsearch.rest.RestController.executeHandler(RestController.java:207)
at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:166)
at org.elasticsearch.http.HttpServer.internalDispatchRequest(HttpServer.java:128)
at org.elasticsearch.http.HttpServer$Dispatcher.dispatchRequest(HttpServer.java:86)
at org.elasticsearch.http.netty.NettyHttpServerTransport.dispatchRequest(NettyHttpServerTransport.java:363)
at org.elasticsearch.http.netty.HttpRequestHandler.messageReceived(HttpRequestHandler.java:63)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.messageReceived(HttpPipeliningHandler.java:60)
at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:75)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
```
How does the dashboard look for you guys? Can someone give me some screengrabs so that I can show those if this doesn't work?
@deepu105 Are you using the jhipster-console project? I see your screen the first time I go to the "Discover" tab, but after going to "Dashboard" it works fine. Also, do you have data sent to ES by at least one app?
Yes, I'm using the jhipster-console image created by this generator. I tried all the links; it's still the same. I have 2 apps and a gateway sending data.
Here are some logs from the console container:
```
Waiting for Elasticsearch to startup
{
  "name" : "Libra",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.2.0",
    "build_hash" : "8ff36d139e16f8720f2947ef62c8167a888992fe",
    "build_timestamp" : "2016-01-27T13:32:39Z",
    "build_snapshot" : false,
    "lucene_version" : "5.4.1"
  },
  "tagline" : "You Know, for Search"
}
Loading dashboards
Loading dashboards to http://elk-elasticsearch:9200 in .kibana
Loading search Metrics:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
{"_index":".kibana","_type":"search","_id":"Metrics","_version":2,"_shards":{"total":2,"successful":1,"failed":0},"created":false}
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 1167 0 0 100 1167 0 1095 0:00:01 0:00:01 --:--:-- 1095 100 1167 0 0 100 1167 0 564 0:00:02 0:00:02 --:--:-- 564 100 1167 0 0 100 1167 0 380 0:00:03 0:00:03 --:--:-- 380 100 1167 0 0 100 1167 0 286 0:00:04 0:00:04 --:--:-- 286 100 1167 0 0 100 1167 0 230 0:00:05 0:00:05 --:--:-- 230 100 1297 100 130 100 1167 24 222 0:00:05 0:00:05 --:--:-- 0
Loading visualization ehcache-hits:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
{"_index":".kibana","_type":"visualization","_id":"ehcache-hits","_version":2,"_shards":{"total":2,"successful":1,"failed":0},"created":false}
Loading visualization ehcache-misses:
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 1410 100 142 100 1268 749 6695 --:--:-- --:--:-- --:--:-- 6708
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
{"_index":".kibana","_type":"visualization","_id":"ehcache-misses","_version":2,"_shards":{"total":2,"successful":1,"failed":0},"created":false}
Loading visualization logs-levels:
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 1392 100 144 100 1248 1010 8759 --:--:-- --:--:-- --:--:-- 8788
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
{"_index":".kibana","_type":"visualization","_id":"logs-levels","_version":2,"_shards":{"total":2,"successful":1,"failed":0},"created":false}
Loading visualization memory-used:
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 1779 100 141 100 1638 639 7432 --:--:-- --:--:-- --:--:-- 7479
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
{"_index":".kibana","_type":"visualization","_id":"memory-used","_version":2,"_shards":{"total":2,"successful":1,"failed":0},"created":false}
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 1288 100 141 100 1147 930 7573 --:--:-- --:--:-- --:--:-- 7546
Loading dashboard ehcache-dashboard:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 833 100 143 100 690 650 3137 --:--:-- --:--:-- --:--:-- 3150
{"_index":".kibana","_type":"dashboard","_id":"ehcache-dashboard","_version":2,"_shards":{"total":2,"successful":1,"failed":0},"created":false}
Loading dashboard jhipster-simple-dashboard:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 764 100 151 100 613 1133 4602 --:--:-- --:--:-- --:--:-- 4643
{"_index":".kibana","_type":"dashboard","_id":"jhipster-simple-dashboard","_version":2,"_shards":{"total":2,"successful":1,"failed":0},"created":false}
Loading index pattern logstash-*:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
{"_index":".kibana","_type":"index-pattern","_id":"logstash-*","_version":2,"_shards":{"total":2,"successful":1,"failed":0},"created":false}
Configuring default settings
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 197 100 140 100 57 1061 432 --:--:-- --:--:-- --:--:-- 1068
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 194 100 128 100 66 1654 853 --:--:-- --:--:-- --:--:-- 1662
{"_index":".kibana","_type":"config","_id":"4.4.1","_version":2,"_shards":{"total":2,"successful":1,"failed":0},"created":false}Starting Kibana
{"type":"log","@timestamp":"2016-03-15T09:31:57+00:00","tags":["warning","config"],"pid":1,"key":"bundled_plugin_ids","val":["plugins/dashboard/index","plugins/discover/index","plugins/doc/index","plugins/kibana/index","plugins/markdown_vis/index","plugins/metric_vis/index","plugins/settings/index","plugins/table_vis/index","plugins/vis_types/index","plugins/visualize/index"],"message":"Settings for \"bundled_plugin_ids\" were not applied, check for spelling errors and ensure the plugin is loaded."}
{"type":"log","@timestamp":"2016-03-15T09:32:00+00:00","tags":["status","plugin:kibana","info"],"pid":1,"name":"plugin:kibana","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2016-03-15T09:32:00+00:00","tags":["status","plugin:elasticsearch","info"],"pid":1,"name":"plugin:elasticsearch","state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2016-03-15T09:32:12+00:00","tags":["status","plugin:timelion","info"],"pid":1,"name":"plugin:timelion","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2016-03-15T09:32:13+00:00","tags":["status","plugin:kbn_vislib_vis_types","info"],"pid":1,"name":"plugin:kbn_vislib_vis_types","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2016-03-15T09:32:13+00:00","tags":["status","plugin:markdown_vis","info"],"pid":1,"name":"plugin:markdown_vis","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2016-03-15T09:32:13+00:00","tags":["status","plugin:metric_vis","info"],"pid":1,"name":"plugin:metric_vis","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2016-03-15T09:32:13+00:00","tags":["status","plugin:spyModes","info"],"pid":1,"name":"plugin:spyModes","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2016-03-15T09:32:13+00:00","tags":["status","plugin:statusPage","info"],"pid":1,"name":"plugin:statusPage","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2016-03-15T09:32:13+00:00","tags":["status","plugin:table_vis","info"],"pid":1,"name":"plugin:table_vis","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2016-03-15T09:32:13+00:00","tags":["listening","info"],"pid":1,"message":"Server running at http://0.0.0.0:5601"}
{"type":"log","@timestamp":"2016-03-15T09:32:13+00:00","tags":["status","plugin:elasticsearch","info"],"pid":1,"name":"plugin:elasticsearch","state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2016-03-15T09:32:29+00:00","tags":["status","plugin:elasticsearch","error"],"pid":1,"name":"plugin:elasticsearch","state":"red","message":"Status changed from green to red - Request Timeout after 1500ms","prevState":"green","prevMsg":"Kibana index ready"}
{"type":"log","@timestamp":"2016-03-15T09:32:59+00:00","tags":["status","plugin:elasticsearch","info"],"pid":1,"name":"plugin:elasticsearch","state":"green","message":"Status changed from red to green - Kibana index ready","prevState":"red","prevMsg":"Request Timeout after 1500ms"}
{"type":"log","@timestamp":"2016-03-15T09:41:03+00:00","tags":["status","plugin:elasticsearch","error"],"pid":1,"name":"plugin:elasticsearch","state":"red","message":"Status changed from green to red - Request Timeout after 1500ms","prevState":"green","prevMsg":"Kibana index ready"}
{"type":"log","@timestamp":"2016-03-15T09:41:46+00:00","tags":["status","plugin:elasticsearch","info"],"pid":1,"name":"plugin:elasticsearch","state":"green","message":"Status changed from red to green - Kibana index ready","prevState":"red","prevMsg":"Request Timeout after 1500ms"}
{"type":"response","@timestamp":"2016-03-15T09:42:06+00:00","tags":[],"pid":1,"method":"get","statusCode":200,"req":{"url":"/","method":"get","headers":{"host":"192.168.99.100:5601","connection":"keep-alive","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8","upgrade-insecure-requests":"1","user-agent":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36","accept-encoding":"gzip, deflate, sdch","accept-language":"en-US,en;q=0.8","cookie":"NG_TRANSLATE_LANG_KEY=%22en%22"},"remoteAddress":"192.168.99.1","userAgent":"192.168.99.1"},"res":{"statusCode":200,"responseTime":131,"contentLength":9},"message":"GET / 200 131ms - 9.0B"}
{"type":"response","@timestamp":"2016-03-15T09:42:06+00:00","tags":[],"pid":1,"method":"get","statusCode":200,"req":{"url":"/app/kibana","method":"get","headers":{"host":"192.168.99.100:5601","connection":"keep-alive","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8","upgrade-insecure-requests":"1","user-agent":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36","referer":"http://192.168.99.100:5601/","accept-encoding":"gzip, deflate, sdch","accept-language":"en-US,en;q=0.8","cookie":"NG_TRANSLATE_LANG_KEY=%22en%22"},"remoteAddress":"192.168.99.1","userAgent":"192.168.99.1","referer":"http://192.168.99.100:5601/"},"res":{"statusCode":200,"responseTime":315,"contentLength":9},"message":"GET /app/kibana 200 315ms - 9.0B"}
{"type":"response","@timestamp":"2016-03-15T09:42:07+00:00","tags":[],"pid":1,"method":"get","statusCode":404,"req":{"url":"/favicon.ico","method":"get","headers":{"host":"192.168.99.100:5601","connection":"keep-alive","user-agent":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36","accept":"*/*","referer":"http://192.168.99.100:5601/app/kibana","accept-encoding":"gzip, deflate, sdch","accept-language":"en-
@cbornet wait a minute, do I have to enable anything manually anywhere?
Oh shit
```yaml
logging:
  logstash: # Forward logs to logstash over a socket, used by LoggingConfiguration
    enabled: false
```
Let me try after enabling it.
I enabled logging on all apps, but I still get the same dashboard and the same error on the logstash console:
```
{:timestamp=>"2016-03-15T10:58:37.279000+0000", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}
{:timestamp=>"2016-03-15T10:58:37.444000+0000", :message=>"UDP listener died", :exception=>#<IOError: closed stream>, :backtrace=>["org/jruby/RubyIO.java:3682:in `select'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-2.0.3/lib/logstash/inputs/udp.rb:77:in `udp_listener'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-2.0.3/lib/logstash/inputs/udp.rb:50:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.2-java/lib/logstash/pipeline.rb:331:in `inputworker'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.2-java/lib/logstash/pipeline.rb:325:in `start_input'"], :level=>:warn}
```
@moifort was mentioning something similar about the UDP connection failing in his setup.
Can someone try to generate the docker-compose setup on a fresh set of apps and see if the dashboard works for you?
@deepu105, we will do that today. Don't worry if it doesn't work for you. Are you launching everything with Docker? The problem is that apps launched with Docker must forward to elk-elasticsearch:9200, while apps launched with Maven must forward to localhost:9200. If you are not launching the apps with Docker, you need to change the central-config/application.yml that is overwriting your config...
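For the log-forwarding side of that config, a hedged sketch of the kind of override being described: the enabled key matches the snippet quoted earlier in the thread, while the host and port keys are assumptions about this JHipster version (port 5000 matches the compose file's UDP port).

```yaml
logging:
  logstash:
    enabled: true
    host: elk-logstash   # localhost instead when the app is launched with Maven
    port: 5000
```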
Thanks @PierreBesson, I'm launching everything with Docker.
Here is my config:
```json
{
  "generator-jhipster-docker-compose": {
    "appsFolders": [
      "gatewayApp",
      "msapp1",
      "msapp2"
    ],
    "appConfigs": [
      {
        "jhipsterVersion": "2.27.0",
        "baseName": "gatewayApp",
        "packageName": "com.mycompany.myapp",
        "packageFolder": "com/mycompany/myapp",
        "serverPort": "8080",
        "authenticationType": "jwt",
        "hibernateCache": "hazelcast",
        "clusteredHttpSession": "no",
        "websocket": "no",
        "databaseType": "sql",
        "devDatabaseType": "h2Disk",
        "prodDatabaseType": "mysql",
        "searchEngine": "no",
        "buildTool": "maven",
        "jwtSecretKey": "2e40e15cf037a0ece7f1e2b2d93846225079899d",
        "useSass": false,
        "applicationType": "gateway",
        "testFrameworks": [],
        "enableTranslation": true,
        "nativeLanguage": "en",
        "languages": [
          "en",
          "zh-cn"
        ]
      },
      {
        "jhipsterVersion": "2.27.0",
        "baseName": "msapp1",
        "packageName": "com.mycompany.myapp",
        "packageFolder": "com/mycompany/myapp",
        "serverPort": "8081",
        "authenticationType": "jwt",
        "hibernateCache": "hazelcast",
        "databaseType": "sql",
        "devDatabaseType": "h2Disk",
        "prodDatabaseType": "mysql",
        "searchEngine": "no",
        "buildTool": "maven",
        "jwtSecretKey": "2e40e15cf037a0ece7f1e2b2d93846225079899d",
        "enableTranslation": true,
        "applicationType": "microservice",
        "testFrameworks": [],
        "skipClient": true,
        "skipUserManagement": true,
        "nativeLanguage": "en",
        "languages": [
          "en",
          "zh-cn"
        ]
      },
      {
        "jhipsterVersion": "2.27.0",
        "baseName": "msapp2",
        "packageName": "com.mycompany.myapp",
        "packageFolder": "com/mycompany/myapp",
        "serverPort": "8082",
        "authenticationType": "jwt",
        "hibernateCache": "hazelcast",
        "databaseType": "sql",
        "devDatabaseType": "h2Disk",
        "prodDatabaseType": "mysql",
        "searchEngine": "no",
        "buildTool": "maven",
        "jwtSecretKey": "2e40e15cf037a0ece7f1e2b2d93846225079899d",
        "enableTranslation": true,
        "applicationType": "microservice",
        "testFrameworks": [],
        "skipClient": true,
        "skipUserManagement": true,
        "nativeLanguage": "en",
        "languages": [
          "en",
          "zh-cn"
        ]
      }
    ],
    "useElk": true,
    "profile": "dev",
    "jwtSecretKey": "2e40e15cf037a0ece7f1e2b2d93846225079899d"
  }
}
```
Guys, did anyone try out the above config?
I'm trying this @deepu105 :)
@deepu105 it works fine for me. I created a docker-compose-config directory beside the three other apps, ran the generator there, and launched the registry first, then the rest of the apps. The only issue I had is that the field names in the Kibana dashboards were incorrect: instead of instance_name.raw it was instance_name, but after changing this it works nicely.
Anything else you did? I mean, we have to enable logging to logstash in the gateway and app .yml files, right? Did you run in the dev profile or prod? Can you put the working setup in a git repo and share it so that I can try to run it as-is and see if it works?
Thanks & Regards, Deepu
I'm doing this
Thanks man, I'll try this out.
Thanks & Regards, Deepu
On Thu, Mar 17, 2016 at 11:07 PM, William Marques notifications@github.com wrote:
@deepu105 check if this works here: https://github.com/wmarques/jhipster-docker-compose-demo
OK, I tried this:
- I generated the app configs you gave, in the same folder structure
- I built the Docker images for those
- I generated the Docker files using the latest docker sub-generator
- I deployed them using docker-compose

But logstash still fails with the same missing-config error. Could this be a Windows-specific issue?
```
Error: No config files found: /config-dir/logstash.conf
Can you make sure this path is a logstash config file?
You may be interested in the '--configtest' flag which you can
use to validate logstash's configuration before you choose
to restart a running system.
```
@deepu105: What is the path of your project? Are you sure the volume is mounted? If not, as you are on Windows, can you try to put your projects under C:\Users\YourLogin\ please?
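If this is Docker Toolbox, a hedged way to verify the share from inside the docker-machine VM (the machine name default and the project path are placeholders):

```sh
# Only C:\Users is shared into the boot2docker VM by default,
# and it is visible there under /c/Users
docker-machine ssh default "ls /c/Users/YourLogin/your-project/log-monitoring/log-config"
```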
I've got a computer with Windows 7 at home now. I will be able to try it this weekend.
Let's track it in the main repo: https://github.com/jhipster/generator-jhipster/issues/3221
When doing `docker-compose up -d`, the elk_logstash_1 container fails with the below error.