samtecspg / articulate

A platform for building conversational interfaces with intelligent agents (chatbots)
http://spg.ai/projects/articulate/
Apache License 2.0
598 stars 158 forks

Articulate 0.20 doesn't start with docker-compose up #558

Closed pksvv closed 5 years ago

pksvv commented 5 years ago

Issues

If you're having trouble with Articulate, we definitely want to help you. If you can provide a little bit of information, it will make our troubleshooting job easier.

Articulate 0.20 doesn't start with docker-compose up; localhost:3000 keeps loading indefinitely.

Please describe the issue you are having here and answer the question below.

docker-compose up log:

Creating 0202_elasticsearch_1 ... done
Creating 0202_ui_1            ... done
Creating 0202_rasa_1          ... done
Creating 0202_duckling_1      ... done
Creating 0202_redis_1         ... done
Creating 0202_api_1           ... done
Attaching to 0202_elasticsearch_1, 0202_duckling_1, 0202_rasa_1, 0202_ui_1, 0202_redis_1, 0202_api_1
elasticsearch_1  | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
duckling_1       | no port specified, defaulting to port 8000
rasa_1           | 2019-02-08 07:41:31+0000 [-] Log opened.
elasticsearch_1  | [2019-02-08T07:41:30,631][WARN ][o.e.c.l.LogConfigurator  ] [unknown] Some logging configurations have %marker but don't have %node_name. We will automatically add %node_name to the pattern to ease the migration for users who customize log4j2.properties but will stop this behavior in 7.0. You should manually replace `%node_name` with `[%node_name]%marker ` in these locations:
elasticsearch_1  |   /usr/share/elasticsearch/config/log4j2.properties
ui_1             | yarn run v1.13.0
duckling_1       | Listening on http://0.0.0.0:8000
redis_1          | 1:C 08 Feb 07:41:27.585 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1          | 1:C 08 Feb 07:41:27.586 # Redis version=4.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1          | 1:C 08 Feb 07:41:27.586 # Configuration loaded
rasa_1           | 2019-02-08 07:41:31+0000 [-] Site starting on 5000
elasticsearch_1  | [2019-02-08T07:41:31,554][INFO ][o.e.e.NodeEnvironment    ] [OOD6ARr] using [1] data paths, mounts [[/usr/share/elasticsearch/data (osxfs)]], net usable_space [353gb], net total_space [465.6gb], types [fuse.osxfs]
redis_1          | 1:M 08 Feb 07:41:27.589 * Running mode=standalone, port=6379.
redis_1          | 1:M 08 Feb 07:41:27.589 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
ui_1             | $ env-cmd .env node server
rasa_1           | 2019-02-08 07:41:31+0000 [-] Starting factory <twisted.web.server.Site object at 0x7fb1fd7cba58>
redis_1          | 1:M 08 Feb 07:41:27.589 # Server initialized
elasticsearch_1  | [2019-02-08T07:41:31,555][INFO ][o.e.e.NodeEnvironment    ] [OOD6ARr] heap size [494.9mb], compressed ordinary object pointers [true]
ui_1             | Server started ! ✓
elasticsearch_1  | [2019-02-08T07:41:31,639][INFO ][o.e.n.Node               ] [OOD6ARr] node name derived from node ID [OOD6ARrLRk2ZWpuJSUNIdg]; set [node.name] to override
ui_1             | 
ui_1             | Access URLs:
ui_1             | -----------------------------------
ui_1             | Localhost: http://localhost:3000
ui_1             |       LAN: http://172.21.0.4:3000
ui_1             | -----------------------------------
ui_1             | Press CTRL-C to stop
ui_1             |     
redis_1          | 1:M 08 Feb 07:41:27.590 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
elasticsearch_1  | [2019-02-08T07:41:31,639][INFO ][o.e.n.Node               ] [OOD6ARr] version[6.5.1], pid[1], build[default/tar/8c58350/2018-11-16T02:22:42.182257Z], OS[Linux/4.9.93-linuxkit-aufs/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.1/11.0.1+13]
redis_1          | 1:M 08 Feb 07:41:27.592 * DB loaded from append only file: 0.003 seconds
redis_1          | 1:M 08 Feb 07:41:27.592 * Ready to accept connections
ui_1             | webpack built 81cdbffc1702f9879db1 in 10608ms
elasticsearch_1  | [2019-02-08T07:41:31,639][INFO ][o.e.n.Node               ] [OOD6ARr] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.jUt8NH92, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Des.cgroups.hierarchy.override=/, -Xms512m, -Xmx512m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=tar]
elasticsearch_1  | [2019-02-08T07:41:33,486][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [aggs-matrix-stats]
elasticsearch_1  | [2019-02-08T07:41:33,486][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [analysis-common]
elasticsearch_1  | [2019-02-08T07:41:33,486][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [ingest-common]
elasticsearch_1  | [2019-02-08T07:41:33,486][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [lang-expression]
elasticsearch_1  | [2019-02-08T07:41:33,486][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [lang-mustache]
elasticsearch_1  | [2019-02-08T07:41:33,487][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [lang-painless]
elasticsearch_1  | [2019-02-08T07:41:33,487][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [mapper-extras]
elasticsearch_1  | [2019-02-08T07:41:33,487][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [parent-join]
elasticsearch_1  | [2019-02-08T07:41:33,487][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [percolator]
elasticsearch_1  | [2019-02-08T07:41:33,488][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [rank-eval]
elasticsearch_1  | [2019-02-08T07:41:33,488][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [reindex]
elasticsearch_1  | [2019-02-08T07:41:33,488][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [repository-url]
elasticsearch_1  | [2019-02-08T07:41:33,488][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [transport-netty4]
elasticsearch_1  | [2019-02-08T07:41:33,488][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [tribe]
elasticsearch_1  | [2019-02-08T07:41:33,488][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [x-pack-ccr]
elasticsearch_1  | [2019-02-08T07:41:33,488][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [x-pack-core]
elasticsearch_1  | [2019-02-08T07:41:33,489][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [x-pack-deprecation]
elasticsearch_1  | [2019-02-08T07:41:33,489][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [x-pack-graph]
elasticsearch_1  | [2019-02-08T07:41:33,489][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [x-pack-logstash]
elasticsearch_1  | [2019-02-08T07:41:33,489][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [x-pack-ml]
elasticsearch_1  | [2019-02-08T07:41:33,490][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [x-pack-monitoring]
elasticsearch_1  | [2019-02-08T07:41:33,490][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [x-pack-rollup]
elasticsearch_1  | [2019-02-08T07:41:33,490][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [x-pack-security]
elasticsearch_1  | [2019-02-08T07:41:33,490][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [x-pack-sql]
elasticsearch_1  | [2019-02-08T07:41:33,490][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [x-pack-upgrade]
elasticsearch_1  | [2019-02-08T07:41:33,490][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded module [x-pack-watcher]
elasticsearch_1  | [2019-02-08T07:41:33,491][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded plugin [ingest-geoip]
elasticsearch_1  | [2019-02-08T07:41:33,491][INFO ][o.e.p.PluginsService     ] [OOD6ARr] loaded plugin [ingest-user-agent]
elasticsearch_1  | [2019-02-08T07:41:36,953][INFO ][o.e.d.DiscoveryModule    ] [OOD6ARr] using discovery type [single-node] and host providers [settings]
elasticsearch_1  | [2019-02-08T07:41:37,592][INFO ][o.e.n.Node               ] [OOD6ARr] initialized
elasticsearch_1  | [2019-02-08T07:41:37,592][INFO ][o.e.n.Node               ] [OOD6ARr] starting ...
elasticsearch_1  | [2019-02-08T07:41:37,803][INFO ][o.e.t.TransportService   ] [OOD6ARr] publish_address {172.21.0.2:9300}, bound_addresses {0.0.0.0:9300}
elasticsearch_1  | [2019-02-08T07:41:37,886][INFO ][o.e.h.n.Netty4HttpServerTransport] [OOD6ARr] publish_address {172.21.0.2:9200}, bound_addresses {0.0.0.0:9200}
elasticsearch_1  | [2019-02-08T07:41:37,887][INFO ][o.e.n.Node               ] [OOD6ARr] started
elasticsearch_1  | [2019-02-08T07:41:38,252][INFO ][o.e.l.LicenseService     ] [OOD6ARr] license [2a21167d-123c-477d-bfb1-d3a11a7f5561] mode [basic] - valid
elasticsearch_1  | [2019-02-08T07:41:38,253][INFO ][o.e.g.GatewayService     ] [OOD6ARr] recovered [1] indices into cluster_state
elasticsearch_1  | [2019-02-08T07:41:38,951][INFO ][o.e.c.r.a.AllocationService] [OOD6ARr] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[document][4]] ...]).
api_1            | yarn run v1.13.0
api_1            | $ nodemon server --exec babel-node
api_1            | [nodemon] 1.18.4
api_1            | [nodemon] reading config ./package.json
api_1            | [nodemon] to restart at any time, enter `rs`
api_1            | [nodemon] or send SIGHUP to 18 to restart
api_1            | [nodemon] ignoring: ./node_modules/**/*
api_1            | [nodemon] watching: *.*
api_1            | [nodemon] watching extensions: js,mjs,json
api_1            | [nodemon] starting `babel-node server`
api_1            | [nodemon] spawning
api_1            | [nodemon] child pid: 30
api_1            | [nodemon] watching 282 files

(Edited by @milutz for mono-formatting)

@wrathagom


milutz commented 5 years ago

@gaurvipul I just did a fresh pull and checked out v0.20.0; my startup log looks very similar to yours, and nothing jumps out at me as a problem.

Could you capture a docker ps, copy it in here, and verify that your web browser is running on the same machine as your docker-compose up?

Your docker ps should look something like this:

% docker ps
CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS                    PORTS                              NAMES
8aae1988b495        samtecspg/articulate-api:0.20.0                       "yarn start"             17 minutes ago      Up 17 minutes             0.0.0.0:7500->7500/tcp             articulate_api_1
bd335d8e6d2d        redis:4.0.6-alpine                                    "docker-entrypoint.s…"   17 minutes ago      Up 17 minutes             0.0.0.0:6379->6379/tcp             articulate_redis_1
743d60db5387        samtecspg/articulate-rasa:0.20.0                      "./entrypoint.sh sta…"   17 minutes ago      Up 17 minutes             0.0.0.0:5000->5000/tcp             articulate_rasa_1
963285ed7805        samtecspg/articulate-ui:0.20.0                        "yarn start"             17 minutes ago      Up 17 minutes             0.0.0.0:3000->3000/tcp             articulate_ui_1
5c9393ba4d46        samtecspg/duckling:0.1.6.0                            "/bin/sh -c 'stack e…"   17 minutes ago      Up 17 minutes             0.0.0.0:8000->8000/tcp             articulate_duckling_1
5d96223d2b0b        docker.elastic.co/elasticsearch/elasticsearch:6.5.1   "/usr/local/bin/dock…"   17 minutes ago      Up 17 minutes (healthy)   0.0.0.0:9200->9200/tcp, 9300/tcp   articulate_elasticsearch_1
milutz commented 5 years ago

@gaurvipul Could you also try accessing the URL from another browser and/or with curl/wget to make sure this isn't some odd policy with your browser?

... and ... could you open up your JavaScript console and see if anything obvious is stuck?
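
For reference, a quick way to run that check from the command line on the same machine (a minimal sketch, assuming the default ports shown in the docker ps output above) would be:

curl -I http://localhost:3000
curl -v http://localhost:7500

If the UI answers on 3000 but requests to 7500 hang or are refused, the problem is more likely the API container or the network path to it than the browser itself.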

pksvv commented 5 years ago

> @gaurvipul Could you also try accessing the URL from another browser and/or with curl/wget to make sure this isn't some odd policy with your browser?
>
> ... and ... could you open up your JavaScript console and see if anything obvious is stuck?

@milutz

It looks like the application doesn't launch once I am logged into the VPN; I am able to launch it without the VPN connection.

It appears to be some CORS issue; however, it did not happen with 0.13.0.

Here is the dump:

workbox Welcome to Workbox!
webpack-internal:///./node_modules/webpack-hot-middleware/client.js?reload=true:92 [HMR] connected
20.0.0.0:7500/settings:1 Failed to load resource: the server responded with a status of 403 (Forbidden)
localhost/:1 Failed to load http://0.0.0.0:7500/settings: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is therefore not allowed access. The response had HTTP status code 403. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
20.0.0.0:7500/agent:1 Failed to load resource: the server responded with a status of 403 (Forbidden)
localhost/:1 Failed to load http://0.0.0.0:7500/agent: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is therefore not allowed access. The response had HTTP status code 403. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
DevTools failed to parse SourceMap: http://localhost:3000/static/app/4.14.0-48dcb435/workbox-v3.2.0/workbox-strategies.dev.js.map
DevTools failed to parse SourceMap: http://localhost:3000/static/app/4.14.0-48dcb435/workbox-v3.2.0/workbox-sw.js.map
DevTools failed to parse SourceMap: http://localhost:3000/static/app/4.14.0-48dcb435/workbox-v3.2.0/workbox-core.dev.js.map
DevTools failed to parse SourceMap: http://localhost:3000/static/app/4.14.0-48dcb435/workbox-v3.2.0/workbox-precaching.dev.js.map
DevTools failed to parse SourceMap: http://localhost:3000/static/app/4.14.0-48dcb435/workbox-v3.2.0/workbox-cache-expiration.dev.js.map
DevTools failed to parse SourceMap: http://localhost:3000/static/app/4.14.0-48dcb435/workbox-v3.2.0/workbox-routing.dev.js.map
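
One way to see what the API is actually returning for those blocked requests (a minimal sketch; it assumes the API is reachable on localhost:7500 and reuses the /settings endpoint from the errors above) is to replay one with curl and print only the response headers:

curl -s -D - -o /dev/null -H "Origin: http://localhost:3000" http://localhost:7500/settings

If no Access-Control-Allow-Origin header comes back, the browser will block the response exactly as the console shows.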
pksvv commented 5 years ago

> @gaurvipul I just did a fresh pull and checked out v0.20.0; my startup log looks very similar to yours, and nothing jumps out at me as a problem.
>
> Could you capture a docker ps, copy it in here, and verify that your web browser is running on the same machine as your docker-compose up?
>
> Your docker ps should look something like this:
>
> % docker ps
> CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS                    PORTS                              NAMES
> 8aae1988b495        samtecspg/articulate-api:0.20.0                       "yarn start"             17 minutes ago      Up 17 minutes             0.0.0.0:7500->7500/tcp             articulate_api_1
> bd335d8e6d2d        redis:4.0.6-alpine                                    "docker-entrypoint.s…"   17 minutes ago      Up 17 minutes             0.0.0.0:6379->6379/tcp             articulate_redis_1
> 743d60db5387        samtecspg/articulate-rasa:0.20.0                      "./entrypoint.sh sta…"   17 minutes ago      Up 17 minutes             0.0.0.0:5000->5000/tcp             articulate_rasa_1
> 963285ed7805        samtecspg/articulate-ui:0.20.0                        "yarn start"             17 minutes ago      Up 17 minutes             0.0.0.0:3000->3000/tcp             articulate_ui_1
> 5c9393ba4d46        samtecspg/duckling:0.1.6.0                            "/bin/sh -c 'stack e…"   17 minutes ago      Up 17 minutes             0.0.0.0:8000->8000/tcp             articulate_duckling_1
> 5d96223d2b0b        docker.elastic.co/elasticsearch/elasticsearch:6.5.1   "/usr/local/bin/dock…"   17 minutes ago      Up 17 minutes (healthy)   0.0.0.0:9200->9200/tcp, 9300/tcp   articulate_elasticsearch_1

No problem with the docker containers:

docker ps -a
CONTAINER ID        IMAGE                                                 COMMAND
1efa7de7665b        samtecspg/articulate-api:0.20.3                       "yarn start"
d3fb3240211c        samtecspg/articulate-rasa:0.20.3                      "./entrypoint.sh sta…"
5f6c8a11d63f        docker.elastic.co/elasticsearch/elasticsearch:6.5.1   "/usr/local/bin/dock…"
3a1387812a3a        samtecspg/articulate-ui:0.20.3                        "yarn start"
0f0b39b99eef        redis:4.0.6-alpine                                    "docker-entrypoint.s…"
245465fc97f0        samtecspg/duckling:0.1.6.0                            "/bin/sh -c 'stack e…"

wrathagom commented 5 years ago

@gaurvipul I am a little confused by the output, because two IPs are showing up for accessing the API: 0.0.0.0 and 20.0.0.0. Are you trying to run this locally on your Mac or on some server your company has? If it's locally on your Mac, then no VPN should be involved.

Can you try running the commands below:

docker-compose down
SWAGGER_HOST=localhost:7500 docker-compose up
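
If that works, one way to make it stick (a sketch, assuming docker-compose.yml substitutes ${SWAGGER_HOST} into the container environment, which isn't confirmed here) is to put the variable in a .env file next to docker-compose.yml, since docker-compose reads that file automatically:

# .env  (assumes docker-compose.yml references ${SWAGGER_HOST})
SWAGGER_HOST=localhost:7500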
pksvv commented 5 years ago

> @gaurvipul I am a little confused by the output, because two IPs are showing up for accessing the API: 0.0.0.0 and 20.0.0.0. Are you trying to run this locally on your Mac or on some server your company has? If it's locally on your Mac, then no VPN should be involved.
>
> Can you try running the commands below:
>
> docker-compose down
> SWAGGER_HOST=localhost:7500 docker-compose up

@wrathagom This has resolved the issue, thanks!

wrathagom commented 5 years ago

@malave FYI.