dsys / match

🔮 Scalable reverse image search built on Kubernetes and Elasticsearch

WORKER TIMEOUT on application start in AWS EC2 instance #36

Open dboterho opened 5 years ago

dboterho commented 5 years ago

Hello,

We have been trying to set up an instance of the application on an AWS EC2 instance. Everything seems to be set up correctly: the EC2 instance is running, we can telnet to the ports, the attached volume is OK, and the AWS Elasticsearch instance appears accessible and running. However, when we hit the /ping URL it times out, and in the Docker logs we can see the messages below:

[2018-12-04 16:50:33 +0000] [1946] [INFO] Booting worker with pid: 1946
[2018-12-04 16:50:33 +0000] [1947] [INFO] Booting worker with pid: 1947
[2018-12-04 16:50:33 +0000] [1950] [INFO] Booting worker with pid: 1950
[2018-12-04 16:50:33 +0000] [1954] [INFO] Booting worker with pid: 1954
[2018-12-04 16:51:33 +0000] [6] [CRITICAL] WORKER TIMEOUT (pid:1946)
[2018-12-04 16:51:33 +0000] [1946] [INFO] Worker exiting (pid: 1946)
[2018-12-04 16:51:33 +0000] [6] [CRITICAL] WORKER TIMEOUT (pid:1947)
[2018-12-04 16:51:33 +0000] [6] [CRITICAL] WORKER TIMEOUT (pid:1950)
[2018-12-04 16:51:33 +0000] [6] [CRITICAL] WORKER TIMEOUT (pid:1954)
[2018-12-04 16:51:33 +0000] [1954] [INFO] Worker exiting (pid: 1954)
[2018-12-04 16:51:33 +0000] [1947] [INFO] Worker exiting (pid: 1947)
[2018-12-04 16:51:33 +0000] [1950] [INFO] Worker exiting (pid: 1950)
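
For reference, the CRITICAL WORKER TIMEOUT lines are emitted by the gunicorn master process (pid 6) when a worker does not finish booting within its timeout window. As a sketch only, the timeout could be raised at container start; this assumes the dsys/match entrypoint launches gunicorn 19.7 or newer (so the GUNICORN_CMD_ARGS environment variable is honoured) and it omits the port, volume, and Elasticsearch settings you would normally pass:

```bash
# Sketch: raise gunicorn's worker timeout to 120s for the dsys/match container.
# Assumes the image's entrypoint runs gunicorn >= 19.7, which reads extra
# command-line arguments from the GUNICORN_CMD_ARGS environment variable.
# Ports, volumes, and Elasticsearch configuration are omitted here.
docker run -e GUNICORN_CMD_ARGS="--timeout 120" dsys/match
```

If the workers are actually stuck (for example waiting on an unreachable Elasticsearch endpoint), a longer timeout will only delay the same failure.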

We have ruled out memory and cache issues and tried using different versions. The issue appears to originate from the dsys/match image. Does anybody know what our issue is?

AMI: ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-20180912 (ami-07a3bd4944eb120a0)
EC2 instance type: t2.medium
Elasticsearch version: 6.3
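
For reference, the Elasticsearch connection can be sanity-checked from the instance with a plain cluster health request; the domain below is a placeholder for the actual AWS Elasticsearch endpoint:

```bash
# Placeholder endpoint; substitute your AWS Elasticsearch domain
ES_ENDPOINT="https://your-es-domain.us-east-1.es.amazonaws.com"

# Any JSON response with "status": "green" or "yellow" means the cluster
# is reachable and healthy enough to serve requests
curl -s "$ES_ENDPOINT/_cluster/health?pretty"
```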

Thanks

cucomans commented 5 years ago

Same problem here. Did you come up with a solution?

dboterho commented 5 years ago

@cucomans no, we didn't find a workaround and went with an alternative solution (which didn't include phash).

cucomans commented 5 years ago

It was working fine until last Monday and then, boom, this error started to occur.

lklic commented 3 years ago

I had the same problem. From the logs I was getting the following:

Elasticsearch: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Following this Stack Overflow question, I did the following: https://stackoverflow.com/questions/51445846/elasticsearch-max-virtual-memory-areas-vm-max-map-count-65530-is-too-low-inc

edited /etc/sysctl.conf and added vm.max_map_count = 262144 to the end of the file,

then ran sysctl -w vm.max_map_count=262144 and restarted Docker with systemctl restart docker.
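
In one place, the whole fix was roughly the following (on a systemd-based host like the Ubuntu 18.04 AMI mentioned above; 262144 is the minimum value Elasticsearch asks for):

```bash
# Check the current limit; Elasticsearch requires at least 262144
sysctl vm.max_map_count

# Raise it for the running kernel (takes effect immediately, lost on reboot)
sudo sysctl -w vm.max_map_count=262144

# Persist the setting across reboots
echo 'vm.max_map_count = 262144' | sudo tee -a /etc/sysctl.conf

# Restart Docker so the Elasticsearch container starts with the new limit
sudo systemctl restart docker
```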