tlaverdure / laravel-echo-server

Socket.io server for Laravel Echo
MIT License

Can't specify location of laravel-echo-server.lock to avoid "container_n exited with code 143" (can't run multiple Docker containers) #554

Open zmorris opened 3 years ago

zmorris commented 3 years ago

Describe the bug
On a vanilla Docker/Laravel setup, there are usually one or more containers sharing their web directory. Additional containers can be spun up as needed to scale on platforms like AWS. The problem is that each additional container tries to write laravel-echo-server.lock, which already exists, so it exits with code 143. The --force option would cause the lock to be deleted and recreated, which is undesirable (it defeats the purpose of the lock file).
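To illustrate the failure mode, here is a minimal shell sketch of exclusive lock-file creation in a shared directory (this is not laravel-echo-server's actual code; the paths and messages are made up for illustration):

```shell
#!/bin/sh
# Minimal sketch: two "containers" racing to create one lock file in a
# shared directory. noclobber (set -C) makes the redirect fail if the
# file already exists, so the second acquirer bails out -- roughly what
# happens to the second laravel-echo-server container.
LOCK=shared/laravel-echo-server.lock

acquire_lock() {
  if ( set -C; echo "$$" > "$LOCK" ) 2>/dev/null; then
    echo "lock acquired"
  else
    echo "lock already held, refusing to start" >&2
    return 1
  fi
}

mkdir -p shared
rm -f "$LOCK"
acquire_lock            # first container: succeeds
acquire_lock || true    # second container: fails, lock already exists
```

The second call fails precisely because both "containers" see the same directory, which is the situation a shared web root creates.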

To Reproduce
Steps to reproduce the behavior: run two or more Docker containers each running laravel-echo-server, and observe that the second one fails.

Expected behavior
We should be able to specify the location of laravel-echo-server.lock with a CLI argument such as --lockfile ../laravel-echo-server.lock, so that the lock file can be saved somewhere other than beside laravel-echo-server.json in the shared directory. That way each container can run individually and share its html files, but keep its own local laravel-echo-server.lock in a parent directory, a hardcoded path, or somewhere like /tmp.
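The proposed flag does not exist today; as a hypothetical sketch, a wrapper could resolve a configurable lock path like this (CONFIG_DIR and the fallback rule are assumptions for illustration, not existing options):

```shell
#!/bin/sh
# Hypothetical sketch of the proposed --lockfile behavior: take the lock
# path from the first argument, falling back to the directory that holds
# laravel-echo-server.json (today's hardcoded behavior).
CONFIG_DIR="/var/www/html"                              # assumed shared config dir
LOCKFILE="${1:-$CONFIG_DIR/laravel-echo-server.lock}"   # per-container override
echo "would write lock to: $LOCKFILE"
```

Invoked with an argument like /tmp/laravel-echo-server.lock, the lock would land in the container-local /tmp; with no argument it falls back to the shared directory, matching the current behavior.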

Additional context
This will happen to anyone who runs more than one container, each running laravel-echo-server, on AWS, Heroku, or Google Cloud. Creative solutions won't work here, since the easiest workflow is to create an image containing Laravel, laravel-echo-server, and Nginx, and then let the load balancer spin up additional containers as needed, directing requests across the new instances. Trying to run multiple containers but a single WebSocket server causes implementation details to spill over into the AWS infrastructure. This is a common problem with all libraries that use a lock file, so I'm hoping to find a workaround in the interim.
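One possible interim workaround, assuming the lock file is written next to the config the server is pointed at: an entrypoint copies the shared config into a container-local directory and starts the server from there, so each container creates its own lock. All paths here are assumptions, and the actual start command is left as a comment:

```shell
#!/bin/sh
# Hypothetical entrypoint: give each container a private copy of the
# config so the lock file is created locally, not in the shared web root.
SHARED=/tmp/shared-www    # stand-in for the shared web root (e.g. /var/www/html)
LOCAL=/tmp/echo-server    # container-local, not shared between containers

mkdir -p "$SHARED" "$LOCAL"
# Placeholder config so the sketch is self-contained; in a real container
# the shared directory would already hold the project's config.
[ -f "$SHARED/laravel-echo-server.json" ] || echo '{}' > "$SHARED/laravel-echo-server.json"

cp "$SHARED/laravel-echo-server.json" "$LOCAL/"
cd "$LOCAL"
echo "would start from: $LOCAL"
# exec laravel-echo-server start --dir "$LOCAL"  # lock would be created under $LOCAL
```

Whether this works depends on where the server actually writes its lock relative to the config, so it is a sketch to verify, not a confirmed fix.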