TracksApp / tracks

Tracks is a GTD™ web application, built with Ruby on Rails
https://www.getontracks.org/
GNU General Public License v2.0

Tracks 2.6.1 Container won't start after it has run once #2844

Open · john2exonets opened this issue 1 year ago

john2exonets commented 1 year ago

You can start the Tracks 2.6.1 container just once. After that, it won't start, and a 'docker logs tracks' dump will show this error when you try to start it again:

=> Booting Puma
=> Rails 6.0.5.1 application starting in production
=> Run `rails server --help` for more startup options
A server is already running. Check /app/tmp/pids/server.pid.
Exiting
ZeiP commented 1 year ago

Which installation method and which commands are you using?

john2exonets commented 1 year ago

I am following the installation guide from this repo: https://github.com/TracksApp/tracks/blob/master/doc/installation.md

My DB install script:

docker run -d -p 3306:3306 --name tracks-db -e MYSQL_ROOT_PASSWORD=blank123 mariadb

My DB Setup script:

docker run --link tracks-db:db --rm -t -e "DATABASE_PASSWORD=blank123" -e "DATABASE_TYPE=mysql2" -e "DATABASE_PORT=3306" tracksapp/tracks:2.6.1 bin/rake db:reset --trace

My Tracks install script:

docker run -d -p 3000:3000 --name tracks --link tracks-db:db -t tracksapp/tracks:2.6.1
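
The failure then shows up on a plain stop/start cycle of that container (a sketch of the reproduction, assuming the container name tracks from the command above):

# Stop and start the same container.
docker stop tracks
docker start tracks
# docker start itself succeeds, but Puma refuses to boot over the stale pid file
# and exits immediately; the error shown above appears in the logs:
docker logs tracks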

cosmoneer commented 1 year ago

I'm having the same problem with a Docker container running on UNRAID 6.9.2, using the UNRAID Community Apps feature to install it. It appears to be using this repository: https://hub.docker.com/r/tracksapp/tracks

This is the error I get:

=> Booting Puma
=> Rails 6.0.4.6 application starting in production
=> Run `rails server --help` for more startup options
A server is already running. Check /app/tmp/pids/server.pid.

There is an installation note with the container, which reads:

NOTE: After installing, you must console into the container and run the following command to initialize the database first!

rake db:reset

After running that command, you should see the database has tables in it and the app should be usable at that point.

I executed this command successfully and had no issues with it. A reboot of the host server will restore functionality, but only as long as the container is not stopped.
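
For reference, a sketch of what that initialization looks like when run from the host (the container name tracks matches the earlier scripts; on UNRAID the name may differ):

# Run the database initialization inside the running container.
# Note: db:reset drops and recreates the database, so only use it on a fresh install.
docker exec -it tracks bin/rake db:reset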

blacktav commented 1 year ago

I am also experiencing this problem. I am using

The container can be created fine but after stopping and attempting to restart, the following is logged:

=> Booting Puma
=> Rails 6.0.5.1 application starting in production 
=> Run `rails server --help` for more startup options
A server is already running. Check /app/tmp/pids/server.pid.

Obviously I cannot reinitialise my database as @cosmoneer suggests. When the container is first started up, it logs this:

=> Booting Puma
=> Rails 6.0.5.1 application starting in production 
=> Run `rails server --help` for more startup options
Puma starting in single mode...
* Puma version: 6.0.0 (ruby 2.7.7-p221) ("Sunflower")
*  Min threads: 5
*  Max threads: 5
*  Environment: production
*          PID: 9
* Listening on http://0.0.0.0:3000
Use Ctrl-C to stop
blacktav commented 1 year ago

The container can be reliably rebuilt at any time using the existing database. It is just the start/stop cycle that does not function.

nrybowski commented 1 year ago

I had the same issue, with the /app folder mounted into the container as a volume from the host. Removing the file /app/tmp/pids/server.pid before restarting the container solves the issue.
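
For that bind-mount case, the workaround looks roughly like this (the host path /srv/tracks/app is a placeholder for whatever directory is mounted at /app):

# Delete the stale pid file on the host, then start the container again.
rm -f /srv/tracks/app/tmp/pids/server.pid
docker start tracks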

LeeThompson commented 2 months ago

Yup, this happens to me too. The container version should probably not check whether the server is running like this at all, especially since, in the default case, /app is mounted in a Docker-controlled volume, not a volume mapped to the host.

ZeiP commented 2 months ago

It indeed seems that for some reason the server doesn't remove the pid file when exiting. This is likely because of a failed exit; in a local environment this is easy to work around just by removing the tmp/pids/server.pid file before restarting, but we need to figure out why it's not exiting properly.

LeeThompson commented 2 months ago

Actually, in a container it's not that easy to remove, especially if Docker is running on an appliance (NAS, etc.). It may be best to have the container version not check the pid at all. Currently, when it sees the pid file is in place, the container exits with a fatal error, making it difficult (unless you're really familiar with Docker tools) to remove the pid from the volume, since you can't attach a terminal to the container (because it's no longer running).
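
If /app lives in a Docker-managed volume, one way to delete the file without attaching to the stopped container is a throwaway container that mounts the same volume (the volume name tracks_app is a placeholder; docker inspect tracks will show the real one):

# Mount the same volume into a disposable Alpine container and remove the stale pid.
docker run --rm -v tracks_app:/app alpine rm -f /app/tmp/pids/server.pid
docker start tracks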

(I ended up working around this by mounting the pid file on the host file system and writing a bash script to remove it (if present) on system startup.)
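
Something along these lines, run at host boot (the paths and names here are assumptions, not the exact script used):

#!/bin/bash
# Hypothetical boot-time cleanup: remove a leftover Puma pid before starting Tracks.
# /srv/tracks/app is a placeholder for the host path mounted at /app in the container.
PIDFILE=/srv/tracks/app/tmp/pids/server.pid
[ -f "$PIDFILE" ] && rm -f "$PIDFILE"
docker start tracks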

Mostly, this mechanism just makes it harder to start Tracks if there is an abnormal shutdown or if it doesn't clean up the pid.
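
A common mitigation in Rails container images in general (not necessarily how the Tracks image is built) is an entrypoint that clears any stale pid before handing off to the server; a minimal sketch:

#!/bin/bash
# Sketch of a docker-entrypoint.sh that guards against a stale Puma pid.
# Illustrative pattern only, not the actual Tracks entrypoint.
set -e
rm -f /app/tmp/pids/server.pid
exec "$@"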