rskalla95 opened this issue 2 years ago
@rskalla95 the output for userData scripts lives in /var/log/cloud-init-output.log, so you can check there to see if your server startup script is running, and if so what errors it may be running into. Can you find that file and provide the output?
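For example, once you're connected to the instance (e.g. via Session Manager), something along these lines should show whether the user data script ran and where it stopped:

# show the end of the cloud-init user data output
sudo tail -n 200 /var/log/cloud-init-output.log
# or search the whole log for obvious failures
sudo grep -iE "error|fail" /var/log/cloud-init-output.log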
Also if you're looking to "start fresh", then the command to tear down your stack is npx cdk destroy
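Roughly, from the directory you deployed from (the one with cdk.json):

# tear the stack down completely...
npx cdk destroy
# ...then redeploy once the destroy has finished
npx cdk deploy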
The first time, everything worked fine but I couldn't connect to the server via Satisfactory
This should not happen and is most probably a network issue. In a fresh deployment, can you check if you can ping the server IP address?
@feydan had an issue like this and the problem was that the VPC subnet had network rules that were blocking connections.
I had him connect to the server when this happened -- the init script didn't run in his case. We had a similar issue with someone else from Reddit who tried to use the CDK script. In that other case, the init script was copied to the server but failed when it was invoked.
As @feydan said, we were able to log into the server via AWS Session Manager, copy the install script, and run it manually. I don't think the auto-shutdown script is working, but the rest is working as expected and we are getting great performance on the server.
You can check the auto-shutdown script with sudo systemctl status auto-shutdown. This is important, as the server is ~$62/month if it stays up the whole time. I found that you do need to close out Satisfactory, as it sometimes does ping the server in the main menu.
If the service is stopped, you can replace status with enable and then start to get it going.
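In other words, something along these lines:

# check whether the auto-shutdown service is running
sudo systemctl status auto-shutdown
# if it's stopped, enable it so it comes up on boot, then start it now
sudo systemctl enable auto-shutdown
sudo systemctl start auto-shutdown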
Result of sudo systemctl status auto-shutdown:
Loaded: loaded
Active: inactive (dead)
When I replace status with enable, I get the following error:
The unit files have no installation config (WantedBy=, RequiredBy=, Also=, Alias= settings in the [Install] section, and DefaultInstance= for template units). This means they are not meant to be enabled using systemctl.
I tried simply running sudo systemctl enable auto-shutdown, and now the service seems to be running!
Update: I rebooted the server and ran the status again, and it is once again inactive. Not sure how to set it to run on startup.
Having similar issues: the auto-shutdown and DuckDNS IP refresh scripts are not loading on instance launch.
If I manually start the services they work great, but that's not helpful if they don't launch themselves.
I see auto-shutdown in crontab and the DuckDNS script is in rc2.d, not sure what's going on.
Can one of you try adding this to the bottom of the auto-shutdown systemd entry located at /etc/systemd/system/auto-shutdown.service and then try sudo systemctl enable auto-shutdown? You'll likely need to run sudo systemctl start auto-shutdown to get it running after that. See if it gets started automatically after the next cycle.
[Install]
WantedBy=multi-user.target
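For reference, with that added the full unit would end up looking roughly like the sketch below; the [Unit] and [Service] contents are illustrative placeholders (keep whatever your existing file has), and only the [Install] section is the actual change. After editing the file, run sudo systemctl daemon-reload so enable/start pick up the change.

# /etc/systemd/system/auto-shutdown.service (sketch)
[Unit]
Description=Auto-shutdown watcher for the Satisfactory server

[Service]
# ExecStart path is a placeholder -- point it at wherever the install script put auto-shutdown.sh
ExecStart=/home/ubuntu/auto-shutdown.sh
Restart=always

[Install]
WantedBy=multi-user.target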
If that works I'll add it to the repo.
I don't know too much about duckdns -- haven't used it before.
I moved the commands into cron and they seem to be working now. I won't be able to mess around with the server for a few days, but I can try the proposed solution after the weekend.
Ok -- cron is generally not a good place for the auto-shutdown script, as it is meant to be long-lived. If cron invokes it multiple times, you will likely accumulate multiple instances of the script running. This may not necessarily cause an issue right away, but I imagine at some point it would.
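As a quick sanity check, something like this will show whether several copies have piled up:

# list any running instances of the auto-shutdown script (PID plus full command line)
pgrep -af auto-shutdown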
@feydan I added that to the end of auto-shutdown.service and didn't get any errors running enable and start. I stopped the instance and restarted it and ran systemctl status auto-shutdown and it is active and running. I'll check back in 30 minutes to verify that the server was shut down automatically. Thanks for the help!
awesome thanks for verifying @rskalla95 -- I added it to the repo: https://github.com/feydan/satisfactory-server-aws/commit/884af037282bfe3cdbea240f6a3680a40d15a160
@refactoredreality this might fix your auto shutdown service as well ^
I had him connect to the server when this happened -- the init script didn't run in his case. We had a similar issue with someone else from Reddit who tried to use the CDK script. In that other case, the init script was copied to the server but failed when it was invoked.
@feydan - Do you have any more information on how you resolved this issue? I think I'm having the same problem. My log file indicates the cloud_init script wasn't able to run.
Not sure if this was just a me problem, but my log had /bin/bash^M: bad interpreter. Changing the file formats of the two scripts to Unix fixed the issue.
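For anyone else hitting this, the conversion itself is quick, e.g. in a local clone before deploying (the paths assume both scripts live under server-hosting/scripts/ in the repo):

# strip Windows CRLF line endings from both scripts (dos2unix works too)
sed -i 's/\r$//' server-hosting/scripts/install.sh server-hosting/scripts/auto-shutdown.sh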
Not sure if this was just a me problem, but my log had /bin/bash^M: bad interpreter. Changing the file formats of the two scripts to Unix fixed the issue.
I have the same issue. I tried changing install.sh and auto-shutdown.sh to Unix line endings on my local machine before deploying (I assume @cmyager that's what you meant), but this didn't appear to make any difference for me. Probably I have misunderstood.
Edit: I solved my issue. I think the problem was that I had already deployed once, and the script within the private S3 bucket wasn't replaced when I re-deployed after editing the script files on my machine.
So my solution ultimately was to convert the copy of the script in the S3 bucket (something like a7... .sh) to Unix format and re-upload it manually to the same location with the same file name.
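In other words, something along these lines, where the bucket and object key are placeholders for whatever the CDK asset upload created in your account:

# download the CDK-uploaded copy of the script, fix its line endings, and put it back under the same key
aws s3 cp s3://<cdk-assets-bucket>/<asset-hash>.sh ./install.sh
sed -i 's/\r$//' ./install.sh
aws s3 cp ./install.sh s3://<cdk-assets-bucket>/<asset-hash>.sh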
Ah that makes sense. I'm not sure of the best way to solve this issue. As a manual workaround, this did work for someone I was chatting with on Reddit.
sudo su - (become root)
wget https://raw.githubusercontent.com/feydan/satisfactory-server-aws/main/server-hosting/scripts/install.sh (download the install script from GitHub)
chmod +x install.sh (make the install file executable)
./install.sh (run the install script)
It's not very sustainable since you'd have to do it again if you tear down your deployment and re-deploy.
Ah that makes sense. I'm not sure of the best way to solve this issue. As a manual workaround, this did work for someone I was chatting with on Reddit.
sudo su - (become root)
wget https://raw.githubusercontent.com/feydan/satisfactory-server-aws/main/server-hosting/scripts/install.sh (download the install script from GitHub)
chmod +x install.sh (make the install file executable)
./install.sh (run the install script)
It's not very sustainable since you'd have to do it again if you tear down your deployment and re-deploy.
This worked for me. Everything is working fine now, except for the auto backup of the save files. Going to dig into that issue next week.
Running the install script manually worked as @MikeConsemulder reported, with the exception of the auto backup. This is due to the crontab command not working:
crontab: usage error: only one operation permitted
This is the crontab command in question.
su - ubuntu -c "crontab -l -e ubuntu | { cat; echo \"*/5 * * * * /usr/local/bin/aws s3 sync /home/ubuntu/.config/Epic/FactoryGame/Saved/SaveGames/server s3://$S3_SAVE_BUCKET\"; } | crontab -"
Unfortunately, I am not skilled enough to know what the hell is going on in there, beyond that it is listing and editing the ubuntu user's crontab and piping in a command that syncs the save games folder to the S3 bucket for backups. I think the fact that the command is both listing and editing the ubuntu user's crontab might be the issue, since it's trying to do two separate operations?
@Abbrahan this is likely because the install script was expecting a parameter representing the bucket name that the save files get uploaded to (you can see it in the command above as the $S3_SAVE_BUCKET variable). You should be able to log in to your server, switch to the ubuntu user (su - ubuntu), edit crontab (crontab -e), and then replace $S3_SAVE_BUCKET with your bucket path (that was originally created by the cdk script).
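So, roughly, the entry you end up with in crontab -e should look like the line below, with the bucket name swapped for the one your stack created:

*/5 * * * * /usr/local/bin/aws s3 sync /home/ubuntu/.config/Epic/FactoryGame/Saved/SaveGames/server s3://<your-save-bucket>

Alternatively, a corrected form of the one-liner from the install script (dropping the conflicting -e operation and tolerating an empty crontab; the bucket name is again a placeholder) would be something like:

su - ubuntu -c "crontab -l 2>/dev/null | { cat; echo \"*/5 * * * * /usr/local/bin/aws s3 sync /home/ubuntu/.config/Epic/FactoryGame/Saved/SaveGames/server s3://<your-save-bucket>\"; } | crontab -"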
Appreciate all the help here - had similar issues with the install script not running and the auto shutdown not working (same cron config error). Manually ran the install script and then edited crontab with the bucket link (I hope I've set it right - they mentioned it was s3:// and then the bucket name)
Looks like the services all run perfectly now!
Not sure if this was just a me problem, but my log had /bin/bash^M: bad interpreter. Changing the file formats of the two scripts to Unix fixed the issue.
Ran into the same issue. Pretty sure it was the git "feature" on Windows that transforms all files to CRLF... So if you clone on a Windows machine, make sure to either disable autocrlf or manually change them back to LF before deploying.
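For anyone setting this up from Windows, either of these avoids the problem (the second assumes a Git Bash or WSL shell, and that the scripts live under server-hosting/scripts/):

# disable CRLF conversion globally before cloning
git config --global core.autocrlf false
# or, if you've already cloned, convert the scripts back to LF before deploying
sed -i 's/\r$//' server-hosting/scripts/*.sh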
I'm not sure exactly what the problem is, but I've tried 3 times now and can't seem to get things working. The first time, everything worked fine but I couldn't connect to the server via Satisfactory. I logged into the server via "EC2 Instance Connect", and the /home/ubuntu directory was completely empty. I thought maybe it had something to do with loading the script with Windows line endings, and I tried to account for that, attempting to re-install everything two more times. I still can't connect to the server via Satisfactory, and now EC2 Instance Connect isn't working either. I might have messed something up the second/third time, as I wasn't sure exactly how to "start fresh".