mcmikemn closed this issue 3 years ago
Nevermind. I tried it again this morning after changing nothing, and it worked.
Very odd, glad it resolved itself. Let me know if it happens again.
To answer your question, "Backup created: null" seems to have been generated by the HA CLI snapshot command:
slug=$(ha snapshots new --raw-json --name="${name}" | jq --raw-output '.data.slug')
echo "Backup created: ${slug}"
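A hedged sketch of how that step could guard against the symptom reported here (this is a hypothetical helper, not the add-on's actual run.sh): jq prints the literal string "null" when `.data.slug` is missing from the response, which is how a later step can end up trying to copy `null.tar`.

```shell
# Hypothetical guard, not the add-on's actual code: reject the literal
# "null" that jq emits when .data.slug is absent from the CLI response.
slug_is_valid() {
    # returns 0 only for a non-empty value other than "null"
    [ -n "$1" ] && [ "$1" != "null" ]
}

# usage, assuming the same `ha` CLI call as above:
# slug=$(ha snapshots new --raw-json --name="${name}" | jq --raw-output '.data.slug')
# slug_is_valid "${slug}" || { echo "Snapshot failed; aborting before SCP" >&2; exit 1; }
```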
I just updated the HA CLI version to solve an issue with snapshot commands timing out. The fix is included in 2021.5.1, so maybe that will help you out.
I'm now on 2021.5.2 and it's happening again: the backup file being created is null.
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 00-banner.sh: executing...
-----------------------------------------------------------
Add-on: Remote Backup
Automatically create and backup HA snapshots using SCP
-----------------------------------------------------------
Add-on version: 2021.5.2
You are running the latest version of this add-on.
System: Home Assistant OS 5.13 (aarch64 / raspberrypi4-64)
Home Assistant Core: 2021.6.0
Home Assistant Supervisor: 2021.05.4
-----------------------------------------------------------
Please, share the above information when looking for help
or support in, e.g., GitHub, forums or the Discord chat.
-----------------------------------------------------------
[cont-init.d] 00-banner.sh: exited 0.
[cont-init.d] 01-log-level.sh: executing...
[cont-init.d] 01-log-level.sh: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Adding SSH key
Creating local backup: "hassio-backup- 2021-06-02 20-34"
Backup created: null
Copying null.tar to /mypath on 192.168.1.8 using SCP
Warning: Permanently added '192.168.1.8' (ECDSA) to the list of known hosts.
null.tar: No such file or directory
[cmd] /run.sh exited 1
[cont-finish.d] executing container finish scripts...
[cont-finish.d] 99-message.sh: executing...
-----------------------------------------------------------
Oops! Something went wrong.
We are so sorry, but something went terribly wrong when
starting or running this add-on.
Be sure to check the log above, line by line, for hints.
-----------------------------------------------------------
[cont-finish.d] 99-message.sh: exited 0.
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
That's very strange; I cannot recreate it. Is your custom prefix still the same? What happens if you run it a second time?
I ran it 3 times before posting above; all failed. I ran it once more after posting; it failed too. Also, it's been set as an automation triggered to run nightly for the past couple of weeks and has only run twice.
However, I just ran it again and it successfully made a backup file this time, and successfully SCPed it, and successfully deleted all extra backups (more than the keep_local_backup amount).
Weird.
My custom_prefix has not changed: "hassio-backup-". (By the way, you might consider trimming the white space out of the file name, so the result would be "hassio-backup-2021-06-0220-52.tar" instead of "hassio-backup- 2021-06-02 20-52.tar".)
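For what it's worth, stripping the spaces is a one-liner. A sketch using POSIX `tr` (the add-on's actual naming code may build the name differently):

```shell
# Sketch of the suggested trim: delete all spaces from the generated
# snapshot name before using it as a file name.
name="hassio-backup- 2021-06-02 20-52"
trimmed=$(printf '%s' "${name}" | tr -d ' ')
echo "${trimmed}.tar"    # -> hassio-backup-2021-06-0220-52.tar
```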
I have been thinking of trimming out the whitespace and just making that part of the default prefix. I wonder whether changing your custom prefix to one word would make a difference.
Try downloading the SSH addon and running the following commands:
name="your-name-here"
ha snapshots new --raw-json --name="${name}" | jq --raw-output '.data.slug'
See if you get any errors, as it seems the `ha snapshots` command is returning null for you.
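To illustrate why a failed call shows up as "null", here is a sketch using canned responses instead of a live `ha` call (the error payload below is made up for the example):

```shell
# Canned examples (the error JSON is hypothetical): jq emits the literal
# string "null" when .data.slug is absent from the response, which is
# exactly the "Backup created: null" symptom in the log above.
ok='{"result":"ok","data":{"slug":"a1b2c3d4"}}'
err='{"result":"error","message":"some failure"}'
printf '%s\n' "${ok}"  | jq --raw-output '.data.slug'   # -> a1b2c3d4
printf '%s\n' "${err}" | jq --raw-output '.data.slug'   # -> null
```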
It worked fine. And the automation scheduled to run last night also worked fine. So odd, it's like it works fine whenever I'm watching it. :)
Are you running HA off an SD card? It could be dying, or it may not be fast enough depending on your setup; you need a minimum of a class 10 SD card for a Raspberry Pi to function correctly.
Also, do you have a proper power supply for the Pi?
I have seen on my Pis that, if they are not connected to a proper power supply, they sometimes still work, but over time things become corrupt, and CPU-intensive tasks like taking a snapshot may time out.
If neither of those things is the problem, I would recommend trying to repair the Supervisor with the command
ha su repair
Furthermore, check your database file; it may be too large, causing snapshots to take a long time and possibly time out. You may want to delete it and restart HA to see if that fixes anything.
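A quick way to check for that: a sketch of a size test (the database path below assumes the default HAOS `/config` layout; adjust if yours differs).

```shell
# Hypothetical size check; /config/home-assistant_v2.db is the default
# recorder database path on HAOS.
file_too_big() {
    # $1: path, $2: limit in MB; succeeds if the file exceeds the limit
    size_mb=$(( $(stat -c %s "$1" 2>/dev/null || echo 0) / 1024 / 1024 ))
    [ "${size_mb}" -gt "$2" ]
}

# usage:
# file_too_big /config/home-assistant_v2.db 500 && echo "DB over 500 MB; consider purging the recorder"
```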
I am running HA on a Pi off an SD card. It's a Pi 4 with the correct power supply, it's only about 2 months old, and the SD card is class 10.
I don't think this is a timing-out problem, as the "null" backup is created and the script fails in 1.5 seconds. I'll look at su repair.
That raises even more questions:
Is it created with a name?
How does it show in the UI?
Is it a complete backup?
If it is a complete backup, and the snapshot takes longer to finish than the backup script runs, that suggests it's the CLI timing out. They have been adjusting that over the past few releases, and it was considered fixed in CLI 4.12.2, which I released with 2021.5.1. I was having trouble with it myself, so I held back on 4.12 until 4.12.2 was released.
Is what created with a name? I'm calling it a "null" backup because it's supposed to be a backup file, but the log says it created "null", and indeed nothing gets created.
When the addon works, it does create a backup file, and the file does have a name (what file doesn't have a name?). I think it's a complete backup: in the Supervisor -> Snapshots GUI it's listed as a "Full Snapshot". Sometimes it takes a long time to create the backup, but those times it works. When the error happens, it happens almost instantly.
Here's my addon config:
ssh_enabled: true
friendly_name: true
custom_prefix: hassio-backup-
ssh_host: 192.168.1.8
ssh_port: 22
ssh_user: my-user
ssh_key: id_rsa
remote_directory: /my-path
zip_password: ''
keep_local_backup: '5'
rsync_enabled: false
rsync_host: ''
rsync_rootfolder: hassio-sync
rsync_user: ''
rsync_password: ''
rclone_enabled: false
rclone_copy: false
rclone_sync: false
rclone_restore: false
rclone_remote: ''
rclone_remote_directory: ''
But is the snapshot available for download instantly? Is it corrupt?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hi. I'm getting an error during startup, which is probably due to my misconfiguration, but I can't tell from the log:
I see
Backup created: null
and think that's what failed, but why? Where is the backup file supposed to be, so I can check whether it got created? Here's my config:
Let me know if I can provide any other info to help you help me. Thanks for any suggestions.