stevejenkins / pihole-cloudsync

Syncs blocklists, blacklists, and whitelists across multiple Pi-holes using a private GitHub repo

Possible to shrink gravity.db before pushing? #15

Closed: npburns224 closed this issue 3 years ago

npburns224 commented 4 years ago

My gravity.db file is just over 200 MB and, after spending the day setting up Git LFS on my devices, I’m now nearing my monthly data cap for the service.

Would it be possible to compare the gravity.db file currently in use with the one stored in the local repo and delete whatever is redundant, then push that shrunken file to the remote repo? Would the secondary piholes then be able to use that shrunken file to fill in the gaps of their own local gravity.db files?

subodhgupta commented 4 years ago

I modified Steve's script to use a SQL export of every table in the gravity database except the gravity table itself. That way the exported db is only a few KB, and the script then triggers a `pihole -g` on the secondary Pi-hole. I was trying to keep it simple, so I removed a bunch of things from the original script that I didn't need. Also, this is nowhere near as neat and polished as what Steve has written.

https://github.com/subodhgupta/pihole-cloudsync/blob/master/pihole-cloudsync-modified

The SQL scripts required are:

https://github.com/subodhgupta/pihole-cloudsync/blob/master/gravity_sync_pull.sql
https://github.com/subodhgupta/pihole-cloudsync/blob/master/gravity_sync_push.sql
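
For illustration, a minimal sketch of that idea (not the linked scripts themselves): emit INSERT statements for every table except `gravity`, assuming the stock Pi-hole 5 table names shown below.

```sh
#!/bin/sh
# Sketch: export all gravity.db tables except the bulky `gravity` table
# as INSERT statements. Table names assume a stock Pi-hole 5 schema.
DB=/etc/pihole/gravity.db
OUT=gravity_export.sql
: > "$OUT"
for TABLE in adlist domainlist client group \
             adlist_by_group domainlist_by_group client_by_group; do
    # .mode insert makes sqlite3 print rows as INSERT INTO <table> statements
    sqlite3 "$DB" ".mode insert $TABLE" "SELECT * FROM \"$TABLE\";" >> "$OUT"
done
# On the secondary, a pull would import these rows (after clearing the old
# ones) and then run `pihole -g` to rebuild the gravity table itself.
```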

jacklul commented 4 years ago

> I modified Steve's script to use a SQL export of every table in the gravity database except the gravity table itself. That way the exported db is only a few KB, and the script then triggers a `pihole -g` on the secondary Pi-hole. I was trying to keep it simple, so I removed a bunch of things from the original script that I didn't need. Also, this is nowhere near as neat and polished as what Steve has written.
>
> https://github.com/subodhgupta/pihole-cloudsync/blob/master/pihole-cloudsync-modified
>
> The SQL scripts required are:
>
> https://github.com/subodhgupta/pihole-cloudsync/blob/master/gravity_sync_pull.sql
> https://github.com/subodhgupta/pihole-cloudsync/blob/master/gravity_sync_push.sql

I don't think this backs up the whole database schema, including triggers? It might create some issues.

Your approach inspired me, and I did it a slightly different way that keeps the schema: https://github.com/jacklul/pihole-backup/blob/master/pihole-backup.sh#L54
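
One way to keep the full schema, as a rough sketch of the idea rather than the exact code in that script: dump the whole database (tables, indexes, triggers, and data) and strip only the huge gravity-table INSERTs:

```sh
# Sketch (assumes stock paths and a grep with -E): dump everything, then
# drop the millions of INSERT rows belonging to the gravity table itself.
sqlite3 /etc/pihole/gravity.db .dump \
    | grep -Ev '^INSERT INTO "?gravity"? ' > gravity-schema.sql
```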

In this scenario a pull would require:

pihole-FTL has to be shut down, because while the database has just been replaced and its gravity table is still empty, some domains would be let through...
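
Put together, the pull side might look something like this sketch (the file name `gravity-schema.sql` is just the example from above, and the service commands vary by install):

```sh
#!/bin/sh
# Sketch of a pull on the secondary. FTL is stopped first because the
# database briefly has an empty gravity table, which would let domains through.
sudo service pihole-FTL stop
sudo mv /etc/pihole/gravity.db /etc/pihole/gravity.db.bak   # keep a fallback
sudo sqlite3 /etc/pihole/gravity.db < gravity-schema.sql    # recreate schema + rows
pihole -g                                                   # repopulate the gravity table
sudo service pihole-FTL start
```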

subodhgupta commented 4 years ago

@jacklul

> I don't think this backs up the whole database schema, including triggers? It might create some issues.

Not sure I understand this. Unless you have manually created new triggers on gravity.db (which, granted, can be a use case, as seen here: https://discourse.pi-hole.net/t/group-management-question/32062/7), all triggers should already exist on the secondary Pi-hole. I only use the synced db file for inserting rows on the secondary Pi-hole, and it does not drop any objects.

Even with the use case described in the link above, if the changes are made on the primary Pi-hole, the script will migrate the rows created by the trigger, and you don't, at least in practice, have to migrate the trigger itself, assuming the designation of primary and secondary Pi-hole does not change and changes are always expected to flow from primary to secondary.
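
As a quick sanity check, you can compare the triggers on both Pi-holes directly and diff the output:

```sh
# List trigger names in gravity.db; run on both primary and secondary.
sqlite3 /etc/pihole/gravity.db \
  "SELECT name FROM sqlite_master WHERE type = 'trigger' ORDER BY name;"
```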

Subodh.

jacklul commented 4 years ago

> @jacklul
>
> > I don't think this backs up the whole database schema, including triggers? It might create some issues.
>
> Not sure I understand this. Unless you have manually created new triggers on gravity.db (which, granted, can be a use case, as seen here: https://discourse.pi-hole.net/t/group-management-question/32062/7), all triggers should already exist on the secondary Pi-hole. I only use the synced db file for inserting rows on the secondary Pi-hole, and it does not drop any objects.
>
> Even with the use case described in the link above, if the changes are made on the primary Pi-hole, the script will migrate the rows created by the trigger, and you don't, at least in practice, have to migrate the trigger itself, assuming the designation of primary and secondary Pi-hole does not change and changes are always expected to flow from primary to secondary.
>
> Subodh.

Oh, I thought that, in addition to this, a new database was created on the secondary!

stevejenkins commented 3 years ago

The new version no longer stores the database, so I'm closing this one out.