Closed Paraphraser closed 4 years ago
@Paraphraser Thank you for the amazingly detailed feedback!
Regarding the update procedure: my concern with using 'git pull' was that it performs a local merge, so if anyone had modified any of the scripts there could be merge conflicts that would not be automatically resolved, and future additions could have unexpected behaviour as well. I will have to find a more practical method of updating the project.
I will have a look at the Pihole setup. I will try to replicate your results and see if I can find the issue in the configuration. I suspect the problem is here: the nature of docker-compose up and down is to delete the container, which gets repopulated each time it starts. My suspicion is that Pihole may not be saving the information in the volumes they specify in their GitHub example.
“Amazingly detailed feedback”? I think of it more as the courtesy of doing enough basic research to rise to a level of proof (as distinct from mere assertion or conjecture). My motivation is that I think IOTStack is a brilliant advance and I really want it to work, long term, set-and-forget.
I’m not sure the “if someone has modded a script” argument holds water. For example, I’ve modded the backup script (to “scp” the backup file to a local server rather than out into the cloud) but my approach was to make a copy of the script and stick it in ~/bin and it seems to me that’s a much better habit for everyone to get into rather than modding in situ.
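For what it's worth, the "copy, don't modify in situ" habit can be sketched like this (the stand-in script and all paths are hypothetical, purely to make the example self-contained):

```shell
# Hypothetical stand-in for an IOTstack script, so this sketch is self-contained
DEMO=$(mktemp -d)
mkdir -p "$DEMO/IOTstack/scripts" "$DEMO/bin"
printf '#!/bin/sh\necho "backup to cloud"\n' > "$DEMO/IOTstack/scripts/docker_backup.sh"

# The habit: copy the script out of the repo and modify the COPY, leaving the
# tracked original pristine so "git pull" never meets a locally-modified file
cp "$DEMO/IOTstack/scripts/docker_backup.sh" "$DEMO/bin/my_backup.sh"
chmod +x "$DEMO/bin/my_backup.sh"
sed -i 's/cloud/local server/' "$DEMO/bin/my_backup.sh"   # e.g. swap cloud for scp
"$DEMO/bin/my_backup.sh"                                  # prints: backup to local server
```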
Anyway, just my 2 cents...
I think changing it to git pull is the best option for now. I'm going to test a bare repo on my side and verify the workflow. Once I'm happy I'll update the wiki.
I did some testing locally and if you have uncommitted local files then they are unaffected by a git pull. It's only if you've done a local commit that git will have to do a conflict resolution (something the average user won't be doing anyway). I will add a writeup in the wiki on how to do a pull and what to do if your local files have changed. Thanks for that!
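That claim can be reproduced with throwaway local repositories (no network involved; all paths below are hypothetical, just for demonstration):

```shell
# "upstream" stands in for the IOTstack repository on GitHub
WORK=$(mktemp -d)
git -c init.defaultBranch=main init -q "$WORK/upstream"
cd "$WORK/upstream"
git config user.email demo@example.com
git config user.name demo
echo 'echo menu v1' > menu.sh
git add menu.sh && git commit -qm "initial"

# A user clones it, then upstream gains a new commit
git clone -q "$WORK/upstream" "$WORK/pi"
echo 'echo menu v2' > menu.sh
git commit -qam "upstream update"

# In the user's clone: an uncommitted local file survives the pull untouched
cd "$WORK/pi"
echo 'my local tweak' > mynotes.txt      # uncommitted, untracked
git -c pull.ff=only pull -q              # fast-forwards cleanly
grep -q v2 menu.sh && cat mynotes.txt    # prints: my local tweak
```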
I just tested adding an update option to the menu and doing a git pull from the script while the origin has additional commits, and it works. I was concerned that there would be an access conflict because menu.sh was in use, but it doesn't affect the pull.
I'll push an update later so that you can update straight from the menu.
Re: Pihole
I've modified my pihole setup to mimic your configuration and I'm not getting the same results as you. All my modifications are reflected in ~/IOTstack/volumes/pihole/etc-pihole/setupVars.conf, and I have done multiple downs and ups and the settings persist.
I've preliminarily added all the environment options to the env file; from here you can set the settings you are interested in. I still don't know why those settings get reset. Can you please verify that the settings are reflected in the setupVars.conf file:
#TZ=America/Chicago
WEBPASSWORD=pihole
#DNS1=8.8.8.8
#DNS2=8.8.4.4
#DNSSEC=false
#DNS_BOGUS_PRIV=True
#CONDITIONAL_FORWARDING=False
#CONDITIONAL_FORWARDING_IP=your_router_ip_here (only if CONDITIONAL_FORWARDING=true)
#CONDITIONAL_FORWARDING_DOMAIN=optional
#CONDITIONAL_FORWARDING_REVERSE=optional
#ServerIP=your_Pi's_IP_here << recommended
#ServerIPv6=your_Pi's_ipv6_here << required if using ipv6
#VIRTUAL_HOST=$ServerIP
#IPv6=True
INTERFACE=eth0
#DNSMASQ_LISTENING=local
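A quick way to do that verification is to grep setupVars.conf for the keys those env options map to. A small helper as a sketch (the default path is the IOTstack volume location assumed above; pass a different path if yours differs):

```shell
# Show which env-file settings actually landed in setupVars.conf.
# Default path matches the IOTstack volume layout; override via the argument.
check_setupvars() {
    CONF="${1:-$HOME/IOTstack/volumes/pihole/etc-pihole/setupVars.conf}"
    grep -E '^(WEBPASSWORD|PIHOLE_DNS_1|PIHOLE_DNS_2|PIHOLE_INTERFACE|DNSSEC)=' "$CONF"
}
```

For example, `check_setupvars` after a `docker-compose up -d` should show your DNS1 value as `PIHOLE_DNS_1=...`.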
Starting point (baseline & desired end point) is:
$ cat ~/IOTstack/services/pihole/pihole.env
TZ=Australia/Sydney
WEBPASSWORD=pihole
$ docker exec -it pihole bash
# grep -r "192.168.132.65#53" /etc/*
/etc/dnsmasq.d/01-pihole.conf:server=192.168.132.65#53
/etc/pihole/setupVars.conf:PIHOLE_DNS_1=192.168.132.65#53
/etc/pihole/setupVars.conf.update.bak:PIHOLE_DNS_1=192.168.132.65#53
Change the .env file to comment-out the TZ (ie force a container rebuild)
$ cat ~/IOTstack/services/pihole/pihole.env
# TZ=Australia/Sydney
WEBPASSWORD=pihole
$ docker-compose up -d
$ docker exec -it pihole bash
# grep -r "192.168.132.65#53" /etc/*
/etc/pihole/setupVars.conf.update.bak:PIHOLE_DNS_1=192.168.132.65#53
A visual inspection of Settings > DNS in the PiHole GUI showed a return to the "Google (ECS)" pair with the "Custom 1 (IPv4)" field empty.
Next, change the .env file to restore TZ and add a DNS1 entry:
$ cat ~/IOTstack/services/pihole/pihole.env
TZ=Australia/Sydney
WEBPASSWORD=pihole
DNS1=192.168.132.65#53
$ docker-compose up -d
$ docker exec -it pihole bash
# grep -r "192.168.132.65#53" /etc/*
/etc/dnsmasq.d/01-pihole.conf:server=192.168.132.65#53
/etc/pihole/setupVars.conf:PIHOLE_DNS_1=192.168.132.65#53
While the grep output looks the same as the first two lines of the baseline output, the PiHole GUI took a different view. What it showed was:
IPv4 checkbox pair of the "Google (ECS)" group:
"Custom 1 (IPv4)" set to "192.168.132.65#53".
So, very close, but definitely no cigar. Let's try explicitly telling PiHole that there is no DNS2:
$ cat ~/IOTstack/services/pihole/pihole.env
TZ=Australia/Sydney
WEBPASSWORD=pihole
DNS1=192.168.132.65#53
DNS2=no
$ docker-compose up -d
$ docker exec -it pihole bash
# grep -r "192.168.132.65#53" /etc/*
/etc/dnsmasq.d/01-pihole.conf:server=192.168.132.65#53
/etc/pihole/setupVars.conf:PIHOLE_DNS_1=192.168.132.65#53
/etc/pihole/setupVars.conf.update.bak:PIHOLE_DNS_1=192.168.132.65#53
Again, the first two lines of grep output look right but this time the GUI also reflects the baseline. Both Google (ECS) checkboxes are off and the "Custom 1 (IPv4)" field is populated correctly.
While this solution is reliable, effective and maintainable, it still suffers from the problem that GUI changes will be silently lost any time the container is rebuilt so, in that sense, it's probably sub-optimal. What do you think?
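One possible guard against the "silently lost" part, sketched below: snapshot setupVars.conf before recreating the container, then diff it afterwards so you at least notice what vanished. The setupVars.conf path in the usage note is the IOTstack volume location (an assumption); the helpers themselves just take file paths.

```shell
# Snapshot a conf file before a rebuild, then report whether it changed.
snapshot_conf() {   # usage: snapshot_conf <conf-file> <snapshot-file>
    cp "$1" "$2"
}
diff_conf() {       # usage: diff_conf <conf-file> <snapshot-file>
    if diff -u "$2" "$1" > /dev/null; then
        echo "no settings lost"
    else
        echo "WARNING: settings changed across the rebuild"
    fi
}
```

For example: `snapshot_conf ~/IOTstack/volumes/pihole/etc-pihole/setupVars.conf /tmp/setupVars.before` ahead of `docker-compose up -d`, then `diff_conf ~/IOTstack/volumes/pihole/etc-pihole/setupVars.conf /tmp/setupVars.before` afterwards.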
It is actually quite an interesting issue: on the one hand you can set the DNS setup in the env file, on the other the settings should be stored in the volume. The problem is that when you run docker-compose up, you overwrite all the settings in the volume.
I think for the average user, who will only be using pihole as a simple ad blocker, the volume option is best. However, for more advanced users the env file would be best.
I think I will revise the wiki entry with a few pictures showing the differences and the pitfalls.
I'm also beginning to wonder if I should change the instructions to not use docker-compose down and instead use docker-compose stop, which is less harsh on the containers. It will also preserve the containers' logs.
I will tell you something else that is also quite intriguing and related to this general topic of what is or isn't lost across PiHole container re-creation.
One of the things I did very early on (before I had fully focused on pihole.env as a means of transporting configuration information into a container) was to change my PiHole password by going into the container shell and typing:
pihole -a -p «my password here»
The password hash winds up in:
grep "WEBPASSWORD" ~/IOTstack/volumes/pihole/etc-pihole/setupVars.conf
WEBPASSWORD=«hash»
setupVars.conf is also one of the places where "DNS1" -> "Custom 1 (IPv4)" winds up.
We've just worked out that one of the "official" mechanisms (the Web GUI) for changing "Custom 1 (IPv4)" does not survive container re-creation.
Conversely, the password I set via another "official" mechanism (the CLI) has persisted through heaven alone knows how many container re-creations. I have never had to reset it.
I just conducted a test. I changed the password in pihole.env and re-created the container. There was no effect on the hash in setupVars.conf and, as you'd infer from that, the GUI password did not change.
I wondered what would happen if I cleared the password:
$ docker exec -it pihole bash
# pihole -a -p
Enter New Password (Blank for no password):
[✓] Password Removed
# exit
I also made a change to the password in pihole.env so that "docker-compose up -d" would re-create the container. The result was:
In short, the password in pihole.env had zero effect.
I wondered if a second forced re-creation would have any effect. I changed the password in pihole.env a second time and re-created the container. Hash field still null. GUI still "passwordless".
What if I edit setupVars.conf to remove the WEBPASSWORD= line and restart the container? The answer is that the value on the RHS of WEBPASSWORD= in pihole.env took effect, the GUI required login with that password, and setupVars.conf regained WEBPASSWORD=«hash».
Hmmm. Given that:
does that mean that any pihole.env value is a one-shot until setupVars.conf is hand-edited?
To which the answer is "you betcha!"
I suppose it's always possible that my initial use of "pihole -a -p" is responsible for password values in pihole.env being one-shot. If you are not able to replicate this behaviour then I'll shrug my shoulders and move on.
Of course, if you are able to replicate the behaviour then it follows that the Wiki will need some guidance on editing setupVars.conf to force a new pihole.env password to take effect (or, perhaps, forego pihole.env in favour of advising "pihole -a -p" to set/change/clear passwords).
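My reading of the behaviour above, expressed as a sketch — this is NOT Pi-hole's actual startup code, just a model of what the tests suggest: an existing WEBPASSWORD line in setupVars.conf (even one with an empty hash) always wins, and the env value only takes effect when the line is absent.

```shell
# Model of the inferred WEBPASSWORD precedence at container start.
apply_webpassword() {   # usage: apply_webpassword <setupVars.conf> <env-value>
    CONF="$1"; ENV_PW="$2"
    if grep -q '^WEBPASSWORD=' "$CONF"; then
        : # existing line (even an empty hash) wins; env value is ignored
    elif [ -n "$ENV_PW" ]; then
        echo "WEBPASSWORD=$(hash_pw "$ENV_PW")" >> "$CONF"
    fi
}
hash_pw() {             # stand-in for Pi-hole's real password hashing
    printf '%s' "$1" | sha256sum | awk '{print $1}'
}
```

Under this model, pihole.env passwords are indeed one-shot until setupVars.conf is hand-edited, which matches every test above.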
I'll try to take a crack at the env file settings in the morning. From what I understand, parameters in the env file set environment variables, similar to an "export" in your ~/.bashrc file
eg.
$ cat ~/.bashrc
# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
export GOPATH=$HOME/go
export PATH=/usr/local/go/bin:$PATH:$GOPATH/bin
You set the environment variable and Pihole might use those variables to create setupVars.conf if it does not exist. Once it does exist, it may or may not ignore the variables until such time as you make a substantial edit (like you did by editing the conf file).
I also suspect that this type of behaviour may vary from image to image, as each software distributor implements the "saving" of variables completely differently. Some may ignore a variable if their "conf" file exists, others may overwrite the conf file on every boot.
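The suspected "seed once" pattern can be modelled generically — a sketch, not any container's real entrypoint: environment variables are only consulted to create the conf file when it doesn't exist yet, after which the file wins.

```shell
# Model of a "seed the conf file once from env, then ignore env" entrypoint.
seed_conf() {           # usage: DNS1=... DNS2=... seed_conf <conf-file>
    CONF="$1"
    if [ ! -f "$CONF" ]; then
        {
            echo "PIHOLE_DNS_1=${DNS1:-8.8.8.8}"
            echo "PIHOLE_DNS_2=${DNS2:-8.8.4.4}"
        } > "$CONF"
    fi
}
```

Calling it once creates the file from the env values; calling it again with different env values changes nothing — exactly the "one-shot until you hand-edit the conf" behaviour observed above.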
This definitely warrants an explanation in the wiki on what persists and what doesn't, because it will catch someone out down the line.
I have come across another problem with PiHole. It seems to lose some of its configuration when the container is recreated.
I'm also now wondering about the Wiki advice on best practice for updating the project.
In PiHole > Settings > DNS tab, I configure like this:
"Upstream DNS Servers":
"Interface listening behavior":
Advanced DNS Settings:
Under "Conditional forwarding":
I have given chapter-and-verse for the whole tab but the discussion below really only concerns the fields mentioned under point 1 above.
Following the instructions in the Wiki, I did:
That reported no changes, so I proceeded to:
Didn't seem to do much. Next:
I selected the default services plus PiHole. For each and every overwrite option decision I chose "Pull full service from template".
Before running docker-compose, I compared a saved copy of the previous YAML file with the one the menu had just generated. The previous YAML file had one hand-edit: adding the "env_file" entry to the "nodered" grouping.
The "nodered" grouping in the new YAML file did not have the "env_file" entry so the project obviously wasn't up-to-date.
Some brute force required:
OK. Now we seem to be cooking.
Same service selections as before. Same "Pull full service from template" option for all services.
This time the YAML file comparison found no differences (ie the "env_file" entry had made it to the "nodered" grouping).
I added TZ= to both nodered.env & pihole.env and then:
We can see from this that PiHole got recreated (presumably because of the change to the .env file). When I went into PiHole's web GUI and nosed around, I found two discrepancies in the Settings > DNS tab:
The logical conclusion is that wherever this part of the PiHole's configuration is stored is not being preserved.
"docker-compose restart pihole" does not show this problem (and neither do explicit "stop" and "start" commands). I have not tried the "down" command because I do not understand the consequences of its options.
Summary