Hey! Thanks for opening this to follow up! I'm going to take a moment to look at everything here so I can respond. If you can, let's chat on Discord!
I had already planned to do a video on a bunch of these things this morning, so instead of saving it and waiting for the edited version, I decided to livestream it and include timestamps, because several of these pieces are relevant.
This video is not meant to be my primary response to your questions, but I hope that several pieces can help further our discussions. I'll still be going through and responding to each item in your report as needed until we tackle them all.
Here's the YouTube video: https://youtu.be/eKboXA0CBvo
The parts I hope you can spend a few minutes checking out start at 1:12:45. Specific timestamps for the demo/discussion of /dev/tty, and for using the CLI to reset user passwords and add users, are in the video description. But really I'd like your thoughts on anything after 1:20:00 to the end of the video (about the last 25 minutes). Those bits are especially relevant to systems like IOTstack and OctoFarm, since they discuss preconfiguration and decoupling USB devices.
Those are also the parts where I most need feedback from systems like those, to help me figure out what may need to change in the image, or what the missing documentation or features are.
I'll be responding to the specific issues you brought up over the next 48 hours or so. Some of them may be more than questions and might need to be logged as bugs or feature requests as we work out what differs between our systems, or why things work on my test farm that don't work on the systems you test on.
Sorry for the delay. I'm not ignoring you. Had other commitments today. Will get onto it.
Totally understand! That's how OSS works.
Spoiler alert: problem solved!
After watching your video, my first thought was that the explanation might lie in the fact that you were mapping port 80 to port 80 while I was mapping port 9980.
That was a blind alley but it helped me to discover the actual problem.
These two webcam suffixes I was trying in the earlier testing were wrong:
/webcam (from commit comment 51240522)
/webcam?action=stream (from commit comment 51240575)
The correct suffix is:
/webcam/?action=stream
I've been kicking myself because I already "knew" this. My own notes include:
Stream URL: /webcam/?action=stream
Snapshot URL: http://localhost:8080/?action=snapshot
Path to FFMPEG: /usr/bin/ffmpeg
I just didn't spot the missing "/" used throughout the earlier testing because it's a bit too subtle, or I'm a bit too blind, or one ant short of a picnic, or some combo of all three.
Anyway, with:
octoprint:
  container_name: octoprint
  image: octoprint/octoprint
  restart: unless-stopped
  environment:
  - TZ=Australia/Sydney
  - ENABLE_MJPG_STREAMER=true
  - MJPG_STREAMER_INPUT=-r 1152x648 -f 10
  - CAMERA_DEV=/dev/video0
  ports:
  - "9980:80"
  devices:
  - /dev/ttyAMA0:/dev/ttyACM0
  - /dev/video0:/dev/video0
  volumes:
  - ./volumes/octoprint:/octoprint
Test:
$ curl -G http://127.0.0.1:9980/webcam/?action=stream
--boundarydonotcross
Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.
Qapla!
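As a follow-up check that avoids the binary-output warning, the snapshot flavour of the same URL can be saved to a file and inspected (a sketch; /tmp/snapshot.jpg is just an arbitrary destination):

$ curl -o /tmp/snapshot.jpg "http://127.0.0.1:9980/webcam/?action=snapshot"
$ file /tmp/snapshot.jpg    # expect "JPEG image data"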
Summary:
- Only port 80 needs to be exposed and mapped (as you said).
- The internal configuration (Settings » Webcam & Timelapse) is:
  Stream URL: /webcam/?action=stream
  Snapshot URL: http://localhost:8080/?action=snapshot
  Path to FFMPEG: /usr/bin/ffmpeg
- URLs for reaching the camera from outside the container are:
  http://raspberrypi.local:9980/webcam/?action=stream
  http://raspberrypi.local:9980/webcam/?action=snapshot
I think it's reasonably clear that the password problems were down to just using the wrong commands. For IOTstack, I'm going to say words to the effect of "pick a username", then:
if you forget the username:
$ docker exec octoprint octoprint --basedir /octoprint/octoprint user list
if you forget the password:
$ docker exec octoprint octoprint --basedir /octoprint/octoprint user password --password «new password» «existing username»
$ docker-compose restart octoprint
Based on what you said in the video and my casual inspection of 71-octoprint.docker.rules, it looks to me like our approaches are sort of the inverse of each other.
You are attacking the problem by transmitting signals into the container to cause it to react, and giving it permission to do what needs to be done.
If it were me, rather than trying to shoehorn everything into the udev rule, I think I'd be defining a script which got added to the container at Dockerfile time, then just invoke that script from the udev rule along with any needed parameters.
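To make that concrete, a minimal sketch of the idea (the script name and rule match are illustrative, not taken from 71-octoprint.docker.rules):

# Dockerfile fragment: bake the reaction script into the image at build time
COPY printer-event.sh /usr/local/bin/printer-event
RUN chmod +x /usr/local/bin/printer-event

# udev rule on the host: forward the event and any parameters into the container
# (udev expects RUN tasks to be short-lived)
ACTION=="add", SUBSYSTEM=="tty", KERNEL=="ttyACM[0-9]*|ttyUSB[0-9]*", RUN+="/usr/bin/docker exec octoprint /usr/local/bin/printer-event add %k"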
I'm attacking it using complementary containers and activating the container that's appropriate to the situation.
Mine is messier than yours (four components outside the container) whereas yours only seems to have one (not counting the cgroup rule).
Mine is also a lot slower to react than yours appears to be. The time to tear down the "printer off" container, then bring up the "printer on" container, and wait for OctoPrint to be open for business is a significant chunk of a minute.
On the plus side, my approach turns the camera off when the printer isn't in use. Have you thought about how to implement that kind of feature?
I'll chew on this for a while longer but, for now, my plan is to stick with my own approach and "await developments".
Your explanation of the cgroup rule was very informative.
Based on my experiments and your video, I think it's reasonably clear that ttyAMA0:ttyAMA0 was never going to work because the right hand side needed to be ttyACM0. I think we can call that a typo.
A couple of things struck me as I watched your video. They are more Docker than OctoPrint. You probably know some or all of this but what you said as you went along made me think it might be worth making some notes and then writing it all down. Who knows, something might turn out to be useful.
First, the essential difference between docker and docker-compose is that the latter needs a compose file to work. That implies the former can run from anywhere.
In my case, all the action takes place in ~/IOTstack, so any docker-compose command either needs:
$ cd ~/IOTstack
$ docker-compose ...
or the path to the compose file needs to be passed on the command:
$ docker-compose -f ~/IOTstack/docker-compose.yml ...
I'm a fast typist but I hate typing and I also find the need to remember these command differences really boring. I've defined a bunch of aliases (Paraphraser/IOTstackAliases) and shell functions which, with appropriate adaptation, might help you too. Each alias is basically the result of me making one-too-many mistakes and thinking "never again".
For example, if I want to bring up the entire stack:
$ UP
If I want to bring up just one container:
$ UP octoprint
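As a rough sketch of how those might be defined (the real definitions live in Paraphraser/IOTstackAliases and differ in detail):

# hypothetical shell functions; compose file location assumed to be ~/IOTstack
UP()        { docker-compose -f ~/IOTstack/docker-compose.yml up -d "$@"; }
DOWN()      { docker-compose -f ~/IOTstack/docker-compose.yml down; }
RESTART()   { docker-compose -f ~/IOTstack/docker-compose.yml restart "$@"; }
TERMINATE() { docker-compose -f ~/IOTstack/docker-compose.yml rm --force --stop "$@"; }
RECREATE()  { TERMINATE "$@" && UP "$@"; }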
You said something about "history" (in the sense of repeating previous shell commands) in a way which seemed to imply you weren't sure whether it was there.
The answer is that it depends on whether the container has been recreated. A "restart", which I would do as:
$ RESTART octoprint
doesn't recreate the container so anything the container has written to its non-persistent storage layer will still be there, including bash history.
The same happens on a stop followed by a start. The container hasn't gone anywhere so everything is intact.
Recreating the container either needs a "down" of the entire stack:
$ DOWN
or a "stop" then a "remove" of the stopped container, which I do as:
$ TERMINATE octoprint
If I want to be 100% sure there's no carryover, I use:
$ RECREATE octoprint
which has the effect of a TERMINATE octoprint followed by an UP octoprint.
Also, if you make a material change to a container's service definition in docker-compose.yml and then "up" the container, it will have the effect of terminating the old container and bringing up a new container according to the new service definition. That implies the loss of non-persistent storage.
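For example, after editing the service definition:

$ cd ~/IOTstack
$ docker-compose up -d octoprint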
Getting back to docker vs docker-compose, there's no need to use the latter for executing things inside the container. These two are synonymous:
Using docker-compose:
$ cd ~/IOTstack
$ docker-compose exec octoprint bash
Using docker:
$ docker exec -it octoprint bash
The only real wrinkle is the -it ("interactive terminal") flag:
- docker-compose exec connects a TTY by default;
- docker exec does so only if you decide you're going to need it and pass the -it flag.
docker exec is also a fair bit faster than docker-compose exec, at least on my RPi3B+:
$ date;docker-compose exec octoprint bash -c "echo hello";date
Tue 01 Jun 2021 11:55:51 AM AEST
hello
Tue 01 Jun 2021 11:55:57 AM AEST
$ date;docker exec octoprint bash -c "echo hello";date
Tue 01 Jun 2021 11:56:13 AM AEST
hello
Tue 01 Jun 2021 11:56:13 AM AEST
Six seconds vs almost instantaneous? I'll take fast every time!
I haven't had the need to spend much time inside the OctoPrint container but, if I did, I'd add another alias:
alias OCTOPRINT_SHELL='docker exec -it octoprint bash'
At least on my Pi, that would be the only alias starting with capital O so pressing "O" then tab then return and, bingo, I'm inside the container.
Other "aliases" (shell functions) are DPS and DNET which call docker ps
with various options and filters. Both docker ps
and docker-compose ps
produce so much information that lines tend to wrap. I hate that. It gets in the way of comprehension.
DPS focuses on the question "is it running?". Without any argument it lists the whole stack. Arguments (which are treated as wildcards for matching purposes) can restrict the output:
$ DPS
NAMES CREATED STATUS
octoprint About an hour ago Up About an hour
portainer-ce 43 hours ago Up 43 hours
$ DPS oct
NAMES CREATED STATUS
octoprint About an hour ago Up About an hour
DNET focuses on "how does it communicate?"
$ DNET
NAMES PORTS
octoprint 0.0.0.0:9980->80/tcp, :::9980->80/tcp
portainer-ce 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 0.0.0.0:9002->9000/tcp, :::9002->9000/tcp
$ DNET oct
NAMES PORTS
octoprint 0.0.0.0:9980->80/tcp, :::9980->80/tcp
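For reference, a minimal sketch of how DPS and DNET might be written (the actual functions in IOTstackAliases are more elaborate):

# filter by name substring if an argument is supplied, otherwise list everything
DPS()  { docker ps --format "table {{.Names}}\t{{.RunningFor}}\t{{.Status}}" ${1:+--filter "name=$1"}; }
DNET() { docker ps --format "table {{.Names}}\t{{.Ports}}" ${1:+--filter "name=$1"}; }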
Another note I made as I watched your video is that you might find it useful to add the container_name: octoprint directive to your recommended service definition. We do this with all templates on IOTstack, as it mostly avoids having to deal with container IDs and improves the precision of commands.
If I'm going to follow a container's log, I generally shove it into a background task:
$ docker logs -f octoprint &
The task will die by itself if the container terminates or restarts, or you can clobber it (generally) with:
$ kill %1
When you were discussing:
Restart OctoPrint: s6-svc -r /var/run/s6/services/octoprint
you said there was no way for the container to restart the system, which I took to mean something like:
sudo reboot
sudo shutdown -h now
Generally, that's true. But, if executing external commands is a desirable feature for octoprint-docker, there is a way around it. See executing commands outside the node red container. I can't think of any reason why the approach I describe there for Node-RED would not work for octoprint-docker.
You could automate the key-generation on container first run but I don't think there's any way around manual interaction for the key exchange between the container and user (ie "pi"). However, once the keys have been exchanged, ssh inside the container can send any command it likes to user "pi" and that includes rebooting.
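To make that concrete, once the keys have been exchanged, something like this becomes possible (a sketch; it assumes an ssh client inside the container, the host answering to raspberrypi.local as used above, and Raspbian's default passwordless sudo for user "pi"):

$ docker exec octoprint ssh pi@raspberrypi.local sudo reboot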
Personally, I don't see any real need for the OctoPrint UI to be able to reboot or shutdown the Pi. It's just as easy to do it via ssh. This suggestion is more aimed at the possibility of other situations where it could be useful if OctoPrint running inside the container could cause things to happen outside the container.
I'm not a maintainer of IOTstack. I'm more what you would call an active contributor. With many opinions. That I'm not afraid to share.
But for the drill-down into the video stream, I would probably not have noticed this.
$ curl -I 127.0.0.1:9980 2>&1 | grep Clacks
X-Clacks-Overhead: GNU Terry Pratchett
I'm a Terry Pratchett fan. It made the whole journey worth the candle.
And my name isn't even Art.
References: Clacks and Arthur Carry.
Following on from your feedback above and your YouTube video stream, I have completely revised the gist.
Instead of swapping profiles, any change of external device is sensed by a udev rule (as before), and a reaction script inside the container is then called. I added a volume mapping of /dev:/host/dev:ro so that the container has visibility into the host's devices and can play "follow the leader", with the ability to control the device gained via a cgroup rule, as per your example.
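In compose terms, the additions look something like this (a sketch, not the exact gist contents; 166 is the major device number for USB ACM serial devices such as ttyACM0):

volumes:
- /dev:/host/dev:ro
device_cgroup_rules:
- 'c 166:* rmw'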
In my quest to have the camera follow the printer, I also found it necessary to alter the s6 mjpg-streamer run file to stop it from looping continuously when the camera hadn't actually been mapped by the reaction script.
The gist provides a choice of three udev rule sets:
While I agree that #1 gets the job done, I actually prefer #3.
I made the "camera follows the printer" support optional so that users could choose between never on, always on, or "follow the printer".
I'd welcome feedback and also your considered reaction to these changes. If you think the modifications to the s6 mjpg-streamer run file and/or the printerDidChange reaction script should be added to your repository so they are common to all builds, I'd be happy with that. Conversely, if you don't think that's appropriate, I'll continue on my current path of eventually incorporating this into the IOTstack template.
I believe the modifications you suggested to the s6 run script for mjpg-streamer are actually on the backlog, so I would welcome those contributions!
The rest of it sounds really awesome, and I'll look more closely this weekend. I also prefer a more human-readable device name, and was tinkering with trying to find a set of udev rules that would match any printer and map it to /dev/octoprinter on the host.
There are so many printers, though, that the task became somewhat onerous. If we can accomplish it, I'd still like to find a way to include that in this repo.
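For a single, known printer, a rule along these lines does the job (the USB vendor/product IDs here are examples only; it's the "match any printer" generalisation that's hard):

SUBSYSTEM=="tty", ATTRS{idVendor}=="2341", ATTRS{idProduct}=="0042", SYMLINK+="octoprinter"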
Nice work!
The mjpg_streamer run modifications are tracked in this related issue: https://github.com/OctoPrint/octoprint-docker/issues/163
Further to your suggestion/hint that a PR on this would be welcome, I thought I'd give it a whirl.
I forked the project, cloned the fork, set up a PR branch, etc.
Then, BEFORE I made any changes, I thought I would run through the CONTRIBUTING steps to establish a baseline so that I would know what a successful test looked like, after which I would make a change, re-test and complete the transaction.
The short version: I wasted an awful lot of time getting exactly and precisely nowhere.
The slightly longer version: I'm not going to do this again. Life's too short. Basically, while I do think it is entirely reasonable to predicate acceptance of a Pull Request on whether it builds on all architectures, I don't think it's reasonable to expect the contributor to do all the work, particularly when the documented test regime doesn't actually work on all platforms. It might work on yours but it definitely doesn't work on mine.
You might want to take a look at how AdGuardHome goes about it. For a practical example, go to pull/2898, find the area pictured below, and click the "View Details" button:
All of those tests were automatic, behind the scenes. All I had to do was twiddle my thumbs for the several hours it took to come back with a bunch of tick marks (after which I heaved a huge sigh of relief).
No muss, no fuss. I reckon that's how it should be done.
I freely admit that I haven't set up an automatic test process of this type. I have no idea of where to begin let alone any appreciation for the level of difficulty involved. I'm simply noting that the AdGuardHome people have done it, so it must be possible. I would also have thought that you, as the octoprint-docker guru, would have been able to place far greater reliance on test results coming from a system that you control, than you ever could on possibly wonky and suspect results coming from other people.
Still, that's just my 2¢.
Now some gory detail.
To summarise:
- make build and make up work on both macOS and Raspbian.
- make setup-multi-arch appears to work on Raspbian but errors on macOS.
- make test doesn't complete on either platform.
I normally prepare Pull Requests on a Mac so that's where I started. After cloning the fork, I was able to make build and make up, and connect to the GUI.
Then I tried:
$ make setup-multi-arch
docker run --privileged --rm tonistiigi/binfmt --install arm64,arm/v7,amd64
Unable to find image 'tonistiigi/binfmt:latest' locally
latest: Pulling from tonistiigi/binfmt
9e0174275344: Pull complete
f163282b5573: Pull complete
Digest: sha256:f52cfb6019e8c8d12b13093fd99c2979ab5631c99f7d8b46d10f899d0d56d6ab
Status: Downloaded newer image for tonistiigi/binfmt:latest
2021/06/18 12:35:56 installing: arm64 qemu-aarch64 already registered
2021/06/18 12:35:56 installing: v7 unsupported architecture: v7
2021/06/18 12:35:56 installing: amd64 cannot write to /proc/sys/fs/binfmt_misc/register: write /proc/sys/fs/binfmt_misc/register: no such file or directory
{
  "supported": [
    "linux/amd64",
    "linux/arm64",
    "linux/ppc64le",
    "linux/s390x",
    "linux/386",
    "linux/arm/v7",
    "linux/arm/v6"
  ],
  "emulators": [
    "qemu-aarch64",
    "qemu-arm",
    "qemu-mips64",
    "qemu-mips64el",
    "qemu-ppc64le",
    "qemu-riscv64",
    "qemu-s390x"
  ]
}
The Mac doesn't have a /proc directory so I have no idea what to make of that error message.
But I pressed on to make test, which barfed because I hadn't enabled experimental mode correctly. To be more precise, Docker Desktop wouldn't let me enable it.
A few days later I figured out why and made Docker Desktop behave, but I'll come back to that.
I switched my focus to a Raspberry Pi 4 (4GB RAM, USB-3 SSD) where I repeated make build and make up successfully.
A make test seemed to start OK but it took f-o-r-e-v-e-r. I was so amazed by the time it was taking that I started grabbing screen shots every hour or so. The last one I got was:
That's the better part of 6.5 hours. Also note what it is currently doing.
Sometime after that, "something happened". I don't know what it was. When I came back for a periodic check, the SSH terminal session had closed. I nosed around but the Pi had not rebooted. Docker was still running OK but there were no additional images or anything else I could find to suggest the make test had completed.
I started it again. The screen shot below is some 45 minutes later. You'll notice it went back to 62/69 (in the earlier shot it had been up to 68/69) and is still mucking about "copying files" to AMD64.
I gave it another hour. It still didn't seem to be making any progress so I gave it up as a bad joke and clobbered it.
After mulling it over for a day or so, I went back to the Mac to try to figure out why it wouldn't let me enable experimental mode.
It seemed to be a versioning issue and my guess is that, because I don't use Docker Desktop all that much (like months between launches), the "jump" it made from the version actually installed to the latest and greatest left one-too-many hurdles for the upgrade process to fix on the fly.
Having overcome that, I ignored the /proc error mentioned before and fired up make test. Now, keep in mind that this iMac ain't no Raspberry Pi: 2019 vintage, 3.8 GHz Core i9 (16 logical CPUs) with 32GB RAM and a 1TB SSD (about 2/3 free). It's running Mojave, so "tried and true" rather than living on the leading edge with Big Sur.
It ran for over 14 hours until I finally got bored and clobbered it. It never completed. It got to 68/69 and was stuck on that same "copying to AMD64".
Whatever.
Other people may be happy to play this game but it's not something I'm going to do again. As far as I'm concerned, I have the answer to the question I first thought of:
"are my changes better off in octoprint-docker or IOTstack?"
For me, IOTstack is the clear winner. It's aimed at the RPi so I don't have to worry about other architectures. I can easily test changes on the RPi, for the price of a local Dockerfile and a "build" that completes within a few seconds.
That's simple, reliable, effective and usable.
Some observations:
The statement "buildx enabled in the docker daemon (experimental features should be true)" has a lot of assumed knowledge that I didn't have. Given that, on the RPi, it boils down to:
$ echo '{ "experimental": true }' | sudo tee /etc/docker/daemon.json
$ sudo systemctl restart docker
I would've thought that might have made it into CONTRIBUTING.md, along with similar short hints (or at least URLs) for other platforms.
There is no explanation in CONTRIBUTING.md of what a successful make test looks like, no guidance on how long make test should take to complete, nor advice on how to tell when it might be stuck and how to make further progress.
I don't think users should ever be in the position of having to guess. To me it is axiomatic that, in order to be useful, a test tool must work first time, every time, and yield either a pass or fail. An unreliable test tool is, how can I put this in polite company: 🤬🤨
The main reason I've gone to the trouble of documenting my experience is so that you are aware that there are a lot of kinks in the test process. I hope something in this proves useful.
First, a disclaimer: I work on this project as the sole maintainer, and I get about 1-3 hours a month to dedicate to it.
Keep that in mind when giving feedback. While we all want professional product quality, when dealing with open source, it is somewhat of an expectation that users looking to contribute will be able to do some learning on their own when they are not familiar with the tools used.
That being said, yes... the CONTRIBUTING is absolutely out of date. It's really not even relevant anymore since we switched to GitHub actions.
I have a lot of documentation noted on the roadmap, and have been doing what I can to prioritize it effectively and balance my time.
Now for the part you're not going to like...
I've made every attempt I can to work with you in real-time. It's clear that you are new to docker, and have advised your users to do things that are really bad ideas, or downright security issues. I asked to chat with you and you basically told me "how dare you assume we're in the same time zone".
You've lectured me at every opportunity you have, instead of trying to chat with me as you're figuring things out (and it's clear you're learning, because as you yourself have mentioned you figured things out after poking around for a while).
Despite all that, I get stuff like this:
Other people may be happy to play this game but it's not something I'm going to do again. As far as I'm concerned, I have the answer to the question I first thought of:
"are my changes better off in octoprint-docker or IOTstack?"
For me, IOTstack is the clear winner. It's aimed at the RPi so I don't have to worry about other architectures. I can easily test changes on the RPi, for the price of a local Dockerfile and a "build" that completes within a few seconds.
Well... that's great. The whole point of docker is that it's not supposed to be tied to a specific OS or hardware. So I don't have the luxury of just saying "it works on RPi".
This is an open source product, not a business. We fully acknowledge things are missing... and that's why we asked for help in the first place. Instead of helping to make it better, you get angry and unpleasant because our doc doesn't teach you how to use one of the oldest and best-documented build systems in the world.
This is a product we've put our own blood, sweat, and tears into and offered for free. I even attempted to help you grow and improve your own piece of work, and you've been condescending and rude to me in pretty much every interaction.
The main reason I've gone to the trouble of documenting my experience is so that you are aware that there are a lot of kinks in the test process. I hope something in this proves useful.
There is nothing you have shared that is not already documented and on our roadmap. What you've been documenting is your own journey learning how docker works.
I wish you the best, but I also ask that you consider the nature and spirit of open source before you post. I know it's frustrating to work at something for so long, and that makes it easier to vent than to be kind and respectful.
We love contributions, we know things are missing, and we want to work with our users and others to fill those gaps... but that means we all need to share an attitude of learning and improvement.
@LongLiveCHIEF - thank you for taking the time to look through and comment on SensorsIot/IOTstack pull request #328.
I have taken all your suggestions seriously. My objective is to improve the overall quality of the user experience for IOTstack users who choose to install OctoPrint/octoprint-docker as one of their containers.
Unfortunately, I am having a lot of trouble getting things to work as per your comments. I don't think it's an IOTstack issue. I think it's more likely to be either some fundamental misunderstanding on my part, or something peculiar about my setup. However, I am going to need your help to sort it out.
To summarise the stage I am at right now:
- I can't get /webcam or /webcam?action=stream to work for external access to the camera feed.
- I can't get /dev/ttyAMA0:/dev/ttyAMA0 to work.
The rest of this issue is a blow-by-blow of my attempts thus far.
My setup is:
I only have the one printer so I lack the means to test other makes and models.
The starting point for what follows is the basic service definition in docker-compose.yml. This incorporates two of your recommendations:
- The omission of the port mapping (per commit comment 51240522)
- The change of device mapping (per commit comment 51240115) from:
to:
3D printer is off.
All non-default UDEV rules disabled.
Camera investigations
Bring up the container. The container appears to be happy (nothing of note in the logs). The camera LED has switched on.
Launch a browser and point to the Raspberry Pi on port 9980. The OctoPrint UI loads.
Clicking the "Control" tab shows the live feed from the camera.
Attempt to connect to the camera feed using your recommended approach of appending components to the base URL:
/webcam
/webcam?action=stream
The result in each case is a 404 error.
curl tests from the Pi running the container show that the basic URL gets a sensible response:
Test as per commit comment 51240522 - fails:
Test as per commit comment 51240575 - fails:
Repeat the same curl tests from another host:
To summarise:
I'm stuck.
Device investigations
Printer is off. Snapshot /dev:
Switch the printer on. Snapshot /dev:
Compare snapshots:
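One way of taking and comparing the snapshots (a sketch; the temp-file names are arbitrary):

$ ls -1 /dev > /tmp/dev-printer-off.txt    # with the printer off
$ ls -1 /dev > /tmp/dev-printer-on.txt     # with the printer on
$ diff /tmp/dev-printer-off.txt /tmp/dev-printer-on.txt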
Switch to web UI. Default settings shown in UI are:
Click Connect button. Response from UI:
Conclusions:
The printer shows up as ttyUSB0, which is only present in /dev when the printer is switched on.
Device investigations - alternative 1
Printer off.
Change the device mapping to:
Start the container. Container fails to start, reporting:
Turn the printer on.
Start the container. Container comes up and web UI is available.
Click "Connect" button in UI:
Conclusion: right-hand-side mapping to ttyAMA0 doesn't work.
Device investigations - alternative 2
Printer on.
Change device mapping to:
Start the container and connect to UI. Two things in UI are different from previous test:
Conclusion: right-hand-side mapping needs to be ttyACM0.
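For the record, the devices mapping that ultimately worked (see the resolved compose file near the top of this thread) is:

devices:
- /dev/ttyAMA0:/dev/ttyACM0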
Device investigations - summary
Bottom line: I can't get anything involving ttyAMA0 to work:
In my case at least, it seems that the right hand side has to be ttyACM0 and that, in turn, seems to have the side-effect of switching the UI from "Auto" to "/dev/ttyACM0" shortly after the UI launches.
In order to connect to the printer, the left hand side of the device mapping has to be some valid path to the printer device, whether that's:
None of those work if the printer is offline. I can't see any way of overcoming that problem, save for the "brute force" mechanism I documented at this gist.
I should probably make it clear that one of the goals agreed by the IOTstack users who are interested in the OctoPrint container is to have the service running 24×7, with the printer able to be switched on or off independently. The approach described in the gist does work (and not just for 3D printers) but I would prefer a solution that did not involve all the rigmarole involved in swapping between service definitions. The approach described in the gist also has the advantage of the camera only being enabled while the printer is switched on. That would be a nice-to-have feature of any improved solution.
The "if all else fails" and password reset
The purpose of the if all else fails documentation is to reassure IOTstack users that OctoPrint/octoprint-docker is a well-behaved container. In the IOTstack context, "well behaved" means that the container self-initialises its persistent storage area and doesn't need any presets or other help. Not all containers are well-behaved, so this is a positive and complimentary statement.
Nevertheless, explaining how to change passwords would be a useful addition to the documentation, so I thought I'd drill into that.
I did succeed but I had to use different commands.
I'm not sure whether what I'm about to relate is an OctoPrint/octoprint-docker issue, or an OctoPrint issue, or even an issue at all.
Starting position:
After the UI reloads, logout, then login again to confirm that "firstPassword" is in effect.
Run:
Use the mechanism in commit comment 51240342 to change "admin" password to "secondPassword":
Logout from UI. Login:
Conclusion: Password change has not come into effect.
Restart container and re-test. Same result. Still "firstPassword".
Conclusion: Password change has not come into effect.
Try an alternative command suggested by --help explorations:
Logout from UI. Login:
Conclusion: Password change has not come into effect.
Restart container and re-test:
Conclusion: the command syntax and sequence needed to change the admin password from the CLI is:
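$ docker exec octoprint octoprint --basedir /octoprint/octoprint user password --password «new password» «existing username»
$ docker-compose restart octoprint
(This is the same pair of commands recorded in the resolution near the top of this thread.)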
Two questions:
- Should the single user add command have worked? If yes, any idea why it did not work when I tried it?
- The user password command followed by a container restart does seem to work. Is there any reason why I should not document that as the recommended approach?