Closed: successtheman closed this issue 6 months ago.
Interesting. I don't believe the docker file is machine dependent. On top of that, you really don't use the ghcr.io file since you should be mounting a volume with the entire Cabernet source in it as stated in the instructions. So... I think you are "barking up the wrong tree", but I am not in any way an expert on docker.
Correct; as the instructions stated, I mount the volume with the Cabernet source.
When using the ghcr.io image, there seems to be some amount of lag in terms of loading streams and such. I am going to do some more testing today to see if maybe it was a fluke when I tried before.
After testing, I can confirm that there is definitely some lag when using the ghcr image on an RPi 4 (arm64), and this is due to it relying on qemu-user-static to emulate amd64 on arm64, so it seems the docker image is architecture dependent after all. I uninstalled qemu-user-static and the image no longer starts on the RPi, which is how I confirmed it. That said, you may be right that the app does not rely on the docker image itself, just on the source mounted into the container (which is probably why the eblabs image still works).
The source is machine independent; however, it does need python3 and ffmpeg binaries to run. These are provided externally by docker and should be associated with the arch you are using. The configuration settings will point to the ffmpeg binaries folder or assume it is picked up by the PATH env variable. See Dockerfile_tvh_crypt.alpine for more information on the installs.
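To illustrate that lookup order (explicit config setting first, then the PATH env variable), here is a minimal sketch; `resolve_ffmpeg` is a hypothetical helper, not Cabernet's actual code:

```python
import shutil

def resolve_ffmpeg(configured_path=None):
    """Return the ffmpeg binary to use.

    Hypothetical helper: prefer an explicitly configured path (as a
    Cabernet config setting would provide), otherwise fall back to
    whatever `ffmpeg` the PATH env variable resolves to.
    """
    if configured_path:
        return configured_path
    return shutil.which("ffmpeg")  # None if ffmpeg is not on PATH
```

For example, `resolve_ffmpeg("/usr/lib/ffmpeg/ffmpeg")` returns the configured path unchanged, so a per-arch binary location can always be pinned in the config.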
Thanks for pointing me in the right direction, I also found something called dedockify which allowed me to somewhat reverse engineer the eblabs image so I will probably end up using a combination of both of them. I just need to figure out how to specify the architecture in the image itself
I was about to create this issue.
I use GitLab CI to build my arm64 images; the same can be done with GitHub (but I never used it).
How to build multi-platform Docker images in GitHub Actions:
Example of my .gitlab-ci.yml:

```yaml
variables:
  DOCKER_HOST: tcp://docker:2375/

build:
  image: jonoh/docker-buildx-qemu
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    # Use docker-container driver to allow useful features (push/multi-platform)
    - docker buildx create --driver docker-container --use
    - docker buildx inspect --bootstrap
  script:
    - update-binfmts --enable # Important: ensures execution of other binary formats is enabled in the kernel
    - git clone --depth 1 https://github.com/cabernetwork/cabernet.git ./app
    - cd ./app
    - sed -i 's/--no-binary=cryptography //' Dockerfile_tvh_crypt.alpine
    - docker buildx build --platform linux/arm64 --pull -f Dockerfile_tvh_crypt.alpine -t "$CI_REGISTRY_IMAGE" --push .
```
Needed to remove `--no-binary=cryptography`, as that build would take more than 1h, the limit of GitLab (free) for CI builds.
Also, buildx can be used for multi-arch builds: `docker buildx build --platform linux/arm64,linux/arm/v7,local ...`
I'm running on Raspberry Pi 4, no issues so far
My yml file looks a lot different. https://github.com/cabernetwork/cabernet/blob/master/.github/workflows/docker-image.yml Someone else created it so I am no docker expert. It generally works.
GitlabCI and GitHub use different syntax, it's better to follow the examples/guides above.
Maybe ill try to create it, and create a PR
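For reference, a GitHub Actions equivalent might look roughly like the sketch below. This is an assumption-laden outline (action versions, branch names, and tags are illustrative, and it would need to be reconciled with the repo's existing docker-image.yml), but it uses the standard docker/setup-qemu, setup-buildx, and build-push actions:

```yaml
# Hypothetical workflow sketch -- adjust names/tags to the repo's conventions
name: docker-multiarch
on:
  push:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-qemu-action@v3    # registers binfmt handlers for arm64 emulation
      - uses: docker/setup-buildx-action@v3  # docker-container driver, like `buildx create --use`
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          file: Dockerfile_tvh_crypt.alpine
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ghcr.io/${{ github.repository }}:dev
```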
This is probably the best option, since Rocky and I are far less experienced with docker (and ARM specifically in this instance). I'm gonna look around and see if I can learn some stuff over the next few days and then maybe I'll be able to help you improve it after you submit the PR.
Does python cryptography really need to be built from source, or can we just use `pip install cryptography`?
It just takes a long time to build cryptography (1h+ per arch); using a prebuilt wheel takes only a few minutes.
cryptography is just a pip install
If Cabernet does not find the module installed and pip is available, it will auto-execute the pip install command to install the module.
cryptography is just a pip install
In most Linux distributions it is a simple pip install. But Alpine is different and 'pip install' on its own will fail. That is why the slim-buster dockerfile does a pip install and alpine ends up having to install gcc and other modules then has a long build to get it installed.
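To make the Alpine cost concrete, here is a sketch of the kind of build dependencies an alpine-based Dockerfile has typically needed before `pip install cryptography` can compile from source (package names are from Alpine's repos; the exact list in Dockerfile_tvh_crypt.alpine may differ):

```dockerfile
# Alpine/musl: binary wheels for cryptography historically weren't
# published, so pip falls back to a source build needing a toolchain.
FROM python:3.11-alpine
RUN apk add --no-cache gcc musl-dev libffi-dev openssl-dev cargo \
    && pip install cryptography

# Debian-based slim: a manylinux binary wheel is available, so the
# install is just a quick download -- no compilers required.
# FROM python:3.11-slim
# RUN pip install cryptography
```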
Test build without `--no-binary=cryptography`:
`docker run --rm -it -p 6077:6077 ghcr.io/generator/cabernet:latest`
You can check the workflows https://github.com/Generator/cabernet/actions/ Ill create a PullRequest after some more testing (version tags)
EDIT: archs: amd64 (x86_64) and arm64. Couldn't build for armv7.
Looks like you made some progress with it today. Also, I do wonder about those warnings for Node 12 and such; honestly, I got them for some other docker images I was testing in the past (it was a few years ago, though, so I have kind of forgotten about it at this point).
According to this post, we shouldn't be using alpine for python
https://pythonspeed.com/articles/alpine-docker-python/
- Make your builds much slower.
- Make your images bigger.
- Waste your time.
- On occasion, introduce obscure runtime bugs.
Trying to improve the docker images with multi-stage builds; however, they got bigger!
New test builds (alpine), for amd64 (x86_64) and arm64:
`docker run --rm -it -p 6077:6077 -p 5004:5004 --pull always ghcr.io/generator/cabernet:dev`
I'll test with python:slim later, and see if it improves performance and size.
Also from https://cryptography.io/en/latest/installation/#rust
If you are using Linux, then you should upgrade pip (in a virtual environment!) and attempt to install cryptography again before trying to install the Rust toolchain. On most Linux distributions, the latest version of pip will be able to install a binary wheel, so you won’t need a Rust toolchain.
So there's no need to build cryptography from source.
Trying to build cryptography from source reached the 6h job limit on arm64 (amd64 takes only a few minutes): https://github.com/Generator/cabernet/actions/runs/7277002858
@Generator just a note here: you can self-host workflow runners on a remote server to bypass GitHub build limits for testing. I have used this one for another test at some point: https://hub.docker.com/r/myoung34/github-runner
Pull Request posted, see README for instructions
Change the image to `ghcr.io/generator/cabernet:dev` for the last test build.
There are quite a few errors in the PR that will need to be fixed before it can be pushed.
@rocky4546 @Generator any update on this? I see the pull request is still sitting there and I'm eager to try out the native arm64 build
Edit: It actually appears it was merged today or late last night. I think the PR page was cached from when I checked it earlier last night, so it appeared to still be open.
I have more changes, but still needs testing. Additional container options for IP, auto-add plugins and more. I'll make a new PR when it's done.
Sounds great! looking forward to testing it
Thank you Generator for the work so far. I have pushed the changes into the dev branch and it did build into ghcr.io/cabernetwork/cabernet:dev. Interesting that the "latest" still pulls the master branch, but that is a good thing. Need to do some testing to see if there are any impacts to the app. The dev branch does seem to have updated correctly for testing.
I made some minor tweaks to the docker files. Tested on both Ubuntu and Windows; all worked very well. The only issue I am not sure about is when we have an upgrade for Cabernet that also requires an upgrade to the plugins. Cabernet currently does not handle this well when you try to upgrade Cabernet separately from the plugin upgrade. Still thinking about it.
One other note. The key.txt file is only used for encryption of text in the config.ini file such as passwords. In the past, we used this when accessing locast since it required a user to login. If we add a zap2it plugin, that too would require a password. Currently, there are no uses for the encryption, so it is not a big deal to have Cabernet just create a new file, if not present.
Only issue, not sure about, is when we have an upgrade for Cabernet that also requires an upgrade to the plugins.
So the plugins need to be upgraded with the docker image?
That could be done with an entrypoint script; I already have an auto-install for plugins for the next PR, but that would be a hacky solution (as the auto-install already is). The app itself should verify and update plugins on start.
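For illustration, the copy-if-absent logic such an entrypoint could use might look like the sketch below. The temp dirs and plugin name are demo stand-ins (a real entrypoint would operate on /app/plugins_ext and the mounted data volume):

```shell
#!/bin/sh
# Demo of auto-adding bundled plugins on container start without
# clobbering plugins the user already has on the mounted volume.
SRC=$(mktemp -d)   # stands in for the image's bundled plugins dir
DST=$(mktemp -d)   # stands in for the mounted data volume
mkdir -p "$SRC/provider_video_example"
touch "$SRC/provider_video_example/plugin.json"

for p in "$SRC"/*; do
  name=$(basename "$p")
  # install only if absent, so local modifications survive upgrades
  if [ ! -e "$DST/$name" ]; then
    cp -r "$p" "$DST/"
  fi
done
ls "$DST"
```

A real entrypoint would end with `exec "$@"` to hand off to the Cabernet process after the sync.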
The key.txt file is only used for encryption of text in the config.ini file such as passwords.
In that case, key.txt could live in /app/data/ and be symlinked to $HOME/key.txt, and the same for /app/plugins_ext (/app/data/plugins_ext). Then only a single volume, /app/data/, would be needed for all app data instead of three different volumes.
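A quick sketch of that symlink layout, with temp dirs standing in for /app/data and $HOME (the paths are demo stand-ins, not the container's real ones):

```shell
#!/bin/sh
# Demo: consolidate app data under one directory and symlink the
# legacy locations into it, so a single volume covers everything.
DATA=$(mktemp -d)       # stands in for /app/data (the single volume)
HOMEDIR=$(mktemp -d)    # stands in for $HOME
touch "$DATA/key.txt"
mkdir -p "$DATA/plugins_ext"

ln -s "$DATA/key.txt" "$HOMEDIR/key.txt"
ln -s "$DATA/plugins_ext" "$HOMEDIR/plugins_ext"
ls -l "$HOMEDIR"
```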
Auto-installing plugins on restart sounds like a winner. I will look into it and have it be part of this release. Also, found a bug for non-docker versions during the Cabernet upgrade. Will have that update today.
Question: Can the Cabernet version (found in the utils.py file) be somehow used so that the description in the ghcr.io contains the version. It is difficult to tell what version belongs to which upload.
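One way to do that is to have the workflow pull the version string out of utils.py and feed it to the image tag or OCI description label. A sketch of the extraction step, assuming the version is stored in a line like `VERSION = "0.9.14"` (the real format in utils.py may differ, so the regex would need adapting):

```python
import re

def extract_version(text):
    """Pull a version string out of utils.py-style content.

    Assumes a line of the form VERSION = "x.y.z" (single or double
    quotes); returns None if no such line is found.
    """
    m = re.search(r'VERSION\s*=\s*["\']([^"\']+)["\']', text)
    return m.group(1) if m else None

# Hypothetical utils.py content for demonstration
print(extract_version('VERSION = "0.9.14"\n'))  # -> 0.9.14
```

A CI step could run this (or an equivalent grep/sed one-liner) and pass the result to the build, e.g. `-t ghcr.io/...:$VERSION` or a `--label org.opencontainers.image.version=$VERSION`.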
FYI... Not sure what is causing it, but the number of Docker downloads per month is 246,000, while the number of downloads for the releases is under 500. It looks like some kind of upgrade check is running constantly and registering as a download; it really should only be running once a day. As we get more users, we could hit some kind of GitHub limit caused by users' docker upgrade setups.
I don't see any GitHub limitations for docker pulls, only for workflows, which is more than we need.
Plan for docker upgrading the version of Cabernet:
1) Detect that the version has changed since the previous start and run the patch upgrade on the ini and db files.
2) If there was a version change to Cabernet, also check for plugins to upgrade. If the upgrade is not compatible with the current version, it will abort the upgrade of that plugin.
3) If Cabernet is upgraded by Cabernet, the previous version value will be updated so that the version-change detection will not trigger.
This should also work when people go in and manually do a source overwrite.
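The three steps above can be sketched as follows; the function, the action strings, and the plugin-compatibility flags are all illustrative, not Cabernet's real upgrade API:

```python
def plan_upgrade(stored_version, current_version, plugins):
    """Return the upgrade actions to run at start.

    stored_version  -- version recorded at the previous start
    current_version -- version from utils.py
    plugins         -- {name: is_compatible_with_current_version}
    """
    if stored_version == current_version:
        return []                               # step 3 kept these in sync
    actions = ["patch ini/db files"]            # step 1
    for name, compatible in plugins.items():    # step 2
        actions.append(f"upgrade {name}" if compatible
                       else f"abort upgrade of {name} (incompatible)")
    actions.append("record current version")    # step 3
    return actions
```

Because it only compares the stored and current version strings, the same path fires when someone manually overwrites the source, which is the point made above.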
Could my issue here, which I recently encountered, regarding all my channel settings etc. not loading, be related to the changes for ARM compatibility?
EDIT: Confirmed it was related to the ARM compatibility changes; however, it was a very simple fix, shown here.
Would it be possible to edit the ghcr.io docker build to add arm64/armv7 support (for Raspberry Pi)? I have been using this build by eblabs, which has ARM support (https://hub.docker.com/r/eblabs/cabernet/tags), for a while now, and I think I may be having issues related to using an outdated docker image (about a year old now).
I would try to edit the image myself to add arm64/armv7 support, but I have no idea where to begin.