openwrt / asu

An image on demand server for OpenWrt based distributions
https://sysupgrade.openwrt.org
GNU General Public License v2.0

Instructions for running the ASU Server locally #525

Open supersebbo opened 1 year ago

supersebbo commented 1 year ago

Hi,

Does anyone have a resource with some better instructions for hosting the ASU server locally? There are some sparse instructions in the README but they are incomplete and don't work on a fresh Debian host.

Error:

podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 3.0.1
** excluding:  set()
['podman', 'inspect', '-t', 'image', '-f', '{{.Id}}', 'localhost/aparcar/asu:latest']
Error: error inspecting object: unable to find 'localhost/aparcar/asu:latest' in local storage: no such image
podman build -f ./Containerfile -t localhost/aparcar/asu:latest ./
STEP 1: FROM python:3.10-slim
Error: error creating build container: short-name "python:3.10-slim" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
exit code: 125
['podman', 'ps', '--filter', 'label=io.podman.compose.project=public', '-a', '--format', '{{ index .Labels "io.podman.compose.config-hash"}}']
podman volume inspect public_podman-sock || podman volume create public_podman-sock
['podman', 'volume', 'inspect', 'public_podman-sock']
['podman', 'network', 'exists', 'public_default']
Error: unrecognized command `podman network exists`
Try 'podman network --help' for more information.
['podman', 'network', 'create', '--label', 'io.podman.compose.project=public', '--label', 'com.docker.compose.project=public', 'public_default']
Error: the network name public_default is already used
Traceback (most recent call last):
  File "/home/seb/.local/lib/python3.9/site-packages/podman_compose.py", line 720, in assert_cnt_nets
    compose.podman.output([], "network", ["exists", net_name])
  File "/home/seb/.local/lib/python3.9/site-packages/podman_compose.py", line 1098, in output
    return subprocess.check_output(cmd_ls)
  File "/usr/lib/python3.9/subprocess.py", line 424, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/usr/lib/python3.9/subprocess.py", line 528, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['podman', 'network', 'exists', 'public_default']' returned non-zero exit status 125.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/seb/.local/bin/podman-compose", line 8, in <module>
    sys.exit(main())
  File "/home/seb/.local/lib/python3.9/site-packages/podman_compose.py", line 2941, in main
    podman_compose.run()
  File "/home/seb/.local/lib/python3.9/site-packages/podman_compose.py", line 1423, in run
    cmd(self, args)
  File "/home/seb/.local/lib/python3.9/site-packages/podman_compose.py", line 1754, in wrapped
    return func(*args, **kw)
  File "/home/seb/.local/lib/python3.9/site-packages/podman_compose.py", line 2067, in compose_up
    podman_args = container_to_args(compose, cnt, detached=args.detach)
  File "/home/seb/.local/lib/python3.9/site-packages/podman_compose.py", line 903, in container_to_args
    assert_cnt_nets(compose, cnt)
  File "/home/seb/.local/lib/python3.9/site-packages/podman_compose.py", line 761, in assert_cnt_nets
    compose.podman.output([], "network", args)
  File "/home/seb/.local/lib/python3.9/site-packages/podman_compose.py", line 1098, in output
    return subprocess.check_output(cmd_ls)
  File "/usr/lib/python3.9/subprocess.py", line 424, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/usr/lib/python3.9/subprocess.py", line 528, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['podman', 'network', 'create', '--label', 'io.podman.compose.project=public', '--label', 'com.docker.compose.project=public', 'public_default']' returned non-zero exit status 125.
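For anyone hitting the same short-name error: two common fixes (assumptions on my part, based on the build log showing the image comes from Docker Hub) are to fully qualify the base image in the Containerfile, or to define an unqualified-search registry system-wide. The first option is demonstrated here on a throwaway copy so it can be dry-run safely:

```shell
# Option 1: fully qualify the base image so no search registry is needed.
# Demonstrated on a copy of the FROM line rather than the real Containerfile:
printf 'FROM python:3.10-slim\n' > /tmp/Containerfile.demo
sed -i 's|^FROM python:3.10-slim|FROM docker.io/library/python:3.10-slim|' /tmp/Containerfile.demo
cat /tmp/Containerfile.demo

# Option 2: allow short-name resolution via Docker Hub (system-wide, needs root):
#   echo 'unqualified-search-registries = ["docker.io"]' | sudo tee -a /etc/containers/registries.conf
```

Option 1 is the less invasive choice, since it doesn't change registry behavior for every other image on the host.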
a-gave commented 1 year ago

Hi, I'll try to give a hint, because I set up this project just a few days ago and I recognise this error:

subprocess.CalledProcessError: Command '['podman', 'network', 'exists', 'public_default']' returned non-zero exit status 125.

It seems to me that you are trying the latest unreleased version of asu. Check out the tag v0.7.20 for a stable release (plain or Docker based). It also seems you could be running a Debian version < 12, and so not the latest podman version (3.0.1, as in the error log you posted), so the command podman network exists <network_name> does not exist yet.
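A quick way to check whether you're affected: podman network exists was added in podman 3.1 (to my knowledge; treat the exact minimum as an assumption). A portable version comparison with sort -V, using the two versions from this thread:

```shell
# 3.0.1 is the failing version from the error log; substitute the output of
# `podman --version` on your host.
minimum=3.1.0
installed=3.0.1
if [ "$(printf '%s\n' "$minimum" "$installed" | sort -V | head -n1)" = "$minimum" ]; then
  echo "network exists supported"
else
  echo "podman too old for 'network exists'"
fi
```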

$ podman -v
podman version 4.3.1
supersebbo commented 1 year ago

Thanks. So I checked out the previous version (it took me a while to realise this project is undergoing some major changes). Following the old README, I now have the local server running (non-docker, which is fine for me because this is purely for local use).

I am having some challenges testing, because ASU caches requests: if you get a build failure, the API responds with the same build failure even once you've fixed the underlying problem. Is there a quick and dirty way to flush the cache?

supersebbo commented 1 year ago

You have to laugh. I've spent hours on this because I needed to regularly build custom-sized images, only to discover it doesn't work: the imagebuilder code doesn't honor the ROOTFS_PARTSIZE parameter, which makes some of the code in this tool useless. Namely:

    # Check if custom rootfs size is requested
    if rootfs_size_mb := req.get("rootfs_size_mb"):
        job.meta["build_cmd"].append(f"ROOTFS_PARTSIZE={rootfs_size_mb}")

I noticed @aparcar has committed a change to imagebuilder to support this but looks like it won't be supported until 23.05. https://github.com/openwrt/openwrt/commit/7b7edd25a571568438c886529d3443054e02f55f#diff-d13d0140d2fb172af4d69a97e4ec0f6a227c579ea9023b408e826039ce96bbeb
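To make the quoted server snippet concrete, here is a runnable sketch of how that field flows from a build request into the imagebuilder command line. Only rootfs_size_mb appears in the quoted server code; the other payload fields and values are illustrative:

```python
# Hypothetical build-request payload; field names besides rootfs_size_mb are
# assumptions for illustration.
payload = {
    "version": "23.05.0",
    "target": "x86/64",
    "profile": "generic",
    "packages": ["luci"],
    "rootfs_size_mb": 256,
}

# The server code quoted above turns the field into an imagebuilder make
# variable appended to the build command:
build_cmd = []
if rootfs_size_mb := payload.get("rootfs_size_mb"):
    build_cmd.append(f"ROOTFS_PARTSIZE={rootfs_size_mb}")

print(build_cmd)  # ['ROOTFS_PARTSIZE=256']
```

Which is exactly why the linked imagebuilder commit matters: without support on the imagebuilder side, the ROOTFS_PARTSIZE=... argument is silently ignored.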

a-gave commented 1 year ago

I also experienced this, because it sometimes times out (after 10 minutes) between pulling the imagebuilder and building. You can remove the corresponding request_hash from where the firmware image should be stored, e.g. asu-service/public/store/<request_hash>, and from the Redis database, then request a new build.

rm -rf ./asu-service/public/store/90fd5fd04fc7cb35fbca81a22333ee19/ ;
redis-cli KEYS '*90fd5fd04fc7cb35fbca81a22333ee19' | xargs redis-cli DEL
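A side note: on a busy Redis instance, KEYS blocks the server while it walks the whole keyspace, so a SCAN-based variant is gentler. The real command is commented out below (it assumes a local Redis and the same hash pattern as above); the xargs plumbing is dry-run with echo standing in for redis-cli, and the key name in the dry run is made up:

```shell
# Real command (assumes local Redis):
#   redis-cli --scan --pattern '*90fd5fd04fc7cb35fbca81a22333ee19' | xargs -r redis-cli DEL
# Dry run of the same pipeline, with echo standing in for redis-cli:
printf '%s\n' 'req:90fd5fd04fc7cb35fbca81a22333ee19' | xargs -r echo DEL
```

The -r flag makes xargs skip running DEL entirely when the scan matches no keys.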

Or, as an uglier workaround via the web interface, provide a slightly different set of packages; for example, adding iperf3 to the packages list will generate a different hash.

> I noticed @aparcar has committed a change to imagebuilder to support this but looks like it won't be supported until 23.05. https://github.com/openwrt/openwrt/commit/7b7edd25a571568438c886529d3443054e02f55f#diff-d13d0140d2fb172af4d69a97e4ec0f6a227c579ea9023b408e826039ce96bbeb

Ok, and so there will be :)