The architecture names are the official ones used by gcc. It's a shame that Docker chose to deviate from them.
If Docker supports symbolic links, I can probably try to provide links from `amd64` to `x86_64` and from `arm64` to `aarch64`, if these are the only discrepancies.
How does Docker differentiate between armv7 soft-float and armv7 hard-float? Does it differentiate between the various flavors of 32-bit x86, such as i486 and i686? This is especially important when naming a toolchain.
> The architecture names are the official ones used by gcc.
It's a shame a lot of tools didn't adopt the same names. `go` also comes to mind.
> If Docker supports symbolic links
In this case, you're uploading a 'real' file to the GitHub release. That file can probably be a symlink created by your build system before it gets uploaded, but it will be 'real' on the Release page and Docker will treat it correctly. Or did you have something else in mind?
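For what it's worth, a minimal sketch of that idea, assuming a plain shell step in the build system before upload (tarball names taken from the Release page):

```sh
# Create alias names next to the real tarballs before uploading.
# Note: GitHub Releases stores every uploaded asset as an independent
# file, so these "symlinks" would end up as full duplicates server-side.
ln -sf s6-overlay-x86_64.tar.xz s6-overlay-amd64.tar.xz
ln -sf s6-overlay-aarch64.tar.xz s6-overlay-arm64.tar.xz
```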
> How does Docker differentiate between ...
Good question, and I'm not an arm expert nor a docker expert, so this explanation may be best left to someone else. When building for arm you can pass in a third value for the version, so you can also distinguish `arm/v6` vs `v7`, which I think is the difference between soft float and hard float in your question. As far as I'm aware, it only supports `386` as an architecture name for the 32-bit x86 variants, and that's all I've ever built for. My favorite four platforms are `arm64`, `arm` (`armhf`), `386` and `amd64`. This captures 99% of my users.
Examples (from here):
```
linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/arm64,
linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le,
linux/mips64, linux/arm/v7, linux/arm/v6
```
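For context, this is the kind of invocation that consumes those platform strings - a minimal sketch with a placeholder image name; Docker sets `TARGETARCH`/`TARGETVARIANT` for each listed platform during the build:

```sh
# Build one image for several platforms and push the multi-arch manifest.
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7,linux/386 \
  -t example/myimage:latest \
  --push .
```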
For Debian based containers I use a build script to determine the architecture:
```sh
apkArch=$(dpkg --print-architecture)
if [ "$apkArch" = "arm64" ]; then
    apkArch="aarch64"
fi
if [ "$apkArch" = "armel" ]; then
    apkArch="armhf"
fi
if [ "$apkArch" = "i386" ]; then
    apkArch="x86"
fi
```
The use of armhf for armel is a hack I use for the Raspberry Pi 1 and Zero, which Debian detects as armel since it only knows armv5 or armv7.
For Alpine I use `apkArch=$(apk --print-arch)` to determine the architecture, but it has other quirks: amd64 == x86_64.
I do not use Red Hat based containers but I suspect similar non-standard behavior.
> In this case, you're uploading a 'real' file to the GitHub release. That file can probably be a symlink created by your build system before it gets uploaded, but it will be 'real' on the Release page and Docker will treat it correctly. Or did you have something else in mind?
No, uploading duplicate toolchains to GitHub is precisely what I want to avoid. My question was whether GitHub Actions supports symlinking, i.e. providing the same artifact under a different name.
If it's not possible, then I'd rather stick to the current names; if it helps, I can add a paragraph to the documentation explaining which toolchain to use depending on the arch, and people can use a table lookup to automate this.
@synoniem - Thank you for the input, and I concur: everyone seems to do this differently. I've been maintaining applications for a few years now and I've had to concede that there's no one right way to name an architecture. It's hard to get folks to agree on what platform means. Application maintainers have to be ready for the varying configurations that our integrations are built upon.
@skarnet - I get the impression you've never built multi-architecture images using `TARGETARCH`. And that's fine, but your workaround seems like it involves some sort of script, where I'm just... running `docker buildx build`. It's just a matter of whether your project supports it or not, and it currently does not.
To realistically work around this, one must automate an action that downloads your release files, renames them for docker and re-uploads them elsewhere. Otherwise, the consumers of the files have to work around this with 'random' scripts they come up with to fix the names within their Dockerfile or as part of their build process (see two comments up for example).
I've never written a GH action that builds a Docker container, but I have clicked the button on a repo that adds it for me. And it just worked. No scripts involved. I've also built multi-arch images in Docker Cloud using build hooks and buildx. It's challenging, but I could certainly download these files as part of the build hooks and rename them before COPYing them into the container.
Thanks for your consideration and time.
Tagging @jprjr (who wrote the GH actions) in case he knows an easy way to do this. If not, sorry but I'm not going to spend much time on it.
I tracked down the full list of supported Docker architectures here: https://github.com/docker-library/bashbrew/blob/master/architecture/oci-platform.go
From one of the earlier examples - I'm not sure what the difference is between, say, `amd64` and `amd64/v2`, so I'm not going to worry about that.
I too prefer sticking with the gcc-based names, since they usually provide more detail on what's expected on the host system (i.e., Docker uses `386` to generally mean "32-bit Intel"; I have no idea which 32-bit processors will actually work with Docker. `i686` better communicates what it requires). But odds are, we could just upload the `i686` binaries and mark them as `386` and nobody would ever notice. Or `i486` if we want to be extra cautious.
Duplicating the binaries to comply with `TARGETARCH` wouldn't be too difficult to do - the main concern there is causing confusion, when somebody comes to the Releases page and sees `x86_64` and `amd64`; or `i486`, `i686`, and `386`, etc. But on the other hand we'd get to advertise `TARGETARCH` support, which would be pretty cool. I can go either way on this.
> I'm not sure what the difference is between say amd64 and amd64/v2 so I'm not going to worry about that.
higher baselines, i.e. sse4.2 min in that case, and so on
fwiw, the `/v2`, `/v3` notations do not show up in `$TARGETARCH`. It's just the second part of the platform: `386`, `arm`, `arm64`, `amd64`, etc. `$TARGETVARIANT` contains the `/v7` part.
- `BUILDPLATFORM` - matches the current machine (e.g. `linux/amd64`)
- `BUILDOS` - OS component of `BUILDPLATFORM`, e.g. `linux`
- `BUILDARCH` - e.g. `amd64`, `arm64`, `riscv64`
- `BUILDVARIANT` - used to set ARM variant, e.g. `v7`
- `TARGETPLATFORM` - the value set with the `--platform` flag on build
- `TARGETOS` - OS component from `--platform`, e.g. `linux`
- `TARGETARCH` - architecture from `--platform`, e.g. `arm64`
Just dropping here how we resolved this issue in case it is useful:
```dockerfile
# Add s6-overlay (See: https://github.com/just-containers/s6-overlay#s6-overlay-)
ARG S6_OVERLAY_VERSION="v3.1.3.0"
# TARGETARCH is only populated inside a stage once declared with ARG.
ARG TARGETARCH
RUN wget "https://github.com/just-containers/s6-overlay/releases/download/${S6_OVERLAY_VERSION}/s6-overlay-noarch.tar.xz" -O "/tmp/s6-overlay-noarch.tar.xz" && \
    tar -C / -Jxpf "/tmp/s6-overlay-noarch.tar.xz" && \
    rm -f "/tmp/s6-overlay-noarch.tar.xz"
# POSIX test uses a single '=' for string comparison.
RUN [ "${TARGETARCH}" = "arm64" ] && FILE="s6-overlay-aarch64.tar.xz" || FILE="s6-overlay-x86_64.tar.xz"; \
    wget "https://github.com/just-containers/s6-overlay/releases/download/${S6_OVERLAY_VERSION}/${FILE}" -O "/tmp/${FILE}" && \
    tar -C / -Jxpf "/tmp/${FILE}" && \
    rm -f "/tmp/${FILE}"
```
Here's what I plan to add to the README file with the next release:
The `${arch}` part in the `s6-overlay-${arch}.tar.xz` tarball uses the naming conventions of gcc, which are not the ones that Docker uses. (Everyone does something different in this field depending on their needs, and no solution is better than any other, but the Docker one is worse than others because its naming is inconsistent. The gcc convention is better for us because it simplifies our builds greatly and makes them more maintainable.)
The following table should help you find the right tarball for you if you're using the TARGETARCH value provided by Docker:
| ${TARGETARCH} | ${arch} | Notes |
|---|---|---|
| amd64 | x86_64 | |
| arm64 | aarch64 | |
| arm/v7 | arm | armv7 with soft-float |
| arm/v6 | armhf | Raspberry Pi 1 |
| 386 | i686 | i486 for very old hw |
| riscv64 | riscv64 | |
| s390x | s390x | |
If you need another architecture, ask us and we'll try to make a toolchain for it. In particular, we know that armv7 is a mess and needs a flurry of options depending on your precise target (and this is one of the reasons why the Docker naming system isn't good, although arguably the gcc naming system isn't much better on that aspect).
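For illustration, here is how such a table lookup could be automated - a minimal sketch, assuming Docker's `TARGETARCH`/`TARGETVARIANT` build args and a hypothetical `S6_OVERLAY_VERSION` variable set elsewhere:

```sh
# Map Docker's TARGETARCH(/TARGETVARIANT) to the gcc-style ${arch} above.
case "${TARGETARCH}${TARGETVARIANT:+/${TARGETVARIANT}}" in
  amd64)   arch=x86_64  ;;
  arm64)   arch=aarch64 ;;
  arm/v7)  arch=arm     ;;
  arm/v6)  arch=armhf   ;;
  386)     arch=i686    ;;
  riscv64) arch=riscv64 ;;
  s390x)   arch=s390x   ;;
  *) echo "unsupported platform: ${TARGETARCH}/${TARGETVARIANT}" >&2; exit 1 ;;
esac
wget "https://github.com/just-containers/s6-overlay/releases/download/${S6_OVERLAY_VERSION}/s6-overlay-${arch}.tar.xz"
```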
What do you think? Would that help?
I don't know much about this project, to be honest. When I found it, I thought it was focused on providing a first-class process supervision product for Docker containers. After this interaction, and reviewing a dozen other issues and pull requests on the repo, I conclude that I'd made the wrong assumption.
A markdown table doesn't make TARGETARCH work. Not without the hack @komapa graciously provided. If you're cool with that code being required to use this project in a multi-architecture environment, then this issue is closable. Regards.
@davidnewhall You may have failed to realize that the reason why we do things like we do is precisely that unlike Docker, and unlike the people who don't know or don't care about the project and only want to run `docker buildx build` and leave unconstructive comments when they don't immediately get their way, we actually want to make everything work right for users. Sometimes, that means not going the easy route, and sometimes, that makes it not as transparent for users as we would like. I recognize this is an unfortunate situation, but you're not paying us enough to give us attitude.
That said, I am not quite happy with the gcc naming scheme that we have, because it fails to take into account various small architecture differences, too; it cannot tell between `amd64` and `amd64/v2`, for instance.
@jprjr, if we want to support `TARGETARCH` (and `TARGETVARIANT`), we can probably make it work for everything but `arm`; it will just require changing the naming scheme of the tarballs, so we would need to coordinate so the GitHub actions are updated at the same time as the build system. I can build tarballs for most of the `TARGETARCH`+`TARGETVARIANT` values if I can get their exact specs, which is probably doable for most of them. @nekopsykose, if you have a link that documents everything (e.g. where you found the difference between `amd64` and `amd64/v2`), I'm interested.
`arm`, however, is hopeless. Garbage like `arm/v6` and `arm/v7` is meaningless; there are way too many incompatible flavours of `arm` and it is impossible to produce working binaries without more details. (gcc is marginally better about this because it encodes the most important bit, soft float vs. hard float, in the architecture name, but it's also missing a lot of information.) So we need to pick precise existing hardware and target it. What are the most common armv6 and armv7 platforms that use containers these days? Or, even better, is there an exact spec for the various `arm` variants of `TARGETPLATFORM`?
(Edit: typo.)
I was debating whether I should respond to this at all.
Just wanted to add on, quickly, to @skarnet. We all come from different backgrounds. We're not all using docker in the exact same manner.
For example, at my current job we only use 1 architecture - amd64 - so TARGETARCH has just never been on my radar.
From re-reading the discussion - it's not like we made a decision. I suggested we maybe have some duplicate artifact names, @skarnet suggested adding a table to the readme. Both options were still open, as are other options.
"Supporting" using TARGETARCH has some questions that need to be answered. Like - what is the full list? What toolchains are we missing? Windows is listed in there - do we need to support that to claim we support TARGETARCH?
And some behind-the-scenes - I spent a good chunk of time learning about buildx, what if supports, and also checking if there's any possibility to do things like, set a BUILD ARG based on environment variables (the answer is no). It's not like I'm just flippantly saying "no we can't do this". This project predates the entire concept of TARGETARCH so we want to be conservative in how we approach this.
So all that said the most constructive response is just "no, we don't want to use shell logic and want to use TARGETARCH as-is, having a mapping isn't sufficient for our use case" and we could have gone from there.
I'd like to see which issues and pull requests, specifically, lead to the conclusion that we're not trying to support docker.
I agree that we should target platforms that users are actually running rather than platforms that Docker says we should target, so I'm okay with the selection we have at the moment. And we definitely shouldn't change because of one outlier complaint.
However, the whole platform naming business is indeed a mess, and although gcc's conventions are better than Docker's, they're not perfect by any means, and I'm thinking of a way to do it right: simply have codenames for the configurations we support (as we already do internally with the conf/toolchains file), but not only map the gcc names to them, as we're doing - instead, also map the Docker names to them. Like, have 3 fields in conf/toolchains: the gcc name, the Docker name (when applicable), and our internal codename.
The change wouldn't even be too much work. The hardest part is clearly getting the exact specs for the targets we want, then building the toolchains - and I just updated my build script so now building them mostly takes machine time.
What's stopping me, honestly, is the way GitHub publishes tarballs. I don't want to duplicate the same tarball under 3 different names. I want symlinks: `s6-overlay-${TARGETARCH}_${TARGETVARIANT}.tar.xz` and `s6-overlay-${GCC_ARCH}.tar.xz` should both point to `s6-overlay-${INTERNAL_NAME}.tar.xz`, which would be the real file.
If there's no way to do this and we end up with a spaghetti plate of duplicated tarballs that use up space and would only confuse users, then I'm saying nope, not worth it.
(Edit: typo)
> @nekopsykose, if you have a link that documents everything (e.g. where you found the difference between `amd64` and `amd64/v2`), I'm interested.
they are "defined" here: https://docs.docker.com/engine/reference/builder/#automatic-platform-args-in-the-global-scope
i don't think they have any "meaning" however. the reason `amd64/v2` exists is because `TARGETVARIANT` exists and somebody merely defined it for v2 in their build. normally as you can see it's used for `arm/v6` and the like, which means (or doesn't mean haha) what you'd expect. so, somebody merely defined a v2 for their amd64 build, and that means nothing afaict except that "they did it".
now, for what that /generally/ means specifically (literally `amd64/v2`), that is the x86_64 micro-architecture levels, talked about in https://groups.google.com/g/x86-64-abi/c/b5tz7CC2z0w , which links to https://gitlab.com/x86-psABIs/x86-64-ABI/-/tree/master , which has a pdf, and so i quote (or rather, terrible copy paste):
Table 3.1: Micro-Architecture Levels

| Level Name | CPU Feature | Example instruction |
|---|---|---|
| (baseline) | CMOV | cmov |
| | CX8 | cmpxchg8b |
| | FPU | fld |
| | FXSR | fxsave |
| | MMX | emms |
| | OSFXSR | fxsave |
| | SCE | syscall |
| | SSE | cvtss2si |
| | SSE2 | cvtpi2pd |
| x86-64-v2 | CMPXCHG16B | cmpxchg16b |
| | LAHF-SAHF | lahf |
| | POPCNT | popcnt |
| | SSE3 | addsubpd |
| | SSE4_1 | blendpd |
| | SSE4_2 | pcmpestri |
| | SSSE3 | phaddd |
| x86-64-v3 | AVX | vzeroall |
| | AVX2 | vpermd |
| | BMI1 | andn |
| | BMI2 | bzhi |
| | F16C | vcvtph2ps |
| | FMA | vfmadd132pd |
| | LZCNT | lzcnt |
| | MOVBE | movbe |
| | OSXSAVE | xgetbv |

(gcc/clang support this: `-march=x86-64-v2`, ...)
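(a quick illustration of that flag, as a sketch - this assumes gcc >= 11 or a similarly recent clang:)

```sh
# Compile against the x86-64-v2 baseline; the binary may then use any
# instruction from the baseline and v2 rows of the table above.
gcc -O2 -march=x86-64-v2 -o prog prog.c
```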
and then i guess somebody used that in TARGETVARIANT in docker to express that the binaries were generated for a specific micro-arch. aside from that, i can't find any other info, it's just those "global variables" in the docker docs above.
Thanks. It's nice that there's at least some consensus on what `amd64/v2` means, as in: if we wanted to make a toolchain that created binaries that are guaranteed to work and be optimal for "amd64/v2", we could.
That would be the pinnacle of ricing, i.e., completely ridiculous for IO-bound software like s6, but theoretically possible.
For other arches, the question remains: when a user is providing a `TARGETVARIANT`, what exact instruction set are they expecting? Can we, in the general case, infer microarchitecture levels from `TARGETARCH` and `TARGETVARIANT`? And unless Docker decides to do the work on this and get a detailed spec out, I'm afraid that the answer is no, at the very least for `arm`.
(Edit: formatting)
> For other arches, the question remains: when a user is providing a TARGETVARIANT, what exact instruction set are they expecting? Can we, in the general case, infer microarchitecture levels from TARGETARCH and TARGETVARIANT?
i don't think it means anything as you stated the fiesta above. arm/v6 is as meaningless as "armv6" on any arbitrary distro.
but in the end, the distros do *do* it- they don't say "we won't support anything in armv6 fiesta land"- they pick some default (completely arbitrary -march, i.e. alpine is armv6zk + hardfloat, which afaik is rpi0?). so, if you were making the changes anyway... you could also pick something arbitrary. then if someone says "this doesn't work", well, that's unfortunate, but.. what else could you have done? you have to ship something or literally nothing.
and you have picked this default already- the example table you posted could indeed be the v6/v7 targets. you are building the binaries anyway (the arm/armhf named tarballs), so might as well. unless you plan to add even more archs (v6-123, v6-abc, v6-whatever), you already have only one.
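(for reference, the kind of gcc flags such an arbitrary armv6 hard-float baseline translates to - a sketch, not this project's actual toolchain configuration:)

```sh
# Roughly the Alpine armhf / Raspberry Pi Zero baseline:
# armv6zk core with VFP hardware floating point.
gcc -O2 -march=armv6zk -mfpu=vfp -mfloat-abi=hard -o prog prog.c
```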
I see that my earlier reaction went to Laurent alone. So repost:
The situation on ARM is not that bad imo. You have arm/v5 (armel) with no floating point hardware, arm/v6 with limited floating point hardware (armhf), arm/v7 with complete floating point hardware, and arm/v8 as the 64-bit version. Afaik the Raspberry Pi Foundation is the only one who still has some arm/v6 around, but targeting them as arm/v5 will not change much performance-wise (both are dog slow). That is also the choice the official Debian distro gives you: either armv5 (armel) or armv7 (arm). Armv6 (armhf) has been considered a hack and not supported afaik, which is the main reason Raspbian, now Raspberry Pi OS, exists.
> The situation on ARM is not that bad imo. You have (insert long list of distro-specific stuff literally every distro names differently that is an entire page long, and it still only describes what debian/raspios is doing)

yes, that is exactly the point.. when someone says "armhf", you have to ask "but which armhf?".
Not really different from i386, i486, i586, i686 and amd64: you have just 4 hardware reference processors, ARM v5 to v8. How to call them is one thing, but based on the gcc naming you can at least choose a feasible toolchain. And leave the naming scheme to docker or the repo.
what?
I understand what they're saying, but this is not going to be a priority any time soon, especially if GitHub doesn't have symlinks. Closing this because I don't want to be endlessly pinged in a probably endless discussion; if you want to keep exchanging, please do it somewhere else.
Another workaround is to use a Linux distro package. For example, the Alpine Linux s6-overlay package worked very well for me:
```dockerfile
FROM alpine:3.18
RUN apk add --no-cache s6-overlay darkhttpd
COPY s6-overlay-root /
# ... put static web content in /app/dist
# exec-form CMD avoids an extra /bin/sh -c wrapper under /init
CMD ["darkhttpd", "/app/dist"]
ENTRYPOINT ["/init"]
```
(This example runs darkhttpd as a simple static web server for `/app/dist` content. Adjust the Alpine image version according to the Alpine release.)
`s6-overlay-root/` contains the `/etc/` configuration tree, as described by the s6 / s6-overlay documentation. For example:
```
s6-overlay-root/
└── etc
    └── s6-overlay
        └── s6-rc.d
            ├── service2
            │   ├── dependencies.d
            │   │   └── service1
            │   ├── run
            │   ├── type
            │   └── up
            ├── service1
            │   ├── run
            │   └── type
            └── user
                └── contents.d
                    └── service2
```
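Since apk resolves the right s6-overlay package for each platform natively, a multi-arch build of the Dockerfile above needs no architecture mapping at all - a sketch, with a placeholder image name:

```sh
# apk installs the correct s6-overlay package per platform, so no
# TARGETARCH handling is required in the Dockerfile.
docker buildx build --platform linux/amd64,linux/arm64 -t example/static-web --push .
```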
Is `TARGETARCH` supposed to work? Mine resolves to amd64, and that file does not exist. I like building multi-architecture Dockerfiles, but if the file names provided do not match the architecture names Docker uses, it makes writing a multi-arch file difficult. This is a request to duplicate and upload the files with Docker-matching architecture names to the releases page. Or tell me how I've screwed this up. Totally willing to accept that I'm doing this wrong.
My Dockerfile then downloads the tarball using `${TARGETARCH}`, which produces an error, because this url does not exist: https://github.com/just-containers/s6-overlay/releases/download/v3.1.3.0/s6-overlay-amd64.tar.xz and this one does: https://github.com/just-containers/s6-overlay/releases/download/v3.1.3.0/s6-overlay-x86_64.tar.xz
The `arm64` architecture has the same issue, since the file is named `aarch64`. Thoughts? Thanks!