Closed ArvinB closed 11 months ago
OK, so you're still using download_vsix.sh?
That's currently the part that I'm focusing on since we have to rebuild the vsix plugins internally for Red Hat (just for context in case I do not comment on this again immediately) but I will definitely take another closer look when I get to that part of the build.
Thank you for sharing!
@SDawley Yep, using download_vsix.sh unless you replace it with something else in 3.6. A couple of notes on this: most vsix files are architecture independent, while some language-specific ones are not... thus this script is needed again for multi-arch builds.
While on the subject of this script, which seems to work most of the time...
This curl in particular will sometimes return a 502 Bad Gateway, which then causes the next jq command to fail, ending the whole build process. This is especially true during the daytime, when traffic is at its highest; I'm just guessing open-vsx.org can't always handle the load.
vsixMetadata=$(curl -sLS "https://open-vsx.org/api/${vsixName}/latest")
Tiny suggestion (of course you can use this however you like): here is an augmentation I made that corrects for this, I would say, over 95% of the time:
response_code=$(curl -sLS -o metadata.txt -w "%{http_code}" "https://open-vsx.org/api/${vsixName}/latest")
vsixMetadata=$(cat metadata.txt) && rm -f metadata.txt
if [[ $response_code != "200" ]]; then
    sleep 3
    vsixMetadata=$(curl -sLS "https://open-vsx.org/api/${vsixName}/latest")
fi
So here I'm just putting the response into a file and instead capturing the status code of the curl request. If it's not a 200, I wait 3 seconds and try again; if the issue continues, I assume there is something else going on beyond our control, and the jq tool will then fail as before. The -S flag is supposed to print any errors, but that only applies if curl itself fails.
Slightly better iteration:
for i in {1..5}; do
    response_code=$(curl -sLS -o metadata.txt -w "%{http_code}" "https://open-vsx.org/api/${vsixName}/latest")
    vsixMetadata=$(cat metadata.txt) && rm -f metadata.txt
    if [[ $response_code == "200" ]]; then
        break
    fi
    sleep 3
    echo "Retrying...https://open-vsx.org/api/${vsixName}/latest"
done
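For what it's worth, curl can also do the retrying itself. This is a minimal sketch, assuming curl >= 7.71.0 (the release where --retry-all-errors landed); --fail makes curl exit non-zero on HTTP errors such as 502 instead of succeeding with the error body:

```shell
# Let curl retry transient failures (including 5xx responses) on its own,
# instead of hand-rolling the retry loop.
fetch_vsix_metadata() {
    local url="$1"
    # --fail: non-zero exit on HTTP >= 400, so a 502 body never reaches jq
    # --retry-all-errors: also retry on HTTP errors, not just transport errors
    curl -sLS --fail --retry 5 --retry-delay 3 --retry-all-errors "$url"
}

# Hypothetical usage:
# vsixMetadata=$(fetch_vsix_metadata "https://open-vsx.org/api/${vsixName}/latest") || exit 1
```

The trade-off is that you lose the per-attempt "Retrying..." log line unless you add -v or wrap the call.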
@ArvinB I don't know if this will be of any help at all, but I recently built a self-hosted instance of the open-vsx registry that has support for pulling the VSIX files onto a bastion host before uploading them to the OpenShift hosted VSX registry.
I rewrote the VSIX download/upload script to use a YAML file instead of JSON, and added logic to it to also resolve extension dependencies.
One major advantage of this method is that you don't have to rebuild the whole thing just to add new or updated extensions.
It is still WIP. But it does work. I'm also going to look at turning it into an operator for self-hosted open-vsx in disconnected environments.
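To illustrate the idea, a YAML-driven download loop could be sketched roughly like this. The manifest layout, function names, and the commented-out fetch are my own assumptions, not the actual reworked script, and the sed parsing only handles a flat list (a real YAML parser like yq is the better tool for anything nested):

```shell
# Assumed manifest layout (hypothetical):
#
# extensions:
#   - redhat.vscode-yaml
#   - golang.go

# Extract the "- item" entries from a flat YAML list (simplistic on purpose).
list_extensions() {
    sed -n 's/^[[:space:]]*-[[:space:]]*//p' "$1"
}

download_all() {
    while IFS= read -r ext; do
        [ -z "$ext" ] && continue
        # redhat.vscode-yaml -> redhat/vscode-yaml for the open-vsx API path
        local api="https://open-vsx.org/api/${ext%%.*}/${ext#*.}/latest"
        echo "would fetch $api"
        # Real fetch (untested sketch; assumes the metadata exposes .files.download):
        # curl -sLSO "$(curl -sLS "$api" | jq -r '.files.download')"
    done < <(list_extensions "$1")
}
```

A manifest like this is also where dependency resolution could hook in: resolve each extension's dependencies first, append them to the list, then download the whole set.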
Thanks @cgruver the script you reworked looks solid! If your work is incorporated into DS 3.6 then I for sure will adopt it.
I think I have everything building now for x86_64 and s390x. However, on ppc64le the gradle build of the ovsx-server seems to fail:
./gradlew --no-daemon --console=plain assemble
I noticed you are pulling the binaries from only the x86_64 ovsx-server image. Got any clues about this one?
The only reason that I'm pulling just the x86_64 VSIX bundle is that the cluster I built this for is exclusively Intel architecture. There's no reason you couldn't pull other architectures where an extension supports them. Most extensions are architecture agnostic.
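One wrinkle for per-architecture pulls: as far as I know, the target-platform list that vsce and open-vsx use (linux-x64, linux-arm64, alpine-x64, darwin-arm64, ...) has no s390x or ppc64le entry, so those builds can only consume architecture-independent ("universal") extensions anyway. A rough sketch of the mapping (the per-target API route in the comment is my assumption; verify it against the open-vsx REST docs):

```shell
# Map a uname -m style arch to a VS Code extension target platform.
# s390x/ppc64le fall through to "universal" because no official
# target platform exists for them.
arch_to_target() {
    case "$1" in
        x86_64)  echo "linux-x64" ;;
        aarch64) echo "linux-arm64" ;;
        *)       echo "universal" ;;
    esac
}

# Hypothetical per-arch metadata fetch (unverified route):
# curl -sLS "https://open-vsx.org/api/${vsixName}/$(arch_to_target "$(uname -m)")/latest"
```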
Issues go stale after 180 days of inactivity. lifecycle/stale issues rot after an additional 7 days of inactivity and eventually close.
Mark the issue as fresh with /remove-lifecycle stale in a new comment.
If this issue is safe to close now, please do so.
Moderators: Add the lifecycle/frozen label to avoid stale mode.
Summary
Hi @SDawley
I was speaking to @nickboldt and he recommended I communicate my suggestions over to you.
RE: https://issues.redhat.com/browse/CRW-4049
I work heavily downstream adopting RH Dev Spaces to support mainframe developers.
Based upon Dev Spaces 3.5, I see that you are working on making it a bit easier to build the plugins in 3.6. I am currently working on a single Dockerfile to do much of what the build.sh does. The issue we have with the build script is that it works great for a single-architecture image... but for multi-architecture images it's not as friendly as having a Dockerfile and utilizing docker buildx build...
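A minimal sketch of the buildx invocation a single-Dockerfile approach enables; the image name and platform list are placeholders, not the actual Dev Spaces build targets, and the echo makes it a dry run:

```shell
# Assemble and print the multi-arch build command (drop the echo to run it).
build_multiarch() {
    local image="$1" platforms="$2"
    # --push is needed because the classic local image store cannot hold a
    # multi-platform manifest list.
    echo docker buildx build --platform "$platforms" -t "$image" --push .
}

# Hypothetical usage:
# build_multiarch "quay.io/example/ds-plugins:3.5" "linux/amd64,linux/s390x,linux/ppc64le"
```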
Relevant information
@SDawley This is a work in progress based upon Dev Spaces 3.5, but it works thus far, so I'm sharing it here.
One very important note is that I'm using Semeru Open JDK vs. the normal Open JDK, because it has a JIT. This makes compiling the binaries, especially for multiple architectures, a lot faster.