jboss-container-images / openjdk

Source To Image (S2I) image for Red Hat OpenShift providing OpenJDK
Apache License 2.0

[OPENJDK-2735] Move grep and gawk installation to phase-1 #472

Closed jhuttana closed 3 months ago

jhuttana commented 3 months ago

As suggested, I have made an attempt to move the grep and gawk installation to phase-1. But with or without this change my builds are failing with

unable to retrieve container logs for cri-o:.........

When I tried to check on the web console as administrator, I see that it is an ephemeral-storage issue.

Generated from kubelet on crc-vlf7c-master-0
The node was low on resource: ephemeral-storage. Threshold quantity: 4902142351, available: 4730208Ki. Container docker-build was using 4Ki, request is 0, has larger consumption of ephemeral-storage

I deleted all builds/pods/imagestreams but there was no effect :) Yesterday I was trying to create a deployment that kept failing and creating successive pods after every failure. I guessed that might have consumed all the memory and deleted all the pods in that project. Now on local CRC I don't have any builds, yet builds are still failing with that cri-o error.
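For reference, the cleanup described above can be scripted; a rough sketch, where the project name `myproject` is a placeholder:

```shell
# Hypothetical project name; substitute your own.
PROJECT=myproject

# Remove build artifacts that can pile up on the node's ephemeral storage.
oc delete builds --all -n "$PROJECT"
oc delete pods --all -n "$PROJECT"
oc delete imagestreams --all -n "$PROJECT"

# Completed pods in other namespaces can also hold space on the node.
oc delete pods --all-namespaces --field-selector=status.phase=Succeeded
```

As noted above, though, this alone did not free enough space to clear the eviction condition.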

jmtd commented 3 months ago

When I tried to check on the web console as administrator, I see that it is an ephemeral-storage issue.

I hit this regularly with CRC. I would like to find a way to increase the amount of storage it has.

I deleted all builds/pods/imagestreams but there is no effect :)

This hasn't worked for me either. When I hit this issue I have to blow away the whole cluster and start afresh (crc delete, crc start).

It looks like passing -d (Total size in GiB of the disk used by the instance (default 31)) to crc start will create a bigger disk.
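A minimal sketch of that, assuming 60 GiB is enough (the size is an arbitrary choice); note the disk size only takes effect on a freshly created instance:

```shell
# Discard the current instance (and everything in it), then recreate
# it with a larger disk. 60 GiB here is an arbitrary example value.
crc delete --force
crc start -d 60   # -d / --disk-size: total disk size in GiB (default 31)
```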

jhuttana commented 3 months ago

When I tried to check on the web console as administrator, I see that it is an ephemeral-storage issue.

I hit this regularly with CRC. I would like to find a way to increase the amount of storage it has.

I deleted all builds/pods/imagestreams but there is no effect :)

This hasn't worked for me either. When I hit this issue I have to blow away the whole cluster and start afresh (crc delete, crc start).

In my case even a stop and start didn't work :D I just reconfigured crc and then it worked.
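The reconfiguration can also be made persistent via crc config, so every future crc start picks it up; a sketch assuming the disk-size config property (value in GiB):

```shell
# Persist a larger disk size; it applies when the instance is (re)created.
crc config set disk-size 60
crc delete --force
crc start
```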

It looks like passing -d (Total size in GiB of the disk used by the instance (default 31)) to crc start will create a bigger disk.

That will help.