izderadicka opened this issue 2 months ago
The defaults have always been set to just get the cluster operational.
Adding additional pods failed (nginx or quarkus example) - Evicted because of lack of memory on the node
In that case, you have to add memory. With newer versions this might be more necessary (previously you could get away with just the defaults), but it is always necessary to adjust the memory and CPU resources according to the workload you are running. This has always been the case...
nginx or quarkus example
Can you indicate which Quarkus example you used? I don't see why nginx or httpd would cause these issues, as they have a very minimal footprint.
For nginx I used the example using a Template; for Quarkus I used Basic Quarkus - DevFiles.
I started with Basic Quarkus - it did not deploy (I think it was due to memory, or maybe an image problem). I deleted it and tried nginx - again a problem with memory. I increased memory by 2G directly on the VM.
Later, after restarts, the whole cluster went bad, with many pods not starting due to disk pressure on the node.
I started a completely new instance with 13G memory and a 50G disk - it looked better, but I did not play with it much.
This issue is just an FYI - my impression is that the default resources are at their limit (maybe it is only a local problem). I wanted to save other new users some trouble. Feel free to close at your discretion - setting these params is easy, so I know what to do.
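For other users who hit the same limits, the resources can be raised through the crc configuration instead of editing the VM directly. A minimal sketch, assuming a recent crc release where `crc config set memory` takes MiB and `crc config set disk-size` takes GiB (check `crc config --help` on your version):

```shell
# Raise the VM memory to ~13G (value in MiB) and the disk to 50G
crc config set memory 13312
crc config set disk-size 50

# The new values take effect on the next start; disk growth may
# require recreating the instance on older releases
crc stop
crc start
```

The values above mirror the 13G/50G instance mentioned in this report; they are an example, not a recommendation from the crc team.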
I wanted to save other new users some trouble.
We are trying to improve this situation, but it will mostly be around "messaging": we will have to report to the user that there is pressure on the cluster and resources need to be increased. I agree with your point and would like to determine what can be done to make this more obvious. We might increase the defaults a little, though this will only delay the actual problem.
It is mostly the memory; on our machines with 32G we mostly assign 16-18G of memory. On Windows/macOS people are more likely to also use the machine for other activities, like browsing and leaving tabs open. This introduces memory pressure on lower-specced machines.
General information
Did you run `crc setup` before starting it (Yes/No)? Yes
CRC version:
Host Operating System:
Steps to reproduce
After starting OpenShift with the default configuration of memory (10Gi) and disk (31G) I had problems working with the cluster: 1) Adding additional pods failed (nginx or quarkus example) - Evicted because of lack of memory on the node 2) After a couple of restarts of OpenShift there were about 800 pods Evicted due to disk pressure - the cluster was accessible, but a lot of services did not start, like the image registry ....
Expected
The default configuration should make it possible to play around with OpenShift - at least provide space for some new pods - and should be stable, not break down due to lack of resources after a simple experiment (nginx and quarkus examples) and a couple of restarts.
Actual
I think the default resource config is just on the edge of running the cluster itself; at least that is my impression. I did not do anything fancy, just tried a couple of examples, and this caused problems.
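To confirm whether the evictions are caused by memory or disk pressure, the node conditions and failed pods can be inspected with standard `oc` commands. A sketch, assuming you are logged in to the crc cluster (the `demo` namespace in the cleanup command is hypothetical):

```shell
# Evicted pods end up in the Failed phase
oc get pods --all-namespaces --field-selector=status.phase=Failed

# Node conditions report MemoryPressure / DiskPressure directly
oc describe node | grep -A 8 "Conditions:"

# Remove the accumulated evicted pods in one namespace (example namespace "demo")
oc delete pods --field-selector=status.phase=Failed -n demo
```

If `DiskPressure` is `True` after a few restarts, that matches the ~800 evicted pods described above.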