han-steve opened 1 week ago
@kevin85421 @rueian @MortalHappiness I think this regression was introduced in https://github.com/ray-project/kuberay/pull/2249?
The RayCluster status changes in the v1.2 release notes are guarded by a new feature gate, RayClusterStatusCondition. To my understanding, the existing .status.state and .status.reason APIs should be unchanged unless you enable the new feature gate. This looks like a possible bug introduced when we refactored how reconcile errors are surfaced in https://github.com/ray-project/kuberay/pull/2249
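(For anyone else landing here: KubeRay feature gates are Kubernetes-style on/off flags passed to the operator binary. A sketch of enabling the gate, assuming the conventional --feature-gates flag on the kuberay-operator Deployment; field paths are abbreviated and the image tag is illustrative:)

```yaml
# Illustrative excerpt of the kuberay-operator Deployment spec;
# the --feature-gates flag follows the usual Kubernetes convention.
spec:
  template:
    spec:
      containers:
      - name: kuberay-operator
        image: quay.io/kuberay/operator:v1.2.0   # tag illustrative
        args:
        - --feature-gates=RayClusterStatusCondition=true
```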
Hi, thanks for taking a look. I wasn't aware that there's a feature gate. Can you tell me more? And for more context, the state machine used to fail with the reason field set when there's a resource quota issue. The new state machine doesn't seem to surface the error at all.
@han-steve Could you provide more detailed steps to reproduce this issue? I tried to reproduce the error, but it's difficult with the partial Go code you provided. For instance, I'm unsure what the value of tCtx is, and it's challenging to reconstruct sampleJob just by looking at the JSON response.
The most effective way for us to reproduce the issue would be a single YAML file that we can easily apply with kubectl, something like the following:
apiVersion: v1
kind: ResourceQuota
(other fields...)
---
apiVersion: ray.io/v1
kind: RayJob
(other fields...)
---
apiVersion: v1
kind: ConfigMap
(other fields...)
Additionally, I searched the codebase and found that the "ray.io/compute-template" annotation only appears in the apiserver module, which hasn't been maintained for a long time, so I'm unsure whether this issue pertains solely to the apiserver module.
cc @andrewsykim @kevin85421 @rueian
Hi, apologies for the confusion. The reproduction code follows the style of the apiserver integration test suite. Here's a YAML reproduction:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: low-resource-quota
spec:
  hard:
    limits.cpu: 100m
    limits.memory: 107374182400m
---
apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: rayjob-test2-raycluster-jd6g6
  namespace: test-namespace
spec:
  headGroupSpec:
    rayStartParams:
      dashboard-host: 0.0.0.0
    serviceType: NodePort
    template:
      metadata:
        annotations:
          ray.io/compute-image: rayproject/ray:2.9.0
          ray.io/compute-template: test-anemone
        labels:
          sidecar.istio.io/inject: "false"
      spec:
        containers:
        - env:
          - name: MY_POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          image: rayproject/ray:2.9.0
          imagePullPolicy: IfNotPresent
          name: ray-head
          ports:
          - containerPort: 6379
            name: redis
            protocol: TCP
          - containerPort: 10001
            name: head
            protocol: TCP
          - containerPort: 8265
            name: dashboard
            protocol: TCP
          - containerPort: 8080
            name: metrics
            protocol: TCP
          resources:
            limits:
              cpu: "0"
              memory: "0"
            requests:
              cpu: "0"
              memory: "0"
  rayVersion: 2.32.0
  workerGroupSpecs:
  - groupName: small-wg
    maxReplicas: 1
    minReplicas: 1
    numOfHosts: 1
    rayStartParams:
      metrics-export-port: "8080"
    replicas: 1
    scaleStrategy: {}
    template:
      metadata:
        annotations:
          ray.io/compute-image: rayproject/ray:2.9.0
          ray.io/compute-template: test-anemone
        labels:
          sidecar.istio.io/inject: "false"
      spec:
        containers:
        - env:
          - name: RAY_DISABLE_DOCKER_CPU_WARNING
            value: "1"
          - name: TYPE
            value: worker
          - name: CPU_REQUEST
            valueFrom:
              resourceFieldRef:
                containerName: ray-worker
                divisor: "0"
                resource: requests.cpu
          - name: CPU_LIMITS
            valueFrom:
              resourceFieldRef:
                containerName: ray-worker
                divisor: "0"
                resource: limits.cpu
          - name: MEMORY_REQUESTS
            valueFrom:
              resourceFieldRef:
                containerName: ray-worker
                divisor: "0"
                resource: requests.cpu
          - name: MEMORY_LIMITS
            valueFrom:
              resourceFieldRef:
                containerName: ray-worker
                divisor: "0"
                resource: limits.cpu
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: MY_POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          image: rayproject/ray:2.9.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                - /bin/sh
                - -c
                - ray stop
          name: ray-worker
          ports:
          - containerPort: 80
            protocol: TCP
          resources:
            limits:
              cpu: "0"
              memory: "0"
            requests:
              cpu: "0"
              memory: "0"
It seems to have been introduced by #2258 (https://github.com/ray-project/kuberay/pull/2258/files#diff-72ecc3ca405f1e828187748d4f1ec8160bccffa2a4f84a364cd7a94a78e1adb9L1152-L1157).
@kevin85421 Do you mean #2249 or #2258? You said 2249 but the link you provided is 2258.
@MortalHappiness sorry, I am referring to #2258.
Search before asking
KubeRay Component
ray-operator
What happened + What you expected to happen
In v1.1, when a RayCluster cannot spin up worker nodes due to a resource quota issue, it would have the following status
However, in v1.2, it simply says
First, the state should not be ready: according to the doc, a RayCluster is ready only when all of its Pods are running, but not all pods in this RayCluster are running.
Second, the resource quota error should be added as a condition. The design doc alludes to emulating ReplicaSet conditions, which include a type for resource quota errors. Right now, the only place to find this error is in the operator logs:
This makes it impossible for users to self-serve and debug this error.
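For comparison, ReplicaSets surface this exact failure mode as a ReplicaFailure condition with reason FailedCreate; a RayCluster condition modeled on that could look roughly like this (field values are illustrative, not actual operator output):

```yaml
# Illustrative RayCluster status, modeled on the ReplicaSet
# ReplicaFailure/FailedCreate condition; not real operator output.
status:
  state: failed
  conditions:
  - type: ReplicaFailure
    status: "True"
    reason: FailedCreate
    message: 'pods "rayjob-test2-raycluster-jd6g6-worker-..." is forbidden: exceeded quota: low-resource-quota'
```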
As mentioned in https://github.com/ray-project/kuberay/issues/2182, our current way of surfacing this error to the user when deploying a Ray job is to issue a separate query against the RayCluster for the error:
However, the v1.2 update breaks this logic, so we cannot upgrade to v1.2 yet.
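The client-side check described above could be sketched roughly like this (the helper name and status shapes are illustrative, not the author's actual code; it accepts either the v1.1-style .status.reason field or a ReplicaSet-style condition):

```python
# Sketch of a client-side check: given a RayCluster .status (as a dict),
# find the quota-related failure, whether it is surfaced v1.1-style via
# .status.reason or via a ReplicaSet-style condition. Illustrative only.

def quota_failure_message(status):
    """Return the quota-related error message from a RayCluster status, if any."""
    # v1.1-style: reconcile errors were copied into .status.reason.
    if status.get("state") == "failed" and status.get("reason"):
        return status["reason"]
    # Condition-style: look for a ReplicaFailure condition, mirroring how
    # ReplicaSets report FailedCreate quota errors.
    for cond in status.get("conditions", []):
        if cond.get("type") == "ReplicaFailure" and cond.get("status") == "True":
            return cond.get("message")
    return None

# v1.1-style status (illustrative values).
old_status = {"state": "failed", "reason": "exceeded quota: low-resource-quota"}
# Condition-based status, as requested in this issue (illustrative values).
new_status = {
    "state": "failed",
    "conditions": [
        {
            "type": "ReplicaFailure",
            "status": "True",
            "reason": "FailedCreate",
            "message": "exceeded quota: low-resource-quota",
        }
    ],
}

assert quota_failure_message(old_status) == "exceeded quota: low-resource-quota"
assert quota_failure_message(new_status) == "exceeded quota: low-resource-quota"
```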
Reproduction script
Create a resource quota:
Deploy a Ray job:
Where the compute template is defined as
Anything else
No response
Are you willing to submit a PR?