SarahFrench closed this issue 15 hours ago
This test is also affected by quota limits, possibly exacerbated by increasing the number of parallel tests.
This seems to be flipping between quota issues and the other issue. I'm going to keep this as a service/dataproc issue; if it turns out we need to increase quota, we can do that later.
Possibly the node groups here should also use a tf-test prefix.
Currently a 50% failure rate: 62 of 124 runs.
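To illustrate the tf-test prefix suggestion above, a minimal sketch of what a prefixed node group config could look like (resource names and values here are hypothetical, not taken from the actual test; the prefix is what lets the test sweepers identify and clean up leaked resources):

```hcl
# Hypothetical example: a sole-tenant node group named with the
# "tf-test" prefix plus a random suffix to avoid collisions.
resource "random_id" "suffix" {
  byte_length = 4
}

resource "google_compute_node_template" "tmpl" {
  name      = "tf-test-nodetmpl-${random_id.suffix.hex}"
  region    = "us-central1"
  node_type = "n1-node-96-624"
}

resource "google_compute_node_group" "ng" {
  name          = "tf-test-nodegroup-${random_id.suffix.hex}"
  zone          = "us-central1-a"
  node_template = google_compute_node_template.tmpl.id
  initial_size  = 1
}
```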
@melinath I ran the labeler today and the service/dataproc label was re-added because of the Affected Resource(s) block. It looks like the service/compute-sole-tenancy label was added at one point because the formatting was off such that the rest of the description was included as Affected Resource(s), and google_compute_node_group was found.
I think we want to keep service/dataproc and remove service/compute-sole-tenancy, but wanted to double check if there was a reason for removing service/dataproc previously.
I must have clicked on the wrong label? +1 that this is clearly dataproc, not compute-sole-tenancy
To amend this ticket: the most common test error message is the following; the quota issue is relatively rare:
Error: Error waiting for creating Dataproc cluster: Error code 9, message: Instance could not be scheduled due to no matching node with property compatibility.
Explanation: The matching node group(s) <test-nodegroup-randomsuffix> do not match the instance's machine family type.
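That error fires when the Dataproc VM's machine type belongs to a different machine family than the sole-tenant node template's node_type (e.g. an n2 machine type scheduled onto n1 nodes). A minimal sketch of a consistent pairing, with hypothetical names and values (the node_group_uri wiring via gce_cluster_config is the provider's sole-tenancy hook; everything else is illustrative):

```hcl
# Node template uses an n1 node type, so cluster VMs must also be n1.
resource "google_compute_node_template" "tmpl" {
  name      = "tf-test-nodetmpl"
  region    = "us-central1"
  node_type = "n1-node-96-624"
}

resource "google_compute_node_group" "ng" {
  name          = "tf-test-nodegroup"
  zone          = "us-central1-a"
  node_template = google_compute_node_template.tmpl.id
  initial_size  = 1
}

resource "google_dataproc_cluster" "cluster" {
  name   = "tf-test-cluster"
  region = "us-central1"

  cluster_config {
    gce_cluster_config {
      zone = "us-central1-a"
      node_group_affinity {
        node_group_uri = google_compute_node_group.ng.name
      }
    }
    master_config {
      # n1 family, matching the node template above; an n2-standard-4
      # here would reproduce the "machine family type" error.
      machine_type = "n1-standard-4"
    }
  }
}
```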
b/299683841