Closed tssurya closed 3 months ago
Name | Link |
---|---|
Latest commit | a10aed5ded41aa02e371ca79c68683628b10df20 |
Latest deploy log | https://app.netlify.com/sites/kubernetes-sigs-network-policy-api/deploys/660da0ed70486d0008cbe360 |
Deploy Preview | https://deploy-preview-215--kubernetes-sigs-network-policy-api.netlify.app |
3. Unsure thing: I changed all the other array types, like "rules", "peers", and "ports", to `+listType=atomic`, which I probably should not have, and maybe we just leave them all alone? I'm not sure... where are the docs for all this stuff?
> Unsure thing: I changed all the other array types, like "rules", "peers", and "ports", to `+listType=atomic`, which I probably should not have, and maybe we just leave them all alone? I'm not sure... where are the docs for all this stuff?
https://book.kubebuilder.io/reference/markers/crd-processing is where it describes the 3 types. `+listType` (string) specifies the type of data structure that the list represents (map, set, atomic). Possible data-structure types of a list are:

- "map": it needs to have a key field, which will be used to build an associative list. A typical example is the pod container list, which is indexed by the container name.
- "set": fields need to be "scalar", and there can be only one occurrence of each.
- "atomic": all the fields in the list are treated as a single value and are typically manipulated together by the same actor.
I saw that the newer CRDs in ocp/api are doing this more strictly and was wondering if we should do the same here. I tried checking https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#patch-operations but they don't mention much there. I also saw https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md#example-2, which specifies `patchStrategy:merge` for the array, which is something we might want to consider for our 100-rules array? (The QE engineer testing the feature d/s mentioned it's annoying to always have to pass the entire list of ingress/egress rules without being able to do a strategic merge.)
The CI is failing because the standard CRDs don't have `nodes`, `networks`, or `namedPorts`. So that CEL validation has to be removed for the standard CRDs, UGH.
Hrmmm @tssurya, we should be able to fix that by tweaking https://github.com/kubernetes-sigs/network-policy-api/blob/main/pkg/generator/main.go#L134 so that if the `// Experimental` tag is above the other tags, it removes them? (Maybe.) I'd have to experiment.
@astoycos: I had to add a 3rd commit to fix some stuff and have validated this; it works now, BUT with my original PR, because we placed the validation on a single rule, we would get:
Error from server (Invalid): error when creating "anp.yaml": AdminNetworkPolicy.policy.networking.k8s.io "node-as-egress-peers" is invalid: spec.egress[0]: Invalid value: "object": networks/nodes peer cannot be set with namedPorts since there are no namedPorts for networks/nodes
while now we get:
Error from server (Invalid): error when creating "anp.yaml": AdminNetworkPolicy.policy.networking.k8s.io "node-as-egress-peers" is invalid: spec.egress: Invalid value: "array": networks/nodes peer cannot be set with namedPorts since there are no namedPorts for networks/nodes
See how the first one tells you exactly which rule is wrong (using indexes)? If people have 30 rules, I think it's really helpful to do it that way rather than putting one validation across all 100 rules together. I get that we want smooth channel-removal logic, but that's too high a price to pay when the per-rule placement gives end users a better and more useful error.
I feel we can enhance that hard-coding in the future, but I'd rather we don't compromise on error readability for it, wdyt?
Also: https://github.com/kubernetes-sigs/network-policy-api/pull/215/commits/2ba4a83ef85f54934db244a9658f79a36c52568e#diff-f1e3b23db0005734f27dc973d65454b986ebf65de76a1879c3bf55f8ef0b0b52R53 looks wrong? I am not sure why that is a `+` when usually we have a `-`.
/lgtm /approve
Thanks for doing this @tssurya
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: astoycos, tssurya
The full list of commands accepted by this bot can be found here.
The pull request process is described here
tysm @astoycos : team work!
This PR does three things, of which I am unsure about the last one; cc @danwinship, could use your help or that of other API experts here.
The AdminNetworkPolicy "node-as-egress-peers" is invalid: spec.egress[0]: Invalid value: "object": networks peer cannot be set with namedPorts since there are no namedPorts for networks
As for whether we also need the same restriction for the nodes peer: the node object itself doesn't actually have namedPorts. The host-networked pods on that node will have namedPorts, but that seems like a case we don't need to bother about, so perhaps namedPorts should be applicable only to namespaces and pods peers?
I changed the `networks` peer, which is scalar, to use `+listType=set`, which based on the docs seems like the right thing to do. I changed the other arrays to `+listType=atomic`, which I probably should not have, and maybe we just leave them all alone? The confusion stems from: can we insert a new egress/ingress rule in the middle without having to pull down all 51 existing rules, make the change, and then update the whole batch back? I was trying to see what the right syntax is to achieve a "merge" for a list of rules...