Closed: duglin closed this issue 5 years ago
If there were any way for an invalid user's request to be batched into a valid user's request, this would be a security exposure. This batching behavior would depend on when authentication/authorization is performed.
I could also see a case where updating a service instance's plan could be dependent on a service instance's parameters. If these two operations were batched, I could see both users surprised by the outcome in either ordering of the updates.
From my point of view, batching is unnecessary. Just flow the two requests sequentially to the broker. Then you get correct auditing and correct behavior.
From my point of view, batching is unnecessary. Just flow the two requests sequentially to the broker. Then you get correct auditing and correct behavior.
The root issue is that in k8s you set the spec to describe how you would like the environment to eventually be. The full flow we are talking about is as follows:
So we are not talking about a platform optimization; we are talking about User B being credited with a change that includes previous users' changes to the same spec. This has some consequences that are a result of OSBAPI choices and how k8s works.
If we are modeling a government-style audit log through the OSBAPI, I would have to agree that there should be no batching allowed*.
If the assumption is that OSBAPI is being used through k8s, then I don't agree that you can call the update from User B a batch of User A+B, precisely because User B had the opportunity to review User A's changes, made a choice to accept bar, and added baz.
* If the issue is that there is a second platform on top of k8s and the topmost platform is pushing parameter changes into the k8s spec with the assumption that k8s will serialize the requests per user per change, then perhaps this is a misuse of k8s, and a more correct solution could be to make parameters a list of changes associated with a user identity.
Alternatively, if this batching is truly a violation of the OSBAPI spec and there can be no agreement that User B is allowed to agree to and absorb User A's changes, the k8s implementation will have to change dramatically to allow the UX users want.
If there were any way for an invalid user's request to be batched into a valid user's request, then this would be a security exposure
If we are modeling a government-style audit log through the OSBAPI, I would have to agree that there should be no batching allowed
I think these two comments highlight the concerns. When you have critical business processes in the broker checking/auditing things based on the identity of the originating user then we run the risk of incorrect decisions being made.
Merging update requests can lead to unintentional side-effects. A wrong audit log is just one of them. But there can also be functional side-effects in the broker. It’s certainly not in the intention of the spec.
Can we be more specific on the types of operations that can be merged? In general, I'm very concerned if we conflate the identity of who performed actions, because then we lose the audit log (imagine AWS CloudTrail functionality). For regulated workloads, it is important to be able to clearly log who performed a change, what change they performed, and when. From the sound of this issue, that clear tracking could be muddied.
But there can also be functional side-effects in the broker.
To me it's a sign of a poorly designed broker. If the broker is provided with a final desired updated state of the instance, I don't see a reason why a broker would break on processing such an update request.
it is important to be able to clearly log who performed a change, what change they performed, and when.
@bkmartin based on how Kubernetes UX works by default, this information will be logged on the Kubernetes / Service Catalog side, but the parameters that will be received by a broker can be batched and associated with the latest user who made an update request. While this means losing the detailed audit log on the broker side, I don't see any reason why processing such a request would break on the broker side.
This comment might be better suited for the svc-cat repo, but since it could apply to any Platform author and it's about the expectations we had when we added this header to the spec, I decided to put it here... (and sorry it's so long)
I think it's fair to say that the use cases we're concerned about involve situations where a Service Instance is modified very often. In cases where instances are only occasionally modified, I don't think any of this is really much of an issue, because you'll only be dealing with a single user and probably a single set of changes to an Instance at a time.
So, in cases where an Instance is touched often, my guess is that while all of these changes could be due to individual users manually doing the equivalent of a kubectl edit ServiceInstance at the same time, and doing it often, I actually think this action would most likely be hidden behind some kind of tooling or, if driven by a user, might be behind a script. If it were me and I needed to touch an Instance's "foo" property often, it wouldn't be long before I created a modifyFoo.sh file so I could just do modifyFoo.sh myInstance 500. And it would do a GET, modify foo, and PUT for me.
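The hypothetical modifyFoo.sh flow amounts to a plain read-modify-write loop. The sketch below (in Python, with an in-memory dict standing in for the API server; all names and values are invented for illustration) shows why only the one property being changed gets any real review, while everything else is written back untouched:

```python
# Minimal sketch of the GET -> modify -> PUT flow a modifyFoo.sh-style
# tool would perform. The "platform" here is just an in-memory dict
# standing in for the real API server; all names are illustrative.

platform = {
    "myInstance": {"spec": {"parameters": {"foo": 100, "bar": "x"}}}
}

def get_instance(name):
    """GET: fetch the current state of the instance."""
    return platform[name]

def put_instance(name, instance):
    """PUT: write the full (possibly edited) state back."""
    platform[name] = instance

def modify_foo(name, value):
    """Automated read-modify-write: only 'foo' is consciously reviewed;
    every other field -- including someone else's edits -- is written
    back as-is."""
    instance = get_instance(name)
    instance["spec"]["parameters"]["foo"] = value
    put_instance(name, instance)

modify_foo("myInstance", 500)
print(platform["myInstance"]["spec"]["parameters"]["foo"])  # 500
```

Note that "bar" survives the write unchanged and unexamined, which is exactly the review-less take-ownership this comment is worried about.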
Whether in any one person's case they actually do that or not doesn't matter; the point is that it's not unreasonable to think that people will do so, because people like tools and automation. I bring this up because some people are saying that in the K8s case the 2nd user to modify an Instance will see the 1st user's edits and review/approve/take ownership of them. In cases where they're doing this often, I suspect the fact that they do it often would actually cause them to review the existing state less carefully, because of this automation/tooling aspect.
So, imagine the case where a less-authorized user modified an Instance and the Broker rejected it, e.g. they asked for 1000 VMs. During that time a more-authorized user (perhaps an admin type of person who does this stuff often, so they wrote a utility script) runs their script and modifies a different property on the Instance, e.g. the logging level, because they were debugging something. It gets accepted by the Broker and, due to their more elevated privileges, it includes user1's edits. In a world where the identity of the person associated with each change matters (which is the reason the Broker is monitoring the header), this flow would be unacceptable and could lead to very serious negative consequences.
My point is that changes made to an Instance need to be explicitly accepted (or done) by a user in order for this Header to be able to work properly. "Assuming" people do things like review all properties of an Instance for correctness even when all they're focused on is one property, is a very risky thing for a Platform to do. So, while I agree that the OSB spec should not mandate how a Platform defines who the user is, the Platform must make a choice that is consistent with the purpose behind why this Header was added in the first place.
I bring this up because some people are saying that in the K8s case the 2nd user to modify an Instance will see the 1st user's edits and review/approve/take-ownership of them.
We don't care whether a physical person actually saw something with their eyes or not; we can't control that.
We just claim that a Kubernetes user can see the entire spec it updates (and if the user tries to update a stale spec, the request will be rejected with 409 Conflict), and this user takes responsibility for the entire spec by design.
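The optimistic-concurrency behavior being claimed here can be modeled with a toy sketch (this is illustrative Python, not apiserver code; the class and field names are invented, mimicking Kubernetes resourceVersion checking):

```python
class Conflict(Exception):
    """Stands in for an HTTP 409 Conflict response."""

class Store:
    """Toy model of Kubernetes-style optimistic concurrency: every write
    must carry the resourceVersion it was read at, or it is rejected."""
    def __init__(self, spec):
        self.spec = spec
        self.resource_version = 1

    def read(self):
        return dict(self.spec), self.resource_version

    def write(self, new_spec, resource_version):
        if resource_version != self.resource_version:
            raise Conflict("409: spec changed since it was read")
        self.spec = new_spec
        self.resource_version += 1

store = Store({"bar": "old"})

# User A and User B both read the same version of the spec.
spec_a, rv_a = store.read()
spec_b, rv_b = store.read()

# User A writes first, which bumps the resourceVersion.
spec_a["bar"] = "a-change"
store.write(spec_a, rv_a)

# User B's write against the now-stale version is rejected; B must
# re-read (and thus can see A's change), then resubmit the whole spec,
# taking responsibility for all of it.
spec_b["baz"] = "b-change"
try:
    store.write(spec_b, rv_b)
except Conflict:
    spec_b2, rv_b2 = store.read()   # now includes A's change
    spec_b2["baz"] = "b-change"
    store.write(spec_b2, rv_b2)

print(store.spec)  # {'bar': 'a-change', 'baz': 'b-change'}
```

The 409 guarantees User B resubmits against a spec that already contains User A's edit; whether B actually reviewed that edit is the point of contention in this thread.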
changes made to an Instance need to be explicitly accepted (or done) by a user
They are explicitly accepted by a user, by design. That's how the whole Kubernetes ecosystem works, and I don't see any reason to question that for Service Catalog.
P.S. I still fail to understand why we are discussing implementation details here of something that is not part of the OSB contract. All that Service Catalog has to do to be OSB compliant is to send a valid OSB request with a valid user, and how this request was produced is not a concern of OSB.
They are explicitly accepted by a user, by design. That's how the whole Kubernetes ecosystem works, and I don't see any reason to question that for Service Catalog.
Theory vs reality. In theory since they can see all edits, sure. In reality, see my script example. And when the Header value has serious business impact, reality is all that matters.
The entire "by design" aspect is something I'm not sure I can agree with, since K8s also by design assumes that once something is written to the Spec it's been verified to some extent (e.g. the user is authorized to do so, values "seem" ok, etc.), and anything beyond that point in time doesn't need to be associated with the original user, and the diff of edits does not matter. Whether you want to call it "batching" or "last one wins", it's the same net effect for the downstream processing: critical data is now lost and the Broker is unable to do its job properly. We can't ignore this.
Closing as hopefully #476 resolves this issue
Let's say there's a platform (like K8s) that has the potential for a delay between the time someone updates an Instance in K8s and the time the osbapi.update() request is sent to the Broker managing the Instance. And during that delay a 2nd user updates some other aspect (non-overlapping update) of the Instance in K8s. Would it be consistent with the OSB API spec if:
1 - only one osbapi.update() is sent to the Broker and it includes both sets of changes batched together
2 - only the 2nd user's identity is included in the Originating Identity Header with this batched update
In particular due to this line in the spec:
If present on a request, the OriginatingIdentity header MUST contain the identity information for the Platform's user that took the action to cause the request to be sent.
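To make the question concrete, here is a rough sketch of what such a batched update could look like on the wire, assuming the answer to both questions is "yes". The header shape (platform name followed by a base64-encoded JSON value) follows the spec's Originating Identity scheme; the instance ID, parameter names, and user identity below are invented for illustration:

```python
import base64
import json

# User A set "foo"; User B later set "bar". The platform batches both
# changes into a single osbapi.update() call. All values are invented.
merged_parameters = {"foo": 1000, "bar": "debug"}

# Only User B's identity ends up in the Originating Identity header,
# even though the body also carries User A's change.
identity = {"username": "user-b", "groups": ["admins"]}
header_value = "kubernetes " + base64.b64encode(
    json.dumps(identity).encode("utf-8")).decode("ascii")

request = {
    "method": "PATCH",
    "path": "/v2/service_instances/my-instance",  # instance ID invented
    "headers": {"X-Broker-API-Originating-Identity": header_value},
    "body": {"parameters": merged_parameters},
}
```

From the Broker's side, nothing in this request reveals that "foo" was actually requested by a different user than the one named in the header, which is the crux of the issue.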