netmod-wg / opstate-reqs

draft-chairs-netmod-opstate-reqs

Support for situations when structure of intended configuration is not the same as applied #5

Open cmoberg opened 9 years ago

cmoberg commented 9 years ago

This is in regards to section 2.1.C in draft-chairs-netmod-opstate-reqs-00:

C. The data model for the applied configuration is the same as the data model for the intended configuration (same leaves)

While the content of an intended configuration leaf can be seen to represent the content of a datastore, the applied configuration is described to reflect operational state in the running system.

The assumption of a 1:1 mapping ignores situations where a change to an intended configuration leaf value may result in several instances of applied configuration leaf values (operational state) being updated in the backend framework across several subsystems.

The operational state of this set of distributed values may converge over different time deltas, and there may even be situations where there is only partial convergence (i.e. some subsystems do not accept the intended value and do not transfer it to applied). This would be especially pronounced in asynchronous systems.

I suggest we discuss relaxing the requirement to say something along the lines of:

C.  The data model for the intended configuration has a related
    data model for the applied configuration
mjethanandani commented 9 years ago

+1.

Most of the current discussion seems to center around a flag that represents whether a configuration has been written or not. But I believe the requirement is more than that. It is, as Carl suggests, the culmination of a set of different subsystems that together represent the applied state.

einarnn commented 9 years ago

I think that before relaxing this requirement it would be sensible to have some concrete examples of where this issue is actually exhibited and determine if the intended vs applied simple mirroring is, indeed, not possible. What I would caution is that while the problem of state entanglement across components at the backend is real, the goal we should be looking at is how to make that simpler for the operator.

Happy to discuss this more.

cmoberg commented 9 years ago

As mentioned on email, we have made an informal inquiry across Juniper, Alcatel-Lucent and Cisco around this requirement. The feedback is that none of them have management frameworks that would be able to support this 1:1 mapping in a meaningful fashion.

On the other end of the spectrum I can give you one example from some work we have done with mapping YANG models onto commercial networking stacks. This stack has an SNMP-based integration interface, so setting up a single value (e.g. a route redistribution filter type) entails poking two locations in the backend data tree. Now, in an asynchronous system, updating these two values could [1] take quite different amounts of time, meaning that the two backend values would differ for a while, and [2] partially fail (i.e. only one value gets set in the backend).

We need to think about what the "applied" value of the intended parameter would mean in situations [1] and [2] above.
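
To make [1] and [2] concrete, here is a minimal Python sketch, purely illustrative, of one intended value fanning out asynchronously to two backend locations. The backend location names, delays and failure probability are all invented; this is not modelled on any particular stack.

import concurrent.futures
import random
import time

# Two backend locations that a single intended leaf maps onto (names invented).
backend = {"redistribution-filter/type": None, "route-policy/filter-kind": None}

def apply_to_backend(location, value):
    """One subsystem applies the value after an unpredictable delay, or rejects it."""
    time.sleep(random.uniform(0.1, 2.0))   # case [1]: convergence times differ
    if random.random() < 0.2:              # case [2]: a subsystem may refuse the value
        raise RuntimeError(f"{location} rejected {value!r}")
    backend[location] = value

def set_intended(value):
    """Fan one intended value out to both locations and report per-location results."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {pool.submit(apply_to_backend, loc, value): loc for loc in backend}
        results = {}
        for fut in concurrent.futures.as_completed(futures):
            loc = futures[fut]
            try:
                fut.result()
                results[loc] = "applied"
            except RuntimeError as err:
                results[loc] = "failed: " + str(err)
    return results

# While set_intended() runs, the two backend values can differ ([1]); if one call
# fails, only part of the intended change is ever applied ([2]).
print(set_intended("prefix-list"))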

einarnn commented 9 years ago

That any existing management framework doesn't support what would, in effect, be a new capability isn't all that surprising. I don't think that should result in a requirement change or relaxation on its own.

In that case of [1], the applied configuration would not match the intended configuration until both of your relevant backend tasks had completed successfully.

If the end result is [2], then the applied configuration will not match the intended configuration, and some form of error would be signaled.

The existing proposal in the opstate draft, if I recall correctly, does not provide for "errors" as such, but the proposal in draft-wilton-netmod-opstate-yang does allow for error metadata to be returned in "cfg-status-reason", indicating why applied != intended (may need some more formalizing). This metadata could also be used to reflect the issues in [1]. Looking at draft-kwatsen-netmod-opstate, it provides the "related-state" extension, but, in its current form, this extension doesn't really help the end user figure out that the related state is "broken", leaving that still as an exercise for the end user.

cmoberg commented 9 years ago

It is surprising, since this is standards work. Writing manageability standards that none of the major vendors can support does not have a good track record. YANG was carefully developed the other way around, let's make sure that we can use it to describe what is actually in place.

For [1] above, what would the value of the applied configuration be while the backend tasks are not completed?

For [2], ok.

The Wilton and Watsen drafts miss the goal, since they are specific to NETCONF/RESTCONF, which is explicitly excluded in the opstate draft. That's why the opstate team wants this in the language, and not in the protocol.

einarnn commented 9 years ago

It is surprising, since this is standards work. Writing manageability standards that none of the major vendors can support does not have a good track record. YANG was carefully developed the other way around, let's make sure that we can use it to describe what is actually in place.

I have no objection to describing what is in place as such, but what is in place today in most management systems is a complex piece of logic that takes the intended configuration state, what the running-config on a device actually is when queried and a bunch of operational state to try and determine if the box is actually doing what it was asked to do.

The way I am interpreting the ask from the openconfig team is that they would really like to see this complexity reduced significantly from an operator's perspective.

For [1] above, what would the value of the applied configuration be while the backend tasks are not completed?

I think there are a variety of answers possible here. Probably not exhaustive, but:

• No value, because the sequence of operations required has not completed...but that would require that implementations actually fully understand backend state entanglement.

• The value "so far", by which I mean it may reflect the result of any segments of the application of config that have completed.

I think the above needs to be well understood and agreed upon in the WG.

I think that any solution needs to define this in a way that allows some form of metadata to be provided to indicate that the current state is "in-progress". Also, clients have to be prepared to accept asynchronous changes to the "applied config state" (or whatever it becomes), which could possibly be distributed using a mechanism such as draft-clemm-netconf-yang-push.
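
As a rough illustration, not taken from any of the drafts, here is what an applied value carrying "in-progress"/error metadata could look like from a client's point of view; the ApplyStatus and AppliedLeaf names are invented.

import enum
from dataclasses import dataclass
from typing import Optional

class ApplyStatus(enum.Enum):
    IN_PROGRESS = "in-progress"   # backend tasks have not finished yet
    APPLIED = "applied"           # the applied value now matches the intended value
    FAILED = "failed"             # applied != intended; reason recorded below

@dataclass
class AppliedLeaf:
    intended: str
    applied: Optional[str]        # None models the "no value" answer above
    status: ApplyStatus
    reason: Optional[str] = None  # why applied differs from intended, if known

# A client reading applied configuration mid-change might observe either of these:
in_flight = AppliedLeaf(intended="65000", applied=None, status=ApplyStatus.IN_PROGRESS)
partial = AppliedLeaf(intended="65000", applied="64999", status=ApplyStatus.FAILED,
                      reason="one subsystem rejected the new value")
print(in_flight, partial, sep="\n")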

abierman commented 9 years ago

On Mon, Sep 21, 2015 at 1:03 AM, Carl Moberg notifications@github.com wrote:

It is surprising, since this is standards work. Writing manageability standards that none of the major vendors can support does not have a good track record. YANG was carefully developed the other way around, let's make sure that we can use it to describe what is actually in place.

For [1] above, what would the value of the applied configuration be while the backend tasks are not completed?

For [2], ok.

The Wilton and Watsen drafts miss the goal, since they are specific to NETCONF/RESTCONF, which is explicitly excluded in the opstate draft. That's why the opstate team wants this in the language, and not in the protocol.

There is no reason the requirements cannot be applied to a different protocol. IMO, unspecified secret protocols are out of scope, but even if they were not, it is quite possible to design a protocol operation that only retrieves data with the desired properties. Nothing new or difficult about that.


cmoberg commented 9 years ago

On Sep 21, 2015, at 3:17 PM, Andy Bierman notifications@github.com wrote:

On Mon, Sep 21, 2015 at 1:03 AM, Carl Moberg notifications@github.com wrote:

It is surprising, since this is standards work. Writing manageability standards that none of the major vendors can support does not have a good track record. YANG was carefully developed the other way around, let's make sure that we can use it to describe what is actually in place.

For [1] above, what would the value of the applied configuration be while the backend tasks are not completed?

For [2], ok.

The Wilton and Watsen drafts miss the goal, since they are specific to NETCONF/RESTCONF, which is explicitly excluded in the opstate draft. That's why the opstate team wants this in the language, and not in the protocol.

There is no reason the requirements cannot be applied to a different protocol. IMO, unspecified secret protocols are out of scope, but even if they were not, it is quite possible to design a protocol operation that only retrieves data with the desired properties. Nothing new or difficult about that.

The reason I bring this up as an issue is that I agree with the above. The requirements are such that, in my opinion, we should consider pushing them to the protocol. But the opstate draft that we have been asked to comment on explicitly suggests solving this in the language domain.

cmoberg commented 9 years ago

On Sep 21, 2015, at 2:13 PM, Einar Nilsen-Nygaard notifications@github.com wrote:

It is surprising, since this is standards work. Writing manageability standards that none of the major vendors can support does not have a good track record. YANG was carefully developed the other way around, let's make sure that we can use it to describe what is actually in place.

I have no objection to describing what is in place as such, but what is in place today in most management systems is a complex piece of logic that takes the intended configuration state, what the running-config on a device actually is when queried and a bunch of operational state to try and determine if the box is actually doing what it was asked to do.

What current generation applications do, in my experience, is that they:

  1. Change the configuration (e.g. add a new neighbor)
  2. …and then check for the expected behavior (e.g. is the BGP neighbor state = “established”)

    But please note that BGP neighbor state is “derived state” per the draft. So in an opstate-compliant implementation, the example would be (a rough sketch of this flow follows just below):

  1. Change the intended configuration (e.g. add a new neighbor)
  2. Check the applied configuration (e.g. check that the new neighbor configuration is installed in the BGP subsystem/daemon)
  3. …and then check for the derived state (e.g. is the BGP neighbor state = “established”)

    The applied configuration won’t tell the management application anything about the actual behavior of the subsystem (e.g. state machine status, packet counters on interfaces, etc).
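
A rough sketch of that three-step flow, assuming a hypothetical client API and openconfig-style config/state paths (none of these names are taken from the drafts):

import time

def configure_bgp_neighbor(client, neighbor_ip, timeout=30.0):
    """Sketch of the three-step flow above, against a hypothetical 'client' API."""
    # 1. Change the intended configuration.
    client.set_config(f"/bgp/neighbors/neighbor[{neighbor_ip}]/config/peer-address",
                      neighbor_ip)

    # 2. Wait until the applied configuration reflects the intended value.
    deadline = time.time() + timeout
    while time.time() < deadline:
        applied = client.get_state(
            f"/bgp/neighbors/neighbor[{neighbor_ip}]/state/peer-address")
        if applied == neighbor_ip:
            break
        time.sleep(1)
    else:
        raise TimeoutError("applied configuration never converged to intended")

    # 3. Only then look at the derived state, which reports behaviour, not config.
    session_state = client.get_state(
        f"/bgp/neighbors/neighbor[{neighbor_ip}]/state/session-state")
    return session_state == "ESTABLISHED"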

The way I am interpreting the ask from the openconfig team is that they would really like to see this complexity reduced significantly from an operator's perspective.

For [1] above, what would the value of the applied configuration be while the backend tasks are not completed?

I think there are a variety of answers possible here. Probably not exhaustive, but:

• No value, because the sequence of operations required has not completed...but that would require that implementations actually fully understand backend state entanglement.

• The value "so far", by which I mean it may reflect the result of any segments of the application of config that have completed.

I think the above needs to be well understood and agreed upon in the WG.

I think that any solution needs to define this in a way that allows some form of metadata to be provided to indicate that the current state is "in-progress". Also, clients have to be prepared to accept asynchronous changes to the "applied config state" (or whatever it becomes), which could possibly be distributed using a mechanism such as draft-clemm-netconf-yang-push.

Since the WG does not have any specific protocol mappings in mind, many of the issues around this draft stem from pushing “features” into the language that could be solved in the protocol. This ‘applied state’ issue could be somewhat trivially solved in NETCONF/RESTCONF by adding RPC metadata or a new data store. But since the opstate draft assumes non-NETCONF, that simply does not help unless that fundamental assumption changes.

einarnn commented 9 years ago

What current generation applications do, in my experience, is that they:

  1. Change the configuration (e.g. add a new neighbor)
  2. ...and then check for the expected behavior (e.g. is the BGP neighbor state = “established”)

But please note that BGP neighbor state is “derived state” per the draft. So in an opstate-compliant implementation, the example would be:

  1. Change the intended configuration (e.g. add a new neighbor)
  2. Check the applied configuration (e.g. check that the new neighbor configuration is installed in the BGP subsystem/daemon)
  3. ...and then check for the derived state (e.g. is the BGP neighbor state = “established”)

[2] and [3] are not linked. Derived state is still just derived state. Why would an opstate-compliant implementation check the derived state as part of applying configuration? It shouldn't. What is important in terms of the difference between intended and applied is whether or not the correct control plane configuration for BGP was established as far as the operator is concerned. Whether the BGP neighbor came up or not is important, but doesn't affect this.

The applied configuration won’t tell the management application anything about the actual behavior of the subsystem (e.g. state machine status, packet counters on interfaces, etc).

Correct, applied will equal intended, at least telling the operator that, as far as the device is concerned, it is configured in the way the operator asked it to be configured. It's not meant to tell you about derived state. That is still required.

This is potentially important as what you missed out from your first description is that provisioning systems will, today, often read back the configuration they tried to submit to make sure that the configuration they asked to be established was actually established.

The way I am interpreting the ask from the openconfig team is that they would really like to see this complexity reduced significantly from an operator's perspective.

For [1] above, what would the value of the applied configuration be while the backend tasks are not completed?

I think there are a variety of answers possible here. Probably not exhaustive, but:

  • No value, because the sequence of operations required has not completed...but that would require that implementations actually fully understand backend state entanglement.
  • The value "so far", by which I mean it may reflect the result of any segments of the application of config that have completed.

I think the above needs to be well understood and agreed upon in the WG.

I completely agree.

I think that any solution needs to define this in a way that allows some form of metadata to be provided to indicate that the current state is "in-progress". Also, clients have to be prepared to accept asynchronous changes to the "applied config state" (or whatever it becomes), which could possibly be distributed using a mechanism such as draft-clemm-netconf-yang-push.

Since the WG does not have any specific protocol mappings in mind, many of the issues around this draft stem from pushing “features” into the language that could be solved in the protocol. This ‘applied state’ issue could be somewhat trivially solved in NETCONF/RESTCONF by adding RPC metadata or a new data store. But since the opstate draft assumes non-NETCONF, that simply does not help unless that fundamental assumption changes.

IIRC, in the interim meeting Rob Shakir did talk about the possibility of defining a set of requirements on whatever the underlying transport and encoding is. While there is obviously some dislike of NETCONF, for whatever reason, it seems that there is an acceptance that not everything can be defined within the model.

abierman commented 9 years ago

On Mon, Sep 21, 2015 at 6:34 AM, Carl Moberg notifications@github.com wrote:

On Sep 21, 2015, at 2:13 PM, Einar Nilsen-Nygaard < notifications@github.com> wrote:

It is surprising, since this is standards work. Writing manageability standards that none of the major vendors can support does not have a good track record. YANG was carefully developed the other way around, let's make sure that we can use it to describe what is actually in place.

I have no objection to describing what is in place as such, but what is in place today in most management systems is a complex piece of logic that takes the intended configuration state, what the running-config on a device actually is when queried and a bunch of operational state to try and determine if the box is actually doing what it was asked to do.

What current generation applications do, in my experience, is that they:

  1. Change the configuration (e.g. add a new neighbor)
  2. …and then check for the expected behavior (e.g. is the BGP neighbor state = “established”)

But please note that BGP neighbor state is “derived state” per the draft. So in an opstate-compliant implementation, the example would be:

  1. Change the intended configuration (e.g. add a new neighbor)
  2. Check the applied configuration (e.g. check that the new neighbor configuration is installed in the BGP subsystem/daemon)
  3. …and then check for the derived state (e.g. is the BGP neighbor state = “established”)

The applied configuration won’t tell the management application anything about the actual behavior of the subsystem (e.g. state machine status, packet counters on interfaces, etc).

So what is the 1 leaf that is configured by the client that shows up in applied config? Since the client does not actually configure BGP neighbor-state, what leaf is used as the test that the entire route was updated correctly?

The premise is that the solution will be generic:

configure /foo/bar/config/baz

read /foo/bar/state/baz until it equals /foo/bar/config/baz
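
Sketched as a generic polling loop (the client.get call here is hypothetical), that premise is roughly:

import time

def wait_for_applied(client, base="/foo/bar", leaf="baz", interval=1.0, timeout=60.0):
    """Read <base>/state/<leaf> until it equals <base>/config/<leaf>, or give up."""
    intended = client.get(f"{base}/config/{leaf}")
    deadline = time.time() + timeout
    while time.time() < deadline:
        if client.get(f"{base}/state/{leaf}") == intended:
            return True            # applied has converged to intended
        time.sleep(interval)
    return False                   # still divergent when we gave up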

IMO it will be quite interesting to see if all the router vendors spend $$$ to add generic support for this problem. The money might be better spent speeding up these systems that are taking minutes and hours to activate config.

The way I am interpreting the ask from the openconfig team is that they would really like to see this complexity reduced significantly from an operator's perspective.

For [1] above, what would the value of the applied configuration be while the backend tasks are not completed?

I think there are a variety of answers possible here. Probably not exhaustive, but:

• No value, because the sequence of operations required has not completed...but that would require that implementations actually fully understand backend state entanglement.

• The value "so far", by which I mean it may reflect the result of any segments of the application of config that have completed.

I think the above needs to be well understood and agreed upon in the WG.

I think that any solution needs to define this in a way that allows some form of metadata to be provided to indicate that the current state is "in-progress". Also, clients have to be prepared to accept asynchronous changes to the "applied config state" (or whatever it becomes), which could possibly be distributed using a mechanism such as draft-clemm-netconf-yang-push.

Since the WG does not have any specific protocol mappings in mind, many of the issues around this draft stem from pushing “features” into the language that could be solved in the protocol. This ‘applied state’ issue could be somewhat trivially solved in NETCONF/RESTCONF by adding RPC metadata or a new data store. But since the opstate draft assumes non-NETCONF, that simply does not help unless that fundamental assumption changes.

It would be an interesting precedent to change the YANG charter so it must work with any unspecified and/or proprietary protocol that claims to use YANG. Usually we focus on achieving standards-based interoperability, which is hard enough.


Andy

einarnn commented 9 years ago

The applied configuration won’t tell the management application anything about the actual behavior of the subsystem (e.g. state machine status, packet counters on interfaces, etc).

So what is the 1 leaf that is configured by the client that shows up in applied config? Since the client does not actually configure BGP neighbor-state, what leaf is used as the test that the entire route was updated correctly?

By the "entire route", do you also mean to include the derived state of the BGP neighbor itself?

The intended vs applied just applies to the config you are trying to put on the box. The applied config can match the intended config even if the BGP neighbor state stays unestablished, right?

Intended vs applied can tell you that the BGP control plane on the router you tried to put config on didn't "work" in the sense that the BGP control plane machinery is, for some reason (out of memory, port used by another application, whatever), not able to bring itself to the state you have asked for.

Maybe I've missed something in what you were saying?

abierman commented 9 years ago

On Mon, Sep 21, 2015 at 2:46 PM, Einar Nilsen-Nygaard < notifications@github.com> wrote:

The applied configuration won’t tell the management application anything about the actual behavior of the subsystem (e.g. state machine status, packet counters on interfaces, etc).

So what is the 1 leaf that is configured by the client that shows up in applied config? Since the client does not actually configure BGP neighbor-state, what leaf is used as the test that the entire route was updated correctly?

By the "entire route", do you also mean to include the derived state of the BGP neighbor itself?

The intended vs applied just applies to the config you are trying to put on the box. The applied config can match the intended config even if the BGP neighbor state stays unestablished, right?

Intended vs applied can tell you that the BGP control plane on the router you tried to put config on didn't "work" in the sense that the BGP control plane machinery is, for some reason (out of memory, port used by another application, whatever), not able to bring itself to the state you have asked for.

Maybe I've missed something in what you were saying?

The proposed openconfig solution (I think) is to mirror the BGP config as operational state:

grouping bgp-config { .... }

container config {
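   // the intended configuration, as written by the client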
   uses bgp-config;
}

container state {
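   // the same leaves mirrored as config false, i.e. the operational/applied view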
   config false;
   uses bgp-config;
}

Does the solution work for your use-case? How does the config-mirrored-as-operational-state tell you when the desired state is also the operational state?

Andy


cmoberg commented 9 years ago

On Sep 21, 2015, at 7:26 PM, Einar Nilsen-Nygaard notifications@github.com wrote:

What current generation applications do, in my experience, is that they:

• Change the configuration (e.g. add a new neighbor)
• ...and then check for the expected behavior (e.g. is the BGP neighbor state = “established”)

But please note that BGP neighbor state is “derived state” per the draft. So in an opstate-compliant implementation, the example would be:

• Change the intended configuration (e.g. add a new neighbor)
• Check the applied configuration (e.g. check that the new neighbor configuration is installed in the BGP subsystem/daemon)
• ...and then check for the derived state (e.g. is the BGP neighbor state = “established”)

[2] and [3] are not linked. Derived state is still just derived state. Why would an opstate-compliant implementation check the derived state as part of applying configuration? It shouldn't. What is important in terms of the difference between intended and applied is whether or not the correct control plane configuration for BGP was established as far as the operator is concerned. Whether the BGP neighbor came up or not is important, but doesn't affect this.

In my experience, management applications check the related derived state to figure out whether the purpose of actually making the change has been fulfilled. The application logic is commonly such that the reason for e.g. configuring a new neighbor in the first place is the higher-level expectation that prefixes will be exchanged.

The applied configuration won’t tell the management application anything about the actual behavior of the subsystem (e.g. state machine status, packet counters on interfaces, etc).

Correct, applied will equal intended, at least telling the operator that, as far as the device is concerned, it is configured in the way the operator asked it to be configured. It's not meant to tell you about derived state. That is still required. This is potentially important as what you missed out from your first description is that provisioning systems will, today, often read back the configuration they tried to submit to make sure that the configuration they asked to be established was actually established.

I don’t think I know of any systems that perform a configuration task, immediately followed by the equivalent of a “show running” on the same parameters that were just successfully set, just to make sure they are now part of the running configuration. In our informal inquiry among router vendors, we found that the “committed” configuration (i.e. the intended config) ends up in the same datastore that a “show config” would pull data from. So there is literally no practical way they can differ, afaik.

The way I am interpreting the ask from the openconfig team is that they would really like to see this complexity reduced significantly from an operator's perspective.

For [1] above, what would the value of the applied configuration be while the backend tasks are not completed?

I think there are a variety of answers possible here. Probably not exhaustive, but:

• No value, because the sequence of operations required has not completed...but that would require that implementations actually fully understand backend state entanglement.

• The value "so far", by which I mean it may reflect the result of any segments of the application of config that have completed.

I think the above needs to be well understood and agreed upon in the WG.

I completely agree.

Ok.

I think that any solution needs to define this in a way that allows some form of metadata to be provided to indicate that the current state is "in-progress". Also, clients have to be prepared to accept asynchronous changes to the "applied config state" (or whatever it becomes), which could possibly be distributed using a mechanism such as draft-clemm-netconf-yang-push.

Since the WG does not have any specific protocol mappings in mind, many of the issues around this draft stem from pushing “features” into the language that could be solved in the protocol. This ‘applied state’ issue could be somewhat trivially solved in NETCONF/RESTCONF by adding RPC metadata or a new data store. But since the opstate draft assumes non-NETCONF, that simply does not help unless that fundamental assumption changes.

IIRC, in the interim meeting Rob Shakir did talk about the possibility of defining a set of requirements on whatever the underlying transport and encoding is. While there is obviously some dislike of NETCONF, for whatever reason, it seems that there is an acceptance that not everything can be defined within the model.

Yep, and that delineation needs to be made part of the work at hand.

einarnn commented 9 years ago

Andy,

The proposed openconfig solution (I think) is to mirror the BGP config as operational state:

grouping bgp-config { .... }

container config { uses bgp-config; }

container state { config false; uses bgp-config; }

Does the solution work for your use-case? How does the config-mirrored-as-operational-state tell you when the desired state is also the operational state?

If by "operational state" you mean "the BGP neighbor I configured is now established", then it's not meant to solve that problem. That would need derived operational state, based on the configured BGP neighbor successfully establishing a session.

If by "operational state" you mean "my local BGP daemon is now configured such that I can expect to be able to establish a session with the configured neighbor", then it can achieve that if you have a "compliant implementation".

Cheers,

Einar

abierman commented 9 years ago

On Mon, Sep 21, 2015 at 5:28 PM, Einar Nilsen-Nygaard < notifications@github.com> wrote:

Andy,

The proposed openconfig solution (I think) is to mirror the BGP config as operational state:

grouping bgp-config { .... }

container config { uses bgp-config; }

container state { config false; uses bgp-config; }

Does the solution work for your use-case? How does the config-mirrored-as-operational-state tell you when the desired state is also the operational state?

If by "operational state" you mean "the BGP neighbor I configured is now established", then it's not meant to solve that problem. That would need derived operational state, based on the BGP neighbors successfully establishing a session.

This seems like the operationally useful query. I guess the definition of "applied config" needs lots of explanation. Is it applied to the network or not? I thought that was the point of this exercise, but perhaps not.

If by "operational state" you mean "my local BGP daemon is now configured such that I can expect to be able to establish a session with the configured neighbor", then it can achieve that if you have a "compliant implementation".

How long does it take a router to respond "I accepted your edit request"? Should be less than a second, right? Why would it take minutes or hours for a router to just accept the edit? The premise of all 3 solutions is that the difference in state is of a long duration, such that polling the server for intermediate status is needed.

Cheers,

Einar

Andy


einarnn commented 9 years ago

If by "operational state" you mean "the BGP neighbor I configured is now established", then it's not meant to solve that problem. That would need derived operational state, based on the BGP neighbors successfully establishing a session.

This seems like the operationally useful query.

Yes, it's useful, but I'm thinking of this as a combination of the applied state being what was asked for and the BGP neighbor state being established. The applied config can be correct, but the BGP neighbor can still be down for any number of reasons. I don't think we should be trying to squeeze two things into one, should we?

I guess the definition of "applied config" needs lots of explanation. Is it applied to the network or not? I thought that was the point of this exercise, but perhaps not.

I'm looking at the goal of whatever solution is finally taken forward being to make it easier to determine that the network is doing what we want it to. This applied vs intended thing we are discussing here is only part of the overall solution space, and I do believe that there is a difference between intended and applied.

If by "operational state" you mean "my local BGP daemon is now configured such that I can expect to be able to establish a session with the configured neighbor", then it can achieve that if you have a "compliant implementation".

How long does it take a router to respond "I accepted your edit request". Should be less than a second, right? Why would it take minutes or hours for a router to just accept the edit? The premise of all 3 solutions is that the difference in state is of a long duration, such that polling the server for intermediate status is needed.

I've not read anything anywhere that suggests a polling-based solution is particularly desired. I guess polling could be used, but I don't see it as very effective, and I'd be surprised if that was a goal.

I don't think we can make the assumption that intended config state always makes it correctly into the operational config state of a device. This can be for a number of reasons, and so the ability to relatively easily see that difference somehow, and, if possible, be informed of why the delta exists, seems useful to me. I'm not saying any one of the three proposals is the "right" solution, but I do think this is a useful problem to address.

Cheers,

Einar

abierman commented 9 years ago

On Mon, Sep 21, 2015 at 6:01 PM, Einar Nilsen-Nygaard < notifications@github.com> wrote:

If by "operational state" you mean "the BGP neighbor I configured is now established", then it's not meant to solve that problem. That would need derived operational state, based on the BGP neighbors successfully establishing a session.

This seems like the operationally useful query.

Yes, it's useful, but I'm thinking of this as a combination of the applied state being what was asked for and the BGP neighbor state being established. The applied config can be correct, but the BGP neighbor can still be down for any number of reasons. I don't think we should be trying to squeeze two things into one, should we?

I guess the definition of "applied config" needs lots of explanation. Is it applied to the network or not? I thought that was the point of this exercise, but perhaps not.

I'm looking at the goal of whatever solution is finally taken forward being to make it easier to determine that the network is doing what we want it to. This applied vs intended thing we are discussing here is only part of the overall solution space, and I do believe that there is a difference between intended and applied.

If by "operational state" you mean "my local BGP daemon is now configured such that I can expect to be able to establish a session with the configured neighbor", then it can achieve that if you have a "compliant implementation".

How long does it take a router to respond "I accepted your edit request". Should be less than a second, right? Why would it take minutes or hours for a router to just accept the edit? The premise of all 3 solutions is that the difference in state is of a long duration, such that polling the server for intermediate status is needed.

I've not read anything anywhere that suggests a polling-based solution is particularly desired. I guess polling could be used, but I don't see it as very effective, and I'd be surprised if that was a goal.

All of the solutions offer data for the client to read. Even if a notification-based solution is used, there is an assumption that the time between "here is my edit" and "I accepted your edit" is long. I have not seen that, even in asynchronous servers.

I don't think we can make the assumption that intended config state always makes it correctly into the operational config state of a device. This can be for a number of reasons, and so the ability to relatively easily see that difference somehow, and, if possible, be informed of why the delta exists, seems useful to me. I'm not saying any one of the three proposals is the "right" solution, but I do think this is a useful problem to address.

I don't know that the delay between "edit running" and "show running" is long enough to worry about for 99% of the data. IMO the right solution will not impact that 99%.

Cheers,

Einar

Andy


ggrammel commented 9 years ago

If we had an explicit "verify" action that returned a qualified response such as "intended config" == "applied config" and the corresponding opstate is consistent, much of the above might go away. One may even want to limit the "verify" to perform only a subset of the qualification. "verify" could be something the server can perform locally.
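
A toy sketch of what such a qualified "verify" response could look like; the argument shapes and result keys are invented purely to illustrate the idea.

def verify(intended, applied, opstate, checks=None):
    """Hypothetical server-local "verify": compare intended vs applied configuration,
    and optionally qualify a chosen subset of operational-state expectations."""
    result = {
        "config-in-sync": intended == applied,
        "config-diff": {k: (intended.get(k), applied.get(k))
                        for k in set(intended) | set(applied)
                        if intended.get(k) != applied.get(k)},
        "opstate-consistent": True,
    }
    for path, expected in (checks or {}).items():   # limit "verify" to a subset
        if opstate.get(path) != expected:
            result["opstate-consistent"] = False
            result.setdefault("opstate-failures", {})[path] = opstate.get(path)
    return result

# Example: applied configuration lags intended, and the session has not come up yet.
print(verify({"peer-as": "65000"}, {},
             {"session-state": "IDLE"}, checks={"session-state": "ESTABLISHED"}))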

What's more, from a 10,000 ft view, "synchronous" looks an awful lot like a state change with a subsequent "verify", while "asynchronous" is a state change without verification. Not 100% accurate as a comparison, but food for thought.