napalm-automation-community / napalm-srlinux

NAPALM driver for Nokia SR Linux
Apache License 2.0
5 stars · 12 forks

Testing - Containerlab and Mocking / Stubbing #57

Open tardoe opened 3 weeks ago

tardoe commented 3 weeks ago

Opening this issue to begin discussion on how we should handle testing with v2. Currently the unit tests return mocked data and integration tests require containerlab to be spun up.
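For readers unfamiliar with the current setup, a mocked unit test typically looks something like the sketch below: the driver's JSON-RPC call is patched to return canned data, so no device is needed. The class and method names here are hypothetical stand-ins, not the actual driver API.

```python
# Hedged sketch of a mocked unit test; FakeDriver and _jsonrpc_get are
# illustrative names, not the real napalm-srlinux driver interface.
from unittest import mock


class FakeDriver:
    """Toy stand-in for the SR Linux driver."""

    def _jsonrpc_get(self, path):
        # In the real driver this would issue a JSON-RPC request to a device.
        raise NotImplementedError("would hit a real device")

    def get_hostname(self):
        return self._jsonrpc_get("/system/name/host-name")


def test_get_hostname_mocked():
    drv = FakeDriver()
    # Patch the transport call so the getter runs without any device.
    with mock.patch.object(drv, "_jsonrpc_get", return_value="srl1"):
        assert drv.get_hostname() == "srl1"


test_get_hostname_mocked()
```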

I propose the following:

@hellt thoughts / ideas / comments / discussion?

jbemmel commented 3 weeks ago

I've always considered the live integration testing with Containerlab an important differentiator for this plugin. Once Containerlab is installed and the Docker image is downloaded, the tests can be performed "offline" and are entirely optional. CI/CD workflows on GitHub spin up Containerlab instances internally.

Having a 'snapshot' of JSONRPC output isn't as comprehensive as being able to spin up any release you want, with any node configuration / topology, and then running regression testing against that. I'd expect more bugs to be found in the area of specific (combinations of) config options and/or race conditions.

In short, I'm not sure what problem you'd be solving?

tardoe commented 3 weeks ago

@jbemmel there would still be the option of running the tests locally, but being able to run tests easily (e.g. for small fixes) without needing a full lab lowers the barrier to entry.

hellt commented 3 weeks ago

@tardoe I'd say at a high level I'd split the testing into the following parts:

  1. Raw input extraction. This is what you proposed in your first bullet point. The goal is to capture the raw JSON responses for each getter, to be used in unit testing of the getters.
  2. Local/remote unit testing of getters based on the previous step. This is where we take the raw JSON input extracted before, apply the getter logic, and compare the result with a golden file.
  3. Unit testing of cfg workflows (merge, replace, etc.). I think this is where we might always require a real system, granted it is easy to spin one up locally or in Actions.
  4. GH Actions would use all of the above, also introducing a matrix of Python versions into the mix. If we extract the raw JSON responses for each getter we support, there is no real need to perform a proper e2e test for every getter, although it is not a problem to add those to ensure the real-life scenario works. The cfg operations, though, should always run in Actions by spinning up a real device, I think.

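The golden-file comparison in step 2 could be sketched roughly as follows, with the captured raw response and the expected golden output inlined for illustration. In the repo these would presumably live as fixture files; the payload structure and all names below are hypothetical, not the exact SR Linux schema or driver API.

```python
# Sketch of step 2: apply getter logic to a captured raw JSON-RPC
# response and compare the result against a golden file.
# Data is inlined here; in practice it would be loaded from fixtures.

# Raw JSON-RPC "get" response as it might be captured in step 1
# (structure is illustrative only).
RAW_RESPONSE = {
    "result": [
        {"system": {"information": {"version": "v23.10.1",
                                    "description": "SR Linux"}}}
    ]
}

# Golden output the getter is expected to produce.
GOLDEN = {"os_version": "v23.10.1", "model": "SR Linux"}


def get_facts(raw: dict) -> dict:
    """Toy stand-in for a getter: map raw JSON into NAPALM-style facts."""
    info = raw["result"][0]["system"]["information"]
    return {"os_version": info["version"], "model": info["description"]}


def test_get_facts_golden():
    # The unit test never talks to a device: raw fixture in, golden file out.
    assert get_facts(RAW_RESPONSE) == GOLDEN


test_get_facts_golden()
```

The same pattern scales across getters by parametrizing over fixture directories, one raw/golden pair per getter.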
jbemmel commented 3 weeks ago

@tardoe there is no barrier thanks to Containerlab and SR Linux. Unlike most other platforms that developers may be familiar with, SR Linux makes an easy-to-obtain free container image available. This makes it trivial to construct CI/CD pipelines such as the one illustrated in this repo - without the user having to lift a finger!
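For illustration, a minimal containerlab topology for such a pipeline can be as small as the following (the topology and node names are assumptions; `ghcr.io/nokia/srlinux` is the publicly available image):

```yaml
# Hypothetical one-node lab for driver tests; containerlab's kind for
# SR Linux is nokia_srlinux.
name: napalm-srl-test
topology:
  nodes:
    srl1:
      kind: nokia_srlinux
      image: ghcr.io/nokia/srlinux
```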

Consequently, there is no need to jump through hoops like you are suggesting. Snapshot-style testing is the best other platforms can do, but ours (yours) is better than that.