oshi / oshi6

Repository for planning/coordination of OSHI 6.0
MIT License

Unit testing and platform testing #5

Open cilki opened 6 years ago

cilki commented 6 years ago

Like every library, OSHI needs a solid set of unit tests for the API. With the Driver design pattern, a mock driver can be created that returns configurable values with configurable timings. This should be sufficient for thoroughly testing the API and caching layers.
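A minimal sketch of what such a mock driver could look like (the `Driver<T>` interface below is hypothetical; the real OSHI 6 driver API may differ):

```java
import java.util.function.Supplier;

// Hypothetical driver interface; the actual OSHI 6 API may look different.
interface Driver<T> {
    T query();
}

/** Mock driver returning a configurable value after a configurable delay. */
class MockDriver<T> implements Driver<T> {
    private final Supplier<T> value;
    private final long delayMillis;

    MockDriver(Supplier<T> value, long delayMillis) {
        this.value = value;
        this.delayMillis = delayMillis;
    }

    @Override
    public T query() {
        try {
            Thread.sleep(delayMillis); // simulate a slow native call
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return value.get();
    }
}

class MockDriverDemo {
    public static void main(String[] args) {
        // A "CPU temperature" driver that always takes 50 ms to answer,
        // letting tests assert that the caching layer avoids repeat queries.
        Driver<Double> cpuTemp = new MockDriver<Double>(() -> 42.0, 50);
        System.out.println(cpuTemp.query());
    }
}
```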

Testing the drivers (platform testing) will require some additional infrastructure. Travis CI isn't a good fit for this kind of testing because it only offers a few images. The platform tests should ideally run on every architecture of every major OS family and distribution, so that's at least 60+ (virtual) machines. A full run is likely to take a long time, which is another reason platform testing should be separate from unit testing.
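One way to keep the two suites separate is JUnit 5 tags; a sketch follows. The `platform` tag name is just a convention I'm assuming here, and Maven Surefire could exclude it from the default run with `<excludedGroups>platform</excludedGroups>`:

```java
import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Hypothetical driver test class, for illustration only.
class LinuxCpuDriverTest {

    @Test
    @Tag("platform") // skipped in the default unit-test run; executed only
                     // on the dedicated platform-test machines
    void queriesRealHardware() {
        // A real test would exercise the actual Linux driver here; omitted
        // since the OSHI 6 driver API is not yet defined.
        assertNotNull(System.getProperty("os.name"));
    }
}
```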

Docker could reduce the burden for Linux testing, but the sandboxing may also reduce the benefit of testing in a container. The same could be said (to a lesser degree, I think) for virtual machines.

Most projects wouldn't bother with testing on so many platforms, but I argue that this is important for OSHI considering its purpose.

dbwiddis commented 6 years ago

I'm all for platform testing but don't know a cheap/free way to do it. "mvn test" on all my VMs works now...

cilki commented 6 years ago

One thing we have to establish is whether it is meaningful to run platform tests on VMs or containers. I think a Driver that is tested on a bare metal machine would also work on a VM, but I'm not so sure the other way around.

dbwiddis commented 6 years ago

VMs, I believe, attempt to emulate bare metal as much as they can. Can't speak for containers. But there are corner cases that don't get tested; for example, https://github.com/oshi/oshi/issues/620 would never have been found in any "spin up" testing environment. I don't think this is as critical, though... we don't change things often, so once the drivers are written and tested thoroughly enough, they should be fine.

YoshiEnVerde commented 5 years ago

Without money involved, we might have to depend on contributors for platform testing.

I wouldn't want to burden anybody with having to test all the drivers on an ever-growing list of supported architectures every time we fix or update them.

We'll always have most Windows and *nix architectures covered (both bare metal and virtualized), but we'll struggle with the rarer/enterprise architectures, like Solaris.

YoshiEnVerde commented 5 years ago

If we can build some easy/cheap way to run a test kit against the drivers for a specific architecture, we could ask people to attach a test report to any issue they raise.

I'm thinking of something that takes no more than a single jar download and 10~15 minutes of execution to produce a report.

Maybe a simple piece of code that just instantiates OSHI, identifies the architecture, then runs through every method available for that arch? It would then build a simple checklist of every call, each entry either a simple "passed" or a basic description of the failure. It wouldn't cover every test case, of course, but we'd at least have a general positive test of each driver in use...
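A rough sketch of that idea, assuming OSHI's existing `SystemInfo` entry point (a real kit would recurse deeper than one level of getters and format the report for pasting into an issue):

```java
import java.lang.reflect.Method;

import oshi.SystemInfo;

/**
 * Minimal smoke-test sketch: walk every zero-argument getter one level
 * deep and print a PASS/FAIL line per call.
 */
public class DriverSmokeTest {
    public static void main(String[] args) {
        SystemInfo si = new SystemInfo();
        report(si.getHardware());
        report(si.getOperatingSystem());
    }

    static void report(Object component) {
        for (Method m : component.getClass().getMethods()) {
            if (m.getParameterCount() == 0 && m.getName().startsWith("get")
                    && m.getDeclaringClass() != Object.class) {
                try {
                    m.invoke(component);
                    System.out.println("PASS " + m.getName());
                } catch (Exception e) {
                    System.out.println("FAIL " + m.getName() + ": " + e.getCause());
                }
            }
        }
    }
}
```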

cilki commented 5 years ago

I like the idea, but we want to know about more than just failures. A driver may also return an incorrect result. To detect that, either the user enters their system information manually from a known source (hopefully not from OSHI itself), or the reporting application calculates it and compares on the fly.

The first option sounds much better because of the maintenance burden the second would create.
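A minimal sketch of the first option, assuming a hypothetical `expected.properties` file the user fills in from a trusted source such as the BIOS or vendor docs (the key name and the OSHI calls here are illustrative):

```java
import java.io.FileReader;
import java.io.IOException;
import java.util.Properties;

import oshi.SystemInfo;

/** Compare a user-supplied known value against what OSHI reports. */
public class KnownValueCheck {
    public static void main(String[] args) throws IOException {
        Properties expected = new Properties();
        // e.g. the file contains: cpu.physicalCount=8
        expected.load(new FileReader("expected.properties"));

        SystemInfo si = new SystemInfo();
        int actual = si.getHardware().getProcessor().getPhysicalProcessorCount();
        String want = expected.getProperty("cpu.physicalCount");

        System.out.println("cpu.physicalCount: expected=" + want
                + " actual=" + actual
                + (String.valueOf(actual).equals(want) ? " PASS" : " FAIL"));
    }
}
```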

Seems like there should be something already out there that does this. It needs to be serverless and must produce a report that can be included in a gist or issue.

dbwiddis commented 5 years ago

A common request I make of users reporting a WMI-based bug is the output of the equivalent wmic command line. I'd like that for the "WMI" drivers, at least... in general, a lot of the "native" information we fetch has command-line equivalents that are useful for debugging.
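For example, a report tool could capture the wmic output alongside the driver result with nothing more than `ProcessBuilder`; a sketch, using `wmic cpu get Name,NumberOfCores` as one such command-line equivalent (Windows only, where wmic is available):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

/** Capture wmic output so it can be attached to a bug report. */
public class WmicCapture {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Command-line equivalent of the WMI query behind the processor driver.
        Process p = new ProcessBuilder("wmic", "cpu", "get", "Name,NumberOfCores")
                .redirectErrorStream(true)
                .start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            r.lines().forEach(System.out::println);
        }
        p.waitFor();
    }
}
```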