(See VBA-Dictionary for example)
I don't personally agree that performance benchmarking should be part of a testing framework; for that you have dedicated benchmarking suites like Benchmark.js. The role of TDD/BDD is to improve the software's design and raise code quality, not to ensure your code is fast.
I do agree that it would be handy to have test execution profiling, and I think it should be built into the SpecDefinition class so you can determine which tests are the slowest. However, I'm finding it hard to come up with an elegant way to add such functionality, since the actual values are evaluated during Spec creation.
Maybe:
With Specs.It("should add an item")
    Set Dict = CreateDictionary(UseNative)

    .StartTimer
    Dict.Add "A", 123
    .Expect(Dict("A")).ToEqual 123
    .StopTimer
End With
which doesn't look that elegant... Or maybe:
With Specs.It("should add an item")
    Set Dict = CreateDictionary(UseNative)

    .Given(Dict, "Add").WhenCalledWithArgs("A", 123).Expect(Dict("A")).ToEqual 123

    ' Other examples could be:
    ' .Given("Callback").WhenCalled().Expect("...").To...
    ' .Given(MyObj, "Function").WhenCalled().Expect("...").To...
    ' and if you really need to:
    ' .Given("MySubroutine").WhenCalled().LogTime
End With
which is easier to read, but makes me feel icky for some reason.
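For what it's worth, here's a minimal sketch of how the StartTimer/StopTimer hooks from the first example might be implemented (hypothetical members, not part of the current Excel-TDD API), using VBA's built-in Timer function:

' Inside a SpecDefinition-style class module (hypothetical)
Private pStartTime As Single
Public ElapsedSeconds As Single

Public Sub StartTimer()
    ' VBA.Timer returns seconds elapsed since midnight
    pStartTime = VBA.Timer
End Sub

Public Sub StopTimer()
    ElapsedSeconds = VBA.Timer - pStartTime
End Sub

Timer has fairly coarse resolution and wraps at midnight, so something like QueryPerformanceCounter would be needed for fine-grained measurements; this just illustrates where the hooks could live.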
@robodude666 Thanks for the comments! The second example is interesting; I'll ponder it more. I agree that this is out of scope for Excel-TDD; the goal was more to make sure that Excel-TDD provides some sort of foundation for enabling it.
It's been quite a while and I'm thinking this should be a separate library. There's no good way to get a pass or fail out of a speed test and I'd prefer to keep this library simpler.
Goal: Add performance timing to the test output (wouldn't necessarily be pass/fail, just info).
Possible designs:
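For example, a rough sketch of inline timing that only reports info (hypothetical usage; only Specs.It, CreateDictionary, and Expect are from the examples above, the rest is assumed):

' Rough sketch: time a spec manually and report elapsed time as info only
Dim StartTime As Single

With Specs.It("should add an item")
    StartTime = VBA.Timer
    Set Dict = CreateDictionary(UseNative)
    Dict.Add "A", 123
    .Expect(Dict("A")).ToEqual 123
    Debug.Print "should add an item: " & _
        Format$(VBA.Timer - StartTime, "0.000") & " s"
End With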