Performance is one of the intended features of this library, so it should also be tested as part of the CI cycle on Travis. However, this is not trivial, because the expected performance varies from (virtual) machine to (virtual) machine. If the CI machine were always the same, we could simply store previous results and compare against them, but we can't really guarantee that.
Ideas?
- [ ] use an independent benchmark code as a baseline
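The baseline idea could look roughly like this: time a fixed, independent workload on the same machine, then express the library benchmark as a ratio to that baseline, so the number is comparable across differently-powered CI hosts. This is a minimal sketch; `baseline`, `library_op`, and the `MAX_RATIO` budget are hypothetical placeholders, not part of this library.

```python
import timeit

def baseline():
    # Machine-speed probe: a fixed pure-Python workload that calibrates
    # how fast this particular (virtual) machine is.
    s = 0
    for i in range(10_000):
        s += i * i
    return s

def library_op():
    # Placeholder for the library operation under test.
    return sorted(range(5_000), key=lambda x: -x)

# min() over repeats reduces noise from CI scheduling jitter.
baseline_t = min(timeit.repeat(baseline, number=100, repeat=5))
library_t = min(timeit.repeat(library_op, number=100, repeat=5))

ratio = library_t / baseline_t
print(f"relative cost: {ratio:.2f}x baseline")

# Store a threshold on the ratio, not on absolute time, so the check
# flags regressions regardless of how fast the CI machine happens to be.
MAX_RATIO = 50.0  # hypothetical performance budget
assert ratio < MAX_RATIO, f"possible performance regression: {ratio:.2f}x"
```

The ratio is still only an approximation (cache sizes, Python versions, and VM noise all leak in), so the threshold would need generous slack.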