Until now, we only pushed benchmarks for optimized code to the GitHub pages. That was the wrong decision: we are now curious to understand (a) where mlkem-native's pure-C performance stands in comparison to the reference implementation, and (b) how it has evolved over time, especially in light of the restructurings we made for the CBMC proofs.
Ultimately, we should retroactively conduct and push non-opt benchmarks. For now, this commit merely enables pushing benchmark results to GH pages for non-optimized code.
Care needs to be taken not to clash with existing optimized benchmarks by changing their names. Thus, we keep the previous names (which matched the target name, e.g. "Graviton 3") and append " (no-opt)" in case we benchmarked the C backend.
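The naming rule described above could be sketched as follows (the function and parameter names are illustrative only, not the actual workflow code):

```python
def benchmark_name(target: str, backend: str) -> str:
    """Derive the name under which benchmark results are pushed to GH pages.

    Optimized runs keep the previous target-based name (e.g. "Graviton 3")
    so existing benchmark history is not disturbed; runs using the pure-C
    backend get " (no-opt)" appended.

    Note: this is a sketch of the naming convention, not the workflow code.
    """
    if backend == "c":
        return target + " (no-opt)"
    return target
```

For example, an optimized run on Graviton 3 keeps the name "Graviton 3", while a pure-C run on the same target is recorded as "Graviton 3 (no-opt)".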