Closed — jonkeane closed this issue 2 years ago
surface that in the results for now
Was the thinking here possibly to add an argument to run_benchmark
that would directly output the scripts as files for inspection? Maybe partitioned by the parameters of the suite?
Kind of. I was thinking of taking the script we create at https://github.com/ursacomputing/arrowbench/blob/d74646db4a5982b320c151725c84d294ca8d731e/R/run.R#L122-L131 and writing it out to the JSON that is constructed, similar to what we already do with the console output at https://github.com/ursacomputing/arrowbench/blob/d74646db4a5982b320c151725c84d294ca8d731e/R/run.R#L289-L296
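The shape of that change could look something like the sketch below. This is Python purely for illustration (arrowbench itself is R), and the field names are hypothetical, not arrowbench's actual JSON schema — the point is just that the generated script text rides along in the same result payload as the console output:

```python
import json

def build_result(script_lines, console_output, stats):
    """Assemble a result payload that carries the generated benchmark
    script alongside the already-captured console output, so a reader
    can reproduce the run locally. All field names are hypothetical."""
    return {
        "stats": stats,                     # timing results, as today
        "output": console_output,           # console output, as today
        "script": "\n".join(script_lines),  # newly surfaced script text
    }

result = build_result(
    script_lines=["library(arrowbench)", "# ... benchmark body ..."],
    console_output="placebo: ok",
    stats={"real_time": 1.23},
)
print(json.dumps(result, indent=2))
```

Conbench would then only need to learn about the one extra key when storing and displaying results.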
And then we would need to make a slight adjustment to conbench to take that output + save it (+ display it)
Since we are using a subprocess to execute the benchmark, we should have a pretty clean way of generating a script that someone could run to get similar results locally.
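The subprocess approach is what makes this cheap: the script already has to be fully self-contained to run in a fresh process, so the same text can be saved verbatim. A minimal sketch of that pattern (again in Python as an illustration, with hypothetical names — arrowbench shells out to R, not Python):

```python
import os
import subprocess
import sys
import tempfile

def run_script(script_text):
    """Write the generated script to a temp file, execute it in a
    subprocess, and return both the captured output and the script
    itself so it can be surfaced in the results. Illustrative only."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(script_text)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, check=True,
        )
        # The script text is exactly what someone would run locally
        # to reproduce the result.
        return {"script": script_text, "output": proc.stdout}
    finally:
        os.unlink(path)
```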
Let's surface that in the results for now + we can make an update to conbench to store + display that later