Azure / azure-openai-benchmark

Azure OpenAI benchmarking tool

Add saving of logs to disk & combining into CSV #13

Open michaeltremeer opened 11 months ago

michaeltremeer commented 11 months ago

The code adds the following:

michaeltremeer commented 11 months ago

Good points in the comments, and agreed that the combine_logs functionality is a stretch beyond the core purpose of the repo. One note, though (based on my own use): with shell redirection alone you lose any warnings or errors raised during the run, as well as the run parameters that make it easy to tell which config each run used (useful when you're analysing many permutations of shape, model and concurrency). You also lose the time the run finished or was terminated, which means the post-test period while requests are still draining can skew the numbers.

If you think it's valid and worth a change, I can resubmit just the first set of changes for saving a single run's logs; otherwise I can just submit the updated README for clarity.
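
As a rough illustration of what I mean (not the code in this PR), a single run's log could look something like the sketch below. run_fn, the field names and the JSON-lines layout are all just assumptions for the example:

# Illustrative only -- not the implementation in this PR. A minimal sketch of
# saving one run's stats to disk together with its parameters and finish time,
# so the post-test drain period can be excluded when analysing results.
import json
import time
from pathlib import Path

def run_with_log(run_fn, params: dict, output_dir: str = "logs") -> Path:
    out_dir = Path(output_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    log_path = out_dir / time.strftime("run-%Y%m%d-%H%M%S.jsonl")

    with log_path.open("w") as f:
        # Write the run configuration first so every log file is self-describing.
        f.write(json.dumps({"event": "run_start", "params": params}) + "\n")
        try:
            # run_fn stands in for whatever produces the per-interval stats dicts.
            for stats in run_fn(params):
                f.write(json.dumps({"event": "stats", **stats}) + "\n")
        finally:
            # Record the end time even if the run is interrupted, so draining
            # requests after termination don't silently skew the numbers.
            f.write(json.dumps({"event": "run_end", "finished_at": time.time()}) + "\n")
    return log_path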

technicianted commented 11 months ago

you do miss any warnings or errors that occurred during the run

You can redirect both stdout and stderr to the same file if needed:

$ python -m benchmark.bench load ... > my/output/dir/full_output.log 2>&1

or separate files:

$ python -m benchmark.bench load ... > my/output/dir/stats_output.log 2> my/output/dir/logs.log

Does that cover the use-case?

as well as any run parameters which make it easier to tell what the config was for each run

Yes I agree this is super useful. Let's add that in a separate PR.
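
Just to illustrate the kind of thing I mean, the run parameters could be echoed into the log at startup roughly like this. args is assumed to be the argparse Namespace the tool already builds, and the logger name is made up:

import json
import logging

def log_run_parameters(args) -> None:
    # Dump the argparse Namespace as a single JSON line so each log
    # records the exact config it was produced with.
    logging.getLogger("benchmark").info(
        "run parameters: %s", json.dumps(vars(args), default=str, sort_keys=True)
    )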

I can resubmit just the first set of changes for saving a single run's logs, otherwise I can just submit the updated README for clarity.

Yes, let's break them down. Generally, many smaller PRs are always better :)

Thanks!