Metric comparisons are in beta. Please report bugs under the issues tab.
To generate this report yourself, download the metrics artifact from the CI run, extract it, and run `python3 -m openlane.common.metrics compare-remote current --branch main --table-verbosity ALL --table-out ./tables_all.md`.
No changes to critical metrics were detected in the analyzed designs.
Full tables ► https://gist.github.com/openlane-bot/3561a724250adbfd2fb400dceb63f4f9