Why
There was already a little parallel execution in Doctor within individual checks (e.g., deep-dependency searching), but the steps themselves still ran sequentially. The longest individual check typically takes around 6-7 seconds on my test package. There are about three checks that take 5-7 seconds, resulting in about 15 seconds of execution (the others are near instant), not counting start-up. If we ran these in parallel, we could bring that whole sequence down to ~7 seconds.
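The timing argument can be sketched as follows (a minimal illustration, not the actual Doctor code; the check names are hypothetical stand-ins and the durations are scaled down from seconds to milliseconds): with `Promise.all`, total wall time is roughly the longest check rather than the sum of all of them.

```typescript
// Three slow "checks" (durations scaled down). Run sequentially they
// would take ~180ms (the sum); run concurrently they take ~70ms (the max).
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

const checks = [
  async () => { await sleep(50); return "dependency versions"; },
  async () => { await sleep(60); return "deep dependency search"; },
  async () => { await sleep(70); return "config validation"; },
];

async function main() {
  const start = Date.now();
  const names = await Promise.all(checks.map((check) => check()));
  const elapsed = Date.now() - start;
  // elapsed is close to the longest check (~70ms), well under the sum (~180ms)
  console.log(names.length, elapsed < 180);
}
main();
```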
How
Refactored into a parallel runner that separates output into two sections:
- a summary of each check that appears as it completes, to show progress (e.g., name of check, pass/fail)
- detailed issues/advice at the end, only for checks that found issues
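The two-section pattern can be sketched like this (hypothetical check/result shapes, not the real expo-doctor types): a summary line prints as each check settles, while details are collected and printed together at the end.

```typescript
// Hypothetical result shape for illustration.
interface CheckResult {
  name: string;
  passed: boolean;
  advice?: string;
}

async function runChecks(checks: Array<() => Promise<CheckResult>>): Promise<CheckResult[]> {
  const issues: CheckResult[] = [];
  // Attach a logger to each promise so summary lines appear in
  // completion order; wait for all before printing the details section.
  await Promise.all(
    checks.map((check) =>
      check().then((result) => {
        console.log(`${result.passed ? "✔" : "✖"} ${result.name}`);
        if (!result.passed) issues.push(result);
      })
    )
  );
  for (const issue of issues) {
    console.log(`\n${issue.name}:\n${issue.advice}`);
  }
  return issues;
}

// Usage: one passing and one failing check.
runChecks([
  async () => ({ name: "Check package.json", passed: true }),
  async () => ({
    name: "Check dependencies",
    passed: false,
    advice: "Run npx expo install --fix",
  }),
]);
```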
The summary is technically non-deterministic, in that each check can appear at a different position in the sequence depending on how fast it executed. I did this so EAS Build log timestamps would still be useful for indicating whether a particular check took too long. However, I did add individual check timings, viewable via EXPO_DEBUG=1 (added after an interaction with a Discord user). I could also see making these timestamps the default and always showing the check results sequentially.
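The per-check timing could look something like this (a sketch; `EXPO_DEBUG` is the real Expo debug environment variable, but the wrapper itself is hypothetical):

```typescript
// Wrap a check so its duration is logged only when EXPO_DEBUG is set.
async function timed<T>(name: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  const result = await fn();
  if (process.env.EXPO_DEBUG) {
    console.log(`${name} took ${Date.now() - start}ms`);
  }
  return result;
}

// Usage: EXPO_DEBUG=1 npx expo-doctor would print the timing line.
timed("demo check", async () => 42).then((value) => console.log(value));
```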
If there's an unexpected error (e.g., a bug in the check, a network failure while fetching dependencies), it appears inline with the summary. I wanted to keep treating these differently from cases where the check simply didn't pass (i.e., the check ran and found an issue, rather than bailing out with an exception), and to keep the error in close proximity to the check name.
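The distinction between "check found an issue" and "check itself blew up" can be sketched with a small discriminated union (hypothetical shapes, not the real expo-doctor types); the unexpected error stays right next to the check name in the summary line.

```typescript
type Outcome =
  | { status: "passed" }
  | { status: "failed" }                  // check ran, found an issue
  | { status: "errored"; error: Error };  // check itself threw

async function runCheck(fn: () => Promise<boolean>): Promise<Outcome> {
  try {
    return (await fn()) ? { status: "passed" } : { status: "failed" };
  } catch (e) {
    return { status: "errored", error: e as Error };
  }
}

function summaryLine(name: string, outcome: Outcome): string {
  switch (outcome.status) {
    case "passed":
      return `✔ ${name}`;
    case "failed":
      return `✖ ${name}`;
    case "errored":
      // Keep the unexpected error in close proximity to the check name.
      return `✖ ${name} (error: ${outcome.error.message})`;
  }
}

// Usage: a network failure surfaces inline with the summary.
runCheck(async () => {
  throw new Error("getaddrinfo ENOTFOUND registry.npmjs.org");
}).then((outcome) => console.log(summaryLine("Check npm dependencies", outcome)));
```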
Test Plan
[x] Compared new output with old.
[x] Ran with CI=1 to simulate EAS Build
[x] Ran with network off to see network errors inline with test result summary
[x] Compared with a stopwatch (7-8 seconds vs. 16-20, not counting npx start-up)