Closed ErikSchierboom closed 2 months ago
Along with pinning the base image version as suggested, I wonder if we should standardize on specific versions for alpine, ubuntu, etc. My thought was that when we upgrade, we could then use a bot to check and create issues across all repos to perform the upgrade.
> Along with pinning the base image version as suggested, I wonder if we should standardize on specific versions for alpine, ubuntu, etc. My thought was that when we upgrade, we could then use a bot to check and create issues across all repos to perform the upgrade.
I don't think we will get much from that to be honest. We already use dependabot to create PRs to update dependencies to the latest version, but tracks should still feel free to pin to a specific version (e.g. because a later version breaks things).
You may want to add that the tooling should work with both `.meta` (when testing against the GitHub repo) and `.exercism` (when testing against actual solutions).
> You may want to add that the tooling should work with both `.meta` (when testing against the GitHub repo) and `.exercism` (when testing against actual solutions).

That's not entirely true though. `.exercism` only exists locally, when someone uses the CLI to download an exercise. Tooling won't ever see that directory.
What's the difference between `## Timeouts` (10s cut-off with `408 Request timeout`) and `## Configuration` (20s cut-off with `timeout`)?
Fixed
Is there a way to actually get `results.out`? Nope
Is there a way to inspect the `ops_error` of a run? Nope
In which cases are there ops_errors? (I assume these are different from the 408 and 413 etc) We have the following statuses:
- `TIMEOUT_STATUS = 408`
- `DID_NOT_EXECUTE_STATUS = 410`
- `EXCESSIVE_STDOUT_STATUS = 413`
- `EXCESSIVE_OUTPUT_STATUS = 460`
- `FAILED_TO_PREPARE_INPUT = 512`
- `UNKNOWN_ERROR_STATUS = 513`
The `DID_NOT_EXECUTE_STATUS` status happens when we somehow can't start the process. The `FAILED_TO_PREPARE_INPUT` status happens when we can't set up the files/directories for the solution to be processed.
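The status list above could be kept in a small lookup table so tooling can render a symbolic name for each code. A minimal sketch in Python; only the names and numeric values come from this thread, the mapping and helper function themselves are hypothetical:

```python
# Hypothetical lookup table mirroring the runner statuses quoted above.
# The names/values are from the thread; this module is illustrative only.
OPS_ERROR_STATUSES = {
    408: "TIMEOUT_STATUS",
    410: "DID_NOT_EXECUTE_STATUS",
    413: "EXCESSIVE_STDOUT_STATUS",
    460: "EXCESSIVE_OUTPUT_STATUS",
    512: "FAILED_TO_PREPARE_INPUT",
    513: "UNKNOWN_ERROR_STATUS",
}


def describe_status(code: int) -> str:
    """Return the symbolic name for a runner status code, falling back
    to UNKNOWN_ERROR_STATUS for anything unrecognized."""
    return OPS_ERROR_STATUSES.get(code, "UNKNOWN_ERROR_STATUS")
```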
I'm gonna merge this. We can tweak later!
> You may want to add that the tooling should work with both `.meta` (when testing against the GitHub repo) and `.exercism` (when testing against actual solutions).

> That's not entirely true though. `.exercism` only exists locally, when someone uses the CLI to download an exercise. Tooling won't ever see that directory.
Sure, but it is something I have run into: making Docker work, and then having the same internal commands fail on a downloaded solution. I think it's worth a mention in all the related places because it's unexpected.
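One way a track's tooling could guard against exactly this mismatch is to probe for whichever directory is present. A minimal sketch, assuming the exercise keeps its `config.json` under `.meta` (repo checkout) or `.exercism` (CLI download); the helper name is made up for illustration:

```python
from pathlib import Path


def find_exercise_config(exercise_dir: str) -> Path:
    """Locate an exercise's config.json, whether the exercise came from
    the track's GitHub repo (.meta) or was downloaded via the Exercism
    CLI (.exercism). Raises if neither directory holds a config."""
    root = Path(exercise_dir)
    for candidate in (".meta", ".exercism"):
        config = root / candidate / "config.json"
        if config.is_file():
            return config
    raise FileNotFoundError(f"No .meta or .exercism config found in {root}")
```

Checking both locations up front means the same internal commands behave identically in CI and on a downloaded solution.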
> The `DID_NOT_EXECUTE_STATUS` status happens when we somehow can't start the process. The `FAILED_TO_PREPARE_INPUT` status happens when we can't set up the files/directories for the solution to be processed.
Nice, that helps.
Is there a way to receive the 500 as seen in https://forum.exercism.org/t/typescript-test-runner-ops-error/12353? Because according to these docs that should never happen.
> Is there a way to receive the 500
What do you mean by "receive"?
See, read, download, explore, "get" the actual error (perhaps with stacktrace, if any).
```json
"status": "ops_error",
"message": "An unknown error occurred",
"message_html": "An unknown error occurred",
```
This is not debuggable for us.
Not right now, no
> Not right now, no
imo this (that it is not possible) is valuable to know! 🙂
I'll add it to the docs.