Yeah, that sounds correct to me. Thanks!
By the way, right now the whole way pipes work in dax is not exactly the same as in a shell. In https://github.com/denoland/deno_task_shell for example, it's implemented using real operating-system-level pipes (using https://crates.io/crates/os_pipe), but there's no such primitive available in Deno (e.g. no `pipe` on Linux or anonymous pipe on Windows). So, for example, doing something like `command1 && command2` and having both commands read from the provided stdin won't work. It might eventually be added to Deno, but for now we have this limitation.
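To make that concrete, here's a rough sketch of the gap at the Deno API level (using the `Deno.run` API this code is built on; the `head` commands are just placeholders):

```ts
// Each child's stdin can only be "inherit", a fresh "piped" writer owned by
// the parent, "null", or an existing resource id. There's no primitive for
// creating an anonymous OS pipe and handing its read end to several children,
// which is what deno_task_shell gets from the os_pipe crate.
const child1 = Deno.run({ cmd: ["head", "-n", "1"], stdin: "piped" });
const child2 = Deno.run({ cmd: ["head", "-n", "1"], stdin: "piped" });

// The parent has to copy bytes into each child's own pipe; the children never
// share a file descriptor, so the shell behavior of "the second command keeps
// reading where the first one stopped" can't be reproduced faithfully.
const data = new TextEncoder().encode("one\ntwo\n");
await child1.stdin.write(data);
child1.stdin.close();
await child2.stdin.write(data);
child2.stdin.close();
await child1.status();
await child2.status();
child1.close();
child2.close();
```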
Yeah, I gathered as much as soon as I started to look through the code for deno_task_shell and (briefly) gazed down the "threading in WebAssembly" rabbit hole...
Might have introduced a leak, now that I think about it:
```ts
} finally {
  completeController.abort();
  context.signal.removeEventListener("abort", abortListener);
  p.close();
  // pre-65d20b9: p.stdin?.close();
  p.stdout?.close();
  p.stderr?.close();
}
```
```ts
async function writeStdin(stdin: ShellPipeReader, p: Deno.Process, signal: AbortSignal) {
  if (typeof stdin === "string") {
    return;
  }
  await pipeReaderToWriter(stdin, p.stdin!, signal);
  p.stdin!.close();
}
```
Previously `p.stdin` was always closed in the `finally` block. After this change, if `typeof stdin === "string"`, `p.stdin` doesn't get closed.
I'll throw together a follow-up PR.
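For reference, one possible shape for that follow-up (just a sketch against the current `writeStdin`, not the actual PR; it only adds an optional close in the early-return branch):

```ts
async function writeStdin(stdin: ShellPipeReader, p: Deno.Process, signal: AbortSignal) {
  if (typeof stdin === "string") {
    // "inherit"/"null": nothing to pipe in, but close any piped handle that
    // does exist, since the finally block no longer does it.
    p.stdin?.close();
    return;
  }
  await pipeReaderToWriter(stdin, p.stdin!, signal);
  p.stdin!.close();
}
```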
In my current project, we have a few instances where we are using pipes in order to feed data to certain commands, e.g. `echo something | kubectl apply -f -` for applying Kubernetes objects. As `PipeSequence` is currently unsupported, I have tried converting them over to use the `.stdin()` method on the `$` helper. However, I have noticed when doing so that my processes seem to hang -- I can see the subprocess fire up, but it never exits.

In further debugging the issue, I was able to determine that the `stdin` stream was being written out OK; however, it seems that the command consuming the stream (`kubectl` in my case) was waiting for some sort of flush operation or EOF, and `kubectl` will just sit there.
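Roughly the pattern I've been converting to (a simplified sketch, not my exact code; the manifest contents and the little reader helper are placeholders, and I'm assuming `.stdin()` accepts a `Deno.Reader` here):

```ts
import $ from "https://deno.land/x/dax/mod.ts";

// Minimal Deno.Reader over a byte buffer, standing in for however the
// manifest text is actually sourced.
function readerFromBytes(bytes: Uint8Array): Deno.Reader {
  let offset = 0;
  return {
    read(p: Uint8Array): Promise<number | null> {
      if (offset >= bytes.length) return Promise.resolve(null); // EOF
      const n = Math.min(p.length, bytes.length - offset);
      p.set(bytes.subarray(offset, offset + n));
      offset += n;
      return Promise.resolve(n);
    },
  };
}

// Previously: echo something | kubectl apply -f -
const manifest = new TextEncoder().encode("something\n");
await $`kubectl apply -f -`.stdin(readerFromBytes(manifest));
// Expected: kubectl reads stdin to EOF and exits (likely complaining on stderr
// about the junk input). Observed: the subprocess starts but never exits.
```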
However, if I quickly hack at `executeCommandArgs` and move the `stdin.close()` for the subprocess from the `finally` block into the actual `writeStdin` function, so that the stream is closed once the content has been written, `kubectl` completes successfully, which is more in line with what I would expect to see -- `kubectl` complaining via `stderr` in this case, or successfully completing if I were feeding it real input instead of junk.

I can submit a PR for the above change easily enough (tests will pass with the change), but I wanted to double check first to make sure I wasn't missing something obvious with how to use this. It's been a long week...
Thanks!