Closed rijnhard closed 3 years ago
@rijnhard I guess that you are using tmp-promise?
See https://github.com/benjamingr/tmp-promise/blob/3154ff6a7cf59d21bb8047814b4c10c978417321/index.js#L20
where @benjamingr uses await temporary.cleanup()
instead of just calling it.
Perhaps this is the solution to your problem?
I think this is unrelated (I'm not sure why I await it, btw). I also don't think the OP is using tmp-promise but is promisifying it themselves.
I'll put on my "node streams maintainer" hat and not my "tmp-promise" maintainer hat for a second.
streams.pipeline takes a (non-optional) callback as its last argument, and you are passing a stream to it. I don't think that's supposed to work. I'm assuming you promisified it somehow? (If that's the pipeline above, you did not use it below, nor did you await its result.)
If you look at our docs they say:
stream.pipeline() will call stream.destroy(err) on all streams except:
- Readable streams which have emitted 'end' or 'close'.
- Writable streams which have emitted 'finish' or 'close'.
As the stream was already destroyed (and thus the file was closed), calling .cleanup again on an already disposed file is most likely an error (you can just remove it; pipeline will take care of it for you).
I think this (erroring) is probably the correct behaviour.
To be clear: this is very easy to "fix" in the tmp code, but I believe the current behaviour is correct.
Correct, I am not using tmp-promise, and I am promisifying it myself.
You are also correct in that pipeline is already closing the stream.
As a rule of thumb, I try to explicitly clean up where possible, especially when catching an error, because that's how memory leaks happen.
I do tend to agree with you in that tmp is not doing anything wrong per se, but...
It would be really nice if cleanup just didn't error if being called again, or if I could at least pass a flag to the factory function to tell it to not throw an error if the file was already cleaned up.
Maybe for debate's sake I'll ask: what purpose does throwing an error serve when attempting to clean up a file that is already cleaned up?
It covers a programmer bug: imagine you had two files and instead of cleaning up both, you clean up the first file twice. If that's an error, you see the bug quickly and fix it. If it's not an error, you get a leak.
It's very easy to wrap it with a .catch, but I think the current default is probably preferable.
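A sketch of that .catch wrapper, using a hypothetical makeCleanup stand-in rather than tmp's real cleanup callback (the stand-in only mimics the "already disposed" error being discussed):

```javascript
// Hypothetical stand-in for tmp's cleanup callback: it throws on a second
// call, mimicking the "already disposed" error.
function makeCleanup() {
  let done = false;
  return async () => {
    if (done) throw new Error('already cleaned up');
    done = true;
  };
}

async function run() {
  const cleanup = makeCleanup();
  await cleanup();                 // first call succeeds
  await cleanup().catch(() => {}); // second call: error swallowed by .catch
  return 'ok';
}
```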
@benjamingr the catch is that it's literally impossible to catch in userland.
No amount of try/catch will stop an unhandled error from being generated (at least no amount of my sacrificing goats at the altar of tmp managed to suppress this).
So even though I have a bit of a different view, I do agree that in your scenario of two files it could mask an error. That said, I'm now more convinced that I should either be able to wrap it in a try/catch or have a way of suppressing the error.
It doesn't suppress the error if you just .catch on the promise returned? 🤔
@rijnhard have you tried the option detachDescriptor?
Closing as this can easily be achieved with the detachDescriptor option.
@benjamingr thank you for your invaluable contribution, I really appreciated this!
Happy to help :]
@benjamingr I invited you to this issue so that @rijnhard would realize that there is the detachDescriptor option, which seems to have been lost on @rijnhard. Maybe we should work on the existing documentation to make it clearer.
Operating System:
NodeJS Version:
Tmp Version: 0.2.1
Expected Behavior: Safely handle cleanup for streams even if the file descriptor is already closed.
Experienced Behavior: