polydawn / repeatr

Repeatr: reproducible, hermetic computation. Provision containers from content-addressable snapshots; run using familiar containers (e.g. runc); store outputs in content-addressable form too! JSON API; connect your own pipelines! (Or, use github.com/polydawn/stellar for pipelines!)
https://repeatr.io
Apache License 2.0

Local file silo conflict #71

Closed: TripleDogDare closed this issue 6 years ago

TripleDogDare commented 8 years ago

Using the same local file path for different types of transmat will cause errors, e.g. a formula outputs a dir type and then tries to use the same item as a tar type against the same wares directory.
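For illustration, the clash looks roughly like the sketch below (field names and URI schemes are assumptions based on formulas of this era, not copied from gogs.frm): the dir transmat lays a ware down as a directory wares/<hash>/, while the tar transmat expects wares/<hash> to be a flat tarball, so a later use of the same hash against the same file silo fails. The log below shows the resulting failure at materialize time.

# Sketch only; field names and URI schemes are assumptions, not the exact schema.
#
# Step 1: an earlier formula saves an output via the dir transmat, which stores
# the ware as a directory in the local silo: wares/<hash>/
outputs:
  "/task/output":
    type: "dir"
    silo: "file://./wares/"

# Step 2: a later formula (here, gogs.frm) references the same hash as a
# tar-type input from the same silo; the tar transmat expects wares/<hash>
# to be a flat tarball and errors with "is a directory".
inputs:
  "/app/gogs":               # illustrative mount path, not from the real formula
    type: "tar"
    hash: "RoOPPC6llRYEMRE0F6U2F95a2_0i_YNTfPEZ7PEToFmfEkz7T6Gp4bJYjHup-2Bb"
    silo: "file://./wares/"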

sudo $(which repeatr) run gogs.frm >/dev/null
INFO[03-23|23:55:58] Job queued                               JobID=aemfo9ty-53b3oq2d-et8s3srp
INFO[03-23|23:55:58] Starting materialize for tar hash=H7NK-Nsm362N9c9hJ3PvQJWRIwJp6mUQ1zxBX5R0qCzUQcR3jhgtLzO2PP1BdLe8 JobID=aemfo9ty-53b3oq2d-et8s3srp
INFO[03-23|23:55:58] Finished materialize for tar hash=H7NK-Nsm362N9c9hJ3PvQJWRIwJp6mUQ1zxBX5R0qCzUQcR3jhgtLzO2PP1BdLe8 JobID=aemfo9ty-53b3oq2d-et8s3srp
INFO[03-23|23:55:58] Starting materialize for tar hash=RoOPPC6llRYEMRE0F6U2F95a2_0i_YNTfPEZ7PEToFmfEkz7T6Gp4bJYjHup-2Bb JobID=aemfo9ty-53b3oq2d-et8s3srp
WARN[03-23|23:55:58] Errored during materialize for tar hash=RoOPPC6llRYEMRE0F6U2F95a2_0i_YNTfPEZ7PEToFmfEkz7T6Gp4bJYjHup-2Bb JobID=aemfo9ty-53b3oq2d-et8s3srp error="WarehouseIOError: could not start decompressing: read wares/RoOPPC6llRYEMRE0F6U2F95a2_0i_YNTfPEZ7PEToFmfEkz7T6Gp4bJYjHup-2Bb: is a directory"
INFO[03-23|23:55:58] Starting materialize for tar hash=aLMH4qK1EdlPDavdhErOs0BPxqO0i6lUaeRE4DuUmnNMxhHtF56gkoeSulvwWNqT JobID=aemfo9ty-53b3oq2d-et8s3srp
INFO[03-23|23:55:58] Finished materialize for tar hash=aLMH4qK1EdlPDavdhErOs0BPxqO0i6lUaeRE4DuUmnNMxhHtF56gkoeSulvwWNqT JobID=aemfo9ty-53b3oq2d-et8s3srp
INFO[03-23|23:55:58] Job starting                             JobID=aemfo9ty-53b3oq2d-et8s3srp
EROR[03-23|23:55:58] Job execution errored                    JobID=aemfo9ty-53b3oq2d-et8s3srp error="WarehouseIOError: could not start decompressing: read wares/RoOPPC6llRYEMRE0F6U2F95a2_0i_YNTfPEZ7PEToFmfEkz7T6Gp4bJYjHup-2Bb: is a directory"
Exit: job execution errored: WarehouseIOError: could not start decompressing: read wares/RoOPPC6llRYEMRE0F6U2F95a2_0i_YNTfPEZ7PEToFmfEkz7T6Gp4bJYjHup-2Bb: is a directory
$ ls -hla wares | awk {'print $1" "$5"\t"$6" "$7" "$8" "$9'}
total      
drwxr-xr-x 4.0K Mar 23 23:52 .
drwxr-xr-x 4.0K Mar 23 23:42 ..
drwxr-xr-x 4.0K Mar 23 23:51 H8mtN--cw-fJWoQYhKpxIeKIZv3abTsBIBko00rnQjh8JaL6aV1a2eVZD-387INE
drwxr-xr-x 4.0K Mar 23 23:43 pRbQRKiqq8PWwHXfbkInDXb2DrE7Cw-E9QuQaBLJ6coo8-n9QOARbDpkm8mS2YiN
drwxr-xr-x 4.0K Mar 23 23:44 RoOPPC6llRYEMRE0F6U2F95a2_0i_YNTfPEZ7PEToFmfEkz7T6Gp4bJYjHup-2Bb
-rw-r--r-- 432M Mar 23 23:55 .tmp.upload.aemff4r1-bqcod6yy-p8ydvcw4
warpfork commented 7 years ago

This is funny!

What should the behavior be in this case? I can't think of anything except for making sure the error is reasonable.
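One way to make the error more reasonable, sketched in plain Go below (a hypothetical illustration, not repeatr's actual transmat code; checkLocalWare and its parameters are invented), would be to stat the ware's path in the local silo up front and report the layout mismatch explicitly instead of surfacing the raw "is a directory" read error:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// checkLocalWare is an invented helper for illustration: siloDir is the local
// silo path (e.g. "./wares"), hash is the ware's content hash, and wantType is
// the transmat type being requested ("tar", "dir", ...).
func checkLocalWare(siloDir, hash, wantType string) error {
	path := filepath.Join(siloDir, hash)
	fi, err := os.Stat(path)
	if os.IsNotExist(err) {
		return fmt.Errorf("ware %s not found in local silo %s", hash, siloDir)
	}
	if err != nil {
		return err
	}
	switch {
	case fi.IsDir() && wantType != "dir":
		return fmt.Errorf("silo conflict: %s holds ware %s as a directory (dir layout), but it was requested as type %q", siloDir, hash, wantType)
	case !fi.IsDir() && wantType == "dir":
		return fmt.Errorf("silo conflict: %s holds ware %s as a flat file (packed layout), but it was requested as type %q", siloDir, hash, wantType)
	}
	return nil // on-disk layout matches the requested type
}

func main() {
	err := checkLocalWare("./wares", "RoOPPC6llRYEMRE0F6U2F95a2_0i_YNTfPEZ7PEToFmfEkz7T6Gp4bJYjHup-2Bb", "tar")
	if err != nil {
		fmt.Println(err)
	}
}

Whatever the check ends up looking like, the key point is the same: name both the on-disk layout and the requested type in the message, so it's immediately visible that two transmat types are sharing one local silo path.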

warpfork commented 6 years ago

Bulk issue close. We're closing in on the cut-over to the "r200" branch, a major version jump and a substantial rewrite of core components. Most of this error-handling code has changed so much (hopefully for the better!) that this issue has simply been left in the dust.

If the issue still exists in the new versions, please do open a new issue with any new symptoms :)

(I don't think this issue exists in the new system. And if it does, the bug will now actually be in https://github.com/polydawn/rio , since that system is now split out.)