Closed: thekevinscott closed this issue 1 year ago.
Looks like dvc is complaining that some cache files are corrupted, deleting them, and then failing because of it. Is anything strange about your setup? Could you show your `dvc config --list` (make sure to censor anything private there)? Is this a shared machine with a shared dvc cache, maybe?
Thanks for the response, @efiop.
> Is anything strange about your setup?
Maybe the two gdrive remotes (one service account, one not)? I'm not sure if that's standard practice for open source projects that need to enable CI.
Nothing else seems strange that I can think of.
> Could you show your `dvc config --list`
(I'm not sure what here is sensitive, so I'll be overly conservative.)

```
remote.gdrive.url=gdrive://<GDRIVE_URL>
remote.gdrive.gdrive_use_service_account=false
remote.gdrive.gdrive_acknowledge_abuse=true
remote.s3.url=s3://<PATH_ON_S3>
remote.gdrive-service-account.url=gdrive://<GDRIVE_URL>
remote.gdrive-service-account.gdrive_use_service_account=true
core.remote=gdrive
core.autostage=true
```
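As an aside: since `gdrive_use_service_account` is set per-remote, CI has to be pointed at the service-account remote explicitly. A hypothetical Github Actions step might look like the following (the step label and secret name are assumptions, not from this repo; `GDRIVE_CREDENTIALS_DATA` is the environment variable dvc's gdrive remote reads service-account credentials from):

```yaml
# Hypothetical CI step: pull via the service-account remote.
# The secret name is an assumption.
- name: Pull models
  env:
    GDRIVE_CREDENTIALS_DATA: ${{ secrets.GDRIVE_CREDENTIALS_DATA }}
  run: dvc pull --remote gdrive-service-account
```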
> Is this a shared machine with a shared dvc cache maybe?
I don't believe so. It's using whatever container Github Actions provides, which I believe is not a shared machine (or it may be shared but I assume things run in containers). Is there a way to determine whether the dvc cache would be shared more broadly?
I was able to SSH into the machine, and I manually deleted `.dvc/cache` (which I think is the cache folder?) along with `.dvc/tmp`, then reran the command. I got the same error with what looks like the same missing files, so at least it's consistent.
@thekevinscott Since this just came up in #9640, is there any chance the failures are because your Github runner ran out of space?
That's a pretty interesting thought.
Output of `df -h`:
```
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/root        84G   69G    15G   83%  /
tmpfs           3.4G  172K   3.4G    1%  /dev/shm
tmpfs           1.4G  1.1M   1.4G    1%  /run
tmpfs           5.0M     0   5.0M    0%  /run/lock
/dev/sdb15      105M  6.1M    99M    6%  /boot/efi
/dev/sda1        14G  4.1G   9.0G   31%  /mnt
tmpfs           694M   12K   694M    1%  /run/user/1001
```
Running `du -h` reports 11G. That's certainly close to the 14G limit, but I think it still leaves some wiggle room.
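For what it's worth, a workflow can guard against low-disk failures by checking free space before the pull. A minimal sketch (the 1 GiB threshold is arbitrary, not anything dvc requires):

```shell
# Warn when the workspace filesystem has less than 1 GiB available
# (threshold is arbitrary; size it to your dvc cache).
avail_kb=$(df -Pk . | awk 'NR==2 {print $4}')
echo "available: ${avail_kb} KB"
if [ "${avail_kb}" -lt 1048576 ]; then
  echo "warning: less than 1 GiB free" >&2
fi
```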
@thekevinscott Do you mix 2.x and 3.x?
Original log is from gdrive, right?
> I got the same error - looks like the same missing files, so at least it's consistent.

That doesn't sound good; it seems like dvc thinks the files it is downloading from your gdrive remote are corrupted. We are currently having some problems with gdrive remotes on 3.x that might be related (maybe it thinks these files use the new hash while they don't): https://github.com/iterative/dvc-gdrive/issues/29
Could you try pinning to 2.x and see if that fixes it?
Original log is pulling from gdrive, yes.
For your first question - you're asking if I'm using different DVC versions? I'm using whichever version is installed from pip, which appears to be 3.x.
> Could you try pinning to 2.x and see if that fixes it?
Sure thing, I'll give this a shot tomorrow and report back. Any particular 2.x version or is the latest good?
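For anyone following along, pinning in a Github Actions workflow is a one-line install step. A sketch, where the `[gdrive]` extra and the exact version are assumptions:

```yaml
# Hypothetical workflow step pinning dvc (and its gdrive extra)
# to the 2.x release the files were added with
- name: Install DVC
  run: pip install "dvc[gdrive]==2.45.1"
```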
Hi, can you try installing `pydrive2==1.16` with `dvc==3` to see if that fixes the issue?
Thanks all! It is almost certainly a versioning issue.

The files were added (via `dvc add`) locally with `dvc==2.45.1`. Github Actions was using the latest `dvc`, which I believe was a 3.x.

I ran two experiments:

1. `dvc==2.45.1` on Github Actions (action run #3516): `dvc` pulls the models, no errors.
2. `dvc==3` with `pydrive2==1.16` on Github Actions (action run #3516): `dvc` fails with the same error as above.

So, presumably, because the models were added locally with `dvc==2.x`, they fail to be pulled on Github Actions with `dvc==3.x`. Running with `dvc==2.x` on Github Actions works, and running with `dvc==3.x` fails.

I assume then the next step is to follow the instructions here on upgrading from 2.x to 3.x, and re-add the models using the correct file hashing format?
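The hash-format change can be seen directly: dvc 2.x computed MD5 over text files after CRLF-to-LF normalization (the "dos2unix" hash), while 3.x hashes the raw bytes. A simplified shell illustration (real dvc also detects whether a file is binary before normalizing):

```shell
# A file with Windows line endings hashes differently under the two schemes
printf 'line one\r\nline two\r\n' > sample.txt
md5sum sample.txt                  # 3.x-style: raw bytes
sed 's/\r$//' sample.txt | md5sum  # 2.x-style "dos2unix": CRLF -> LF first
```

For pure-LF or binary content the two values coincide, but 2.x and 3.x also record the hash under different names, which is part of what the upgrade guide addresses.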
@thekevinscott, can you try with `dvc==3.2.1`?
Sure I can try that. Should I also pin pydrive2?
No, that should not be needed; dvc 3.2.1 requires `pydrive2>=1.16`.
I see that those files are now downloaded, but dvc says some files are corrupted (and they get deleted as a result, which most likely fails the checkout).
> I see that those files are now downloaded, but says some files are corrupted (and gets deleted as a result which most likely fails the checkout).
Yes, although the error message appears slightly differently (appears to be complaining about the whole model folder now, and not just the individual corrupted model pieces)
https://github.com/thekevinscott/UpscalerJS/actions/runs/5356228232/jobs/9715490197#step:5:1
Should I hold off following the steps in the upgrade guide for troubleshooting purposes? (I assume once I upgrade, I won't be able to reproduce the issue.)
This is an open source project so I'm not under any sort of time rush to fix if it's helpful to you all to leave it in a broken state, but I assume the fix for me is following the upgrade guide to get things to 3.x.
I had this issue. On my side there was a problem with a cache folder. I deleted it with `rm -r /Library/Caches/dvc/` (I'm on macOS); on Linux, try `rm -r /var/caches/dvc`.

I suspect this occurred to me while switching between branches and dvc v2 and v3.
@thekevinscott Does the latest dvc version work for you? We've changed some stuff to not share internal cache between major versions.
I'm seeing the same issue, persisting on 3.6.0 (at least on a machine that I just upgraded to that version after seeing it come up on 3.5.1), and have intermittently seen it back when all our machines were on v2 as well.
We use an S3 remote, but have a sync task set up to duplicate that entire bucket across to a read only bucket on GCP (to avoid repeatedly paying egress costs when pulling data to servers on GCP).
Root cause seems to be pulling from the secondary remote before the sync task has had a chance to run. On the first attempt `dvc pull` shows:

```
WARNING: Some of the cache files do not exist neither locally nor on remote. Missing cache files:
```

But on all subsequent attempts (even after the buckets have replicated successfully) it fails with:

```
ERROR: failed to pull data from the cloud - Checkout failed for following targets:
```
Running on a freshly cloned copy of the repo seems to work fine, so I'm assuming something in the state of the original copy of the repo is getting messed up by trying to pull when the files are missing from the cloud, and then it's not successfully re-checking to see if they've appeared on subsequent runs?
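For context, the two-remote setup looks roughly like this in `.dvc/config` (the bucket names and the mirror remote's name here are placeholders, not our real ones):

```ini
[core]
    remote = s3
['remote "s3"']
    url = s3://primary-bucket/dvc-store
['remote "gcs-mirror"']
    # read-only copy on GCP, populated by a scheduled sync task
    url = gs://mirror-bucket/dvc-store
```

Servers on GCP then pull with `dvc pull --remote gcs-mirror` to avoid egress costs.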
Possibly related: https://github.com/iterative/dvc/issues/9651 https://github.com/iterative/dvc/issues/9730
Is it possible that once fetching/pulling data fails once, DVC never tries to pull it again?
could be related to #9826
Closing this as duplicate of https://github.com/iterative/dvc/issues/9651
This should be resolved in the latest DVC release (3.14.0 or later). In github actions CI you should not need to do anything other than updating DVC.
On non-CI machines (for @gtebbutt and @WilliamHarvey97) you will need to remove the `Repo.site_cache_dir` folder (shown in `dvc doctor` output) after updating DVC, and then retry your pull:

```
rm -r /var/tmp/dvc/repo/...
dvc pull
```
If you still see this issue after updating and clearing the site cache feel free to re-open this ticket.
Still seeing this error for `3.14.0`:
```
...
2023-08-11 01:56:00,558 DEBUG: failed to create '/home/runner/work/UpscalerJS/UpscalerJS/models/maxim-experiments/models/deraining/64/group1-shard3of4.bin' from '/home/runner/work/UpscalerJS/UpscalerJS/dvc-cache-dir-jun-23/4d/b6e95347a959188b928124f6184c43' - [Errno 2] No such file or directory: '/home/runner/work/UpscalerJS/UpscalerJS/dvc-cache-dir-jun-23/4d/b6e95347a959188b928124f6184c43'
Traceback (most recent call last):
  File "/home/runner/.local/lib/python3.10/site-packages/dvc_objects/fs/generic.py", line 331, in transfer
    _try_links(
  File "/home/runner/.local/lib/python3.10/site-packages/dvc_objects/fs/generic.py", line 273, in _try_links
    _link(link, from_fs, from_path, to_fs, to_path)
  File "/home/runner/.local/lib/python3.10/site-packages/dvc_objects/fs/generic.py", line 62, in _link
    func(from_path, to_path)
  File "/home/runner/.local/lib/python3.10/site-packages/dvc_objects/fs/base.py", line 381, in link
    return self.fs.link(from_info, to_info)
  File "/home/runner/.local/lib/python3.10/site-packages/dvc_objects/fs/local.py", line 160, in link
    if self.size(path1) == 0:
  File "/home/runner/.local/lib/python3.10/site-packages/fsspec/spec.py", line 680, in size
    return self.info(path).get("size", None)
  File "/home/runner/.local/lib/python3.10/site-packages/dvc_objects/fs/local.py", line 42, in info
    return self.fs.info(path)
  File "/home/runner/.local/lib/python3.10/site-packages/fsspec/implementations/local.py", line 87, in info
    out = os.stat(path, follow_symlinks=False)
FileNotFoundError: [Errno 2] No such file or directory: '/home/runner/work/UpscalerJS/UpscalerJS/dvc-cache-dir-jun-23/4d/b6e95347a959188b928124f6184c43'
2023-08-11 01:56:00,559 DEBUG: failed to create '/home/runner/work/UpscalerJS/UpscalerJS/models/maxim-experiments/models/enhancement/64/group1-shard1of4.bin' from '/home/runner/work/UpscalerJS/UpscalerJS/dvc-cache-dir-jun-23/2c/a968fedce6c3ae699d6321af343243' - [Errno 2] No such file or directory: '/home/runner/work/UpscalerJS/UpscalerJS/dvc-cache-dir-jun-23/2c/a968fedce6c3ae699d6321af343243'
Traceback (most recent call last):
  File "/home/runner/.local/lib/python3.10/site-packages/dvc_objects/fs/generic.py", line 331, in transfer
    _try_links(
  File "/home/runner/.local/lib/python3.10/site-packages/dvc_objects/fs/generic.py", line 273, in _try_links
    _link(link, from_fs, from_path, to_fs, to_path)
  File "/home/runner/.local/lib/python3.10/site-packages/dvc_objects/fs/generic.py", line 62, in _link
    func(from_path, to_path)
  File "/home/runner/.local/lib/python3.10/site-packages/dvc_objects/fs/base.py", line 381, in link
    return self.fs.link(from_info, to_info)
  File "/home/runner/.local/lib/python3.10/site-packages/dvc_objects/fs/local.py", line 160, in link
    if self.size(path1) == 0:
  File "/home/runner/.local/lib/python3.10/site-packages/fsspec/spec.py", line 680, in size
    return self.info(path).get("size", None)
  File "/home/runner/.local/lib/python3.10/site-packages/dvc_objects/fs/local.py", line 42, in info
    return self.fs.info(path)
  File "/home/runner/.local/lib/python3.10/site-packages/fsspec/implementations/local.py", line 87, in info
    out = os.stat(path, follow_symlinks=False)
FileNotFoundError: [Errno 2] No such file or directory: '/home/runner/work/UpscalerJS/UpscalerJS/dvc-cache-dir-jun-23/2c/a968fedce6c3ae699d6321af343243'
2023-08-11 01:56:00,559 DEBUG: failed to create '/home/runner/work/UpscalerJS/UpscalerJS/models/maxim-experiments/models/enhancement/256/group1-shard1of4.bin' from '/home/runner/work/UpscalerJS/UpscalerJS/dvc-cache-dir-jun-23/18/39d8b3fb0e5aa35e40d6e21508f57c' - [Errno 2] No such file or directory: '/home/runner/work/UpscalerJS/UpscalerJS/dvc-cache-dir-jun-23/18/39d8b3fb0e5aa35e40d6e21508f57c'
Traceback (most recent call last):
  File "/home/runner/.local/lib/python3.10/site-packages/dvc_objects/fs/generic.py", line 331, in transfer
    _try_links(
  File "/home/runner/.local/lib/python3.10/site-packages/dvc_objects/fs/generic.py", line 273, in _try_links
    _link(link, from_fs, from_path, to_fs, to_path)
  File "/home/runner/.local/lib/python3.10/site-packages/dvc_objects/fs/generic.py", line 62, in _link
    func(from_path, to_path)
  File "/home/runner/.local/lib/python3.10/site-packages/dvc_objects/fs/base.py", line 381, in link
    return self.fs.link(from_info, to_info)
  File "/home/runner/.local/lib/python3.10/site-packages/dvc_objects/fs/local.py", line 160, in link
    if self.size(path1) == 0:
  File "/home/runner/.local/lib/python3.10/site-packages/fsspec/spec.py", line 680, in size
    return self.info(path).get("size", None)
  File "/home/runner/.local/lib/python3.10/site-packages/dvc_objects/fs/local.py", line 42, in info
    return self.fs.info(path)
  File "/home/runner/.local/lib/python3.10/site-packages/fsspec/implementations/local.py", line 87, in info
    out = os.stat(path, follow_symlinks=False)
FileNotFoundError: [Errno 2] No such file or directory: '/home/runner/work/UpscalerJS/UpscalerJS/dvc-cache-dir-jun-23/18/39d8b3fb0e5aa35e40d6e21508f57c'
2023-08-11 01:56:01,373 DEBUG: Removing '/home/runner/work/UpscalerJS/UpscalerJS/models/maxim-experiments/models'
A models/esrgan-thick/models/
A models/maxim-denoising/models/
A models/maxim-retouching/models/
A models/maxim-deblurring/models/
A models/esrgan-slim/models/
A models/maxim-dehazing-outdoor/models/
A models/default-model/models/
A models/maxim-deraining/models/
A models/esrgan-medium/models/
A models/maxim-enhancement/models/
A models/pixel-upsampler/models/
A models/esrgan-legacy/models/
A models/maxim-dehazing-indoor/models/
A models/esrgan-experiments/models/
14 files added and 2114 files fetched
2023-08-11 01:56:01,754 ERROR: failed to pull data from the cloud - Checkout failed for following targets:
models/maxim-experiments/models
Is your cache up to date?
<https://error.dvc.org/missing-files>
Traceback (most recent call last):
  File "/home/runner/.local/lib/python3.10/site-packages/dvc/commands/data_sync.py", line 31, in run
    stats = self.repo.pull(
  File "/home/runner/.local/lib/python3.10/site-packages/dvc/repo/__init__.py", line 64, in wrapper
    return f(repo, *args, **kwargs)
  File "/home/runner/.local/lib/python3.10/site-packages/dvc/repo/pull.py", line 43, in pull
    stats = self.checkout(
  File "/home/runner/.local/lib/python3.10/site-packages/dvc/repo/__init__.py", line 64, in wrapper
    return f(repo, *args, **kwargs)
  File "/home/runner/.local/lib/python3.10/site-packages/dvc/repo/checkout.py", line 208, in checkout
    raise CheckoutError([relpath(out_path) for out_path in failed], stats)
dvc.exceptions.CheckoutError: Checkout failed for following targets:
models/maxim-experiments/models
Is your cache up to date?
<https://error.dvc.org/missing-files>
2023-08-11 01:56:01,767 DEBUG: Analytics is enabled.
2023-08-11 01:56:01,808 DEBUG: Trying to spawn '['daemon', '-q', 'analytics', '/tmp/tmpyoi_he4b']'
2023-08-11 01:56:01,812 DEBUG: Spawned '['daemon', '-q', 'analytics', '/tmp/tmpyoi_he4b']'
Error: Process completed with exit code 1.
```
Pinning to `2.45.1` works fine and is what I've been doing to get around the bug. On my todo list was to upgrade my local `dvc` to 3, which I was assuming (hoping?) would resolve the version mismatch issues.
Don't necessarily want to keep this ticket open as I've found a workaround and will be upgrading locally soon, but I'm happy to reopen it if you'd like to debug it further.
I realize I forgot to clear the cache in CI - let me do that and I'll report back with the results.
Same issue as above with cache cleared for `3.14.0`.
https://github.com/thekevinscott/UpscalerJS/actions/runs/5831901238/job/15816243365?pr=842
@thekevinscott Seems related to https://github.com/iterative/dvc/issues/9733 . Could you try dvc 3.15.0, please?
`3.15.0` works like a charm! No errors.
Bug Report
Description
On Github Actions, I'm receiving the following error message during a `dvc pull` (the full log is here):

It appears that a subset of files fails to be pulled.
Things I've tried: running `dvc pull` locally, without any issues; all models get pulled successfully.

Reproduce
I'm not quite sure how to reproduce, as this is only happening on Github Actions. Here is a sample run where it happens. I cannot reproduce locally.
Environment information
Output of `dvc doctor`:

I'd be happy to provide any other information to help in debugging this. I'm not sure of the best way to troubleshoot, as the issue only seems to appear in Github Actions.
UPDATE
One additional thing that might be useful: there are three remotes associated with this repo:

- `gdrive` (a regular Google Drive)
- `gdrive-service-account` (the same account as above, but set up to work with a service account)
- `s3` (Amazon S3, mirrored)

The reason for the two gdrive remotes is so that users can clone the repo and easily pull models (it's an open source library and `gdrive` is the default remote), but also to enable CI integration (which afaik requires a service account).

That said, I've confirmed locally that pulling from the service account works successfully. Just not in the Github Actions session for some reason.