iterative / dvc

🦉 Data Versioning and ML Experiments
https://dvc.org
Apache License 2.0

Dropping .dir #8884

Closed dberenbaum closed 8 months ago

dberenbaum commented 1 year ago

In #8849, we stopped serializing the directory info in the resulting .dvc/dvc.lock files for cloud versioned remotes. Can we do the same everywhere?

This would help with a bunch of existing issues:

Automatically pushing and pulling the .dir files from the remote could also solve a lot of these problems, but it seems like a worse UX. It's less transparent, harder for users to manage, and fails when users don't have access to the remote or forget to push something.

How much do we really need the reference to the .dir file? If necessary, could we serialize that reference somewhere that's not git-tracked, like in a shadow .dvc/tmp/mydataset.dvcfile?
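To make the proposal concrete, here is a hedged sketch of the difference (the field names follow the cloud-versioning examples later in this thread; the exact schema and hash values are illustrative, not the final design):

```yaml
# Current form: the directory is one entry pointing at a .dir
# object in the cache/remote (hash value here is made up).
outs:
- md5: 1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d.dir
  path: mydataset

# Proposed form: the per-file entries are serialized directly,
# as already done for cloud-versioned remotes in #8849.
outs:
- path: mydataset
  files:
  - relpath: list_landmarks_align_celeba.csv
    md5: e12a3125253832211b718c810b5fe92f
```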

skshetry commented 1 year ago

I would say that we should wait to see how cloud versioning goes before making any decisions here.

skshetry commented 1 year ago

Also, yaml parsing is many times slower than json parsing.
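A minimal, self-contained way to check that claim (synthetic relpath/md5 data; `ruamel.yaml` is the parser DVC uses but is optional here, and the timings are only illustrative):

```python
import io
import json
import timeit

# Synthetic payload shaped like a large file listing (relpath/md5 pairs).
entries = [{"relpath": f"file_{i}.csv", "md5": f"{i:032x}"} for i in range(50_000)]
json_text = json.dumps(entries)

t_json = timeit.timeit(lambda: json.loads(json_text), number=3)
print(f"json.loads: {t_json:.2f}s for 3 runs")

try:
    from ruamel.yaml import YAML  # may not be installed; skip gracefully

    yaml = YAML(typ="safe")
    buf = io.StringIO()
    yaml.dump(entries, buf)
    yaml_text = buf.getvalue()

    t_yaml = timeit.timeit(lambda: yaml.load(io.StringIO(yaml_text)), number=3)
    print(f"yaml.load:  {t_yaml:.2f}s for 3 runs")
except ImportError:
    print("ruamel.yaml not installed; skipping the YAML side")
```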

daavoo commented 1 year ago

> Also, yaml parsing is many times slower than json parsing.

Any reason for lock files to be in YAML?

skshetry commented 1 year ago

We wanted the lock file to be human-readable too (its other half, dvc.yaml, is also YAML), so that it's easy to manage, easy to fix conflicts in, and more transparent to the user (it's more readable and easier for humans to parse).

dberenbaum commented 1 year ago

Possibly related: https://github.com/iterative/studio/issues/4968. Studio checks the size limit of outputs but not at a granular level. Would it be simpler for them to do this granularly if they had the individual files in dvc.lock?

dberenbaum commented 1 year ago

> I would say that we wait how cloud versioning goes before making any decisions here.

What would be the harm in at least adding a flag to opt in?

efiop commented 1 year ago

I agree that we should just drop it. Especially for imports. This makes much more sense than creating .dir files which poison our cache and are complicated to manage/load.

It seems like it worked out pretty nicely for cloud versioning, so I think we are ready to switch.

dberenbaum commented 1 year ago

We have had some reports that dvc.lock parsing is slow for people with many parametrized stages, so that gives me some pause about whether this would work well for large datasets. Let's at least test with some sizeable dataset before we go too deep.

skshetry commented 1 year ago

I have tested a cloud-versioned CelebA dataset; it took 30 seconds to parse. It was a .dvc file, though.

dberenbaum commented 1 year ago

@skshetry Do you know how long it takes currently with .dir and json parsing?

dberenbaum commented 1 year ago

Also, how hard would it be to use a more efficient yaml parser?

efiop commented 1 year ago

@dberenbaum yaml is pretty bad overall, but json should be significantly better. This relates to the topic of dvc.lock files: maybe we migrate to them (a data/artifacts section in dvc.yaml), or maybe we use json in data.dvc (json and yaml are easy to distinguish, so that shouldn't be a problem, though it might also be an opportunity to create something new).

We might be able to shave unneeded features off the yaml parser, or use something that does that for us out of the box, but yaml is just not suitable for that many entries.
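The "json and yaml are easy to distinguish" point can be illustrated with a tiny heuristic (a hypothetical helper, not DVC API): machine-written JSON starts with `{` or `[`, while DVC's YAML files start with a key or a list dash:

```python
def sniff_dvcfile_format(text: str) -> str:
    """Illustrative heuristic only: machine-written JSON starts with
    '{' or '[', while a YAML .dvc file starts with a key or '-'."""
    head = text.lstrip()[:1]
    return "json" if head in ("{", "[") else "yaml"


print(sniff_dvcfile_format('{"outs": []}'))         # json
print(sniff_dvcfile_format("outs:\n- md5: abc\n"))  # yaml
```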

skshetry commented 1 year ago

We use yaml parser written in C. You can check if you have it installed using:

```shell
python -c "import ruamel.yaml; print(ruamel.yaml.__with_libyaml__)"
```

Here are some benchmarks on yaml/json parsing based on celeba:

Parsing .dir file

```python
>>> %timeit _ = json.loads(json_text)
78.5 ms ± 191 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

>>> %timeit _ = orjson.loads(json_text)
54.7 ms ± 590 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
```

Note that the actual parsing of the .dir file takes about 1s due to iterative/dvc-data#350.

Parsing .dvc file

1. Read-only

    ```python
    >>> yaml_text = Path("dataset.dvc").read_text()
    >>> %time _ = loads_yaml(yaml_text)
    CPU times: user 30.8 s, sys: 103 ms, total: 30.9 s
    Wall time: 31 s
    ```

2. Reading .dvc file for update (round-trip)

    ```python
    >>> %time _ = loads_yaml(yaml_text, "rt")
    CPU times: user 3min 30s, sys: 486 ms, total: 3min 31s
    Wall time: 3min 31s
    ```

3. Modifying .dvc file with no changes (also involves a round-trip)

    ```python
    >>> %%time
    ... with modify_yaml("dataset.dvc") as d:
    ...     pass
    CPU times: user 5min 24s, sys: 848 ms, total: 5min 25s
    Wall time: 5min 26s
    ```

4. Schema validation

    ```python
    >>> %time _ = COMPILED_SINGLE_STAGE_SCHEMA(d)
    CPU times: user 5.52 s, sys: 3.22 ms, total: 5.52 s
    Wall time: 5.53 s
    ```

Note that we have three times more data in the .dvc file than in .dir. The .dir file is 21MB and has two items per entry (relpath and md5). The .dvc file spans 1,418,224 lines, is 50MB in size, and has additional metadata fields (size and cloud).

An entry in the .dvc file:

```yaml
- relpath: list_landmarks_align_celeba.csv
  md5: e12a3125253832211b718c810b5fe92f
  size: 9932092
  cloud:
    cloud_versioned:
      etag: 91f987c43b7f37a06be88cd3e7e6dae1
      version_id: dFK7BuMO0mEDHdcqpEcyKJIbiTo3rb3p
```

The corresponding entry in the .dir file:

```json
{
    "md5": "e12a3125253832211b718c810b5fe92f",
    "relpath": "list_landmarks_align_celeba.csv"
},
```

YAML is supposed to be a human-readable format, so it is going to be slower, and is not meant to be abused in this way.