iterative / cml

♾️ CML - Continuous Machine Learning | CI/CD for ML
http://cml.dev
Apache License 2.0

tutorial: NFS/volumes #561

Closed DavidGOrtega closed 2 years ago

DavidGOrtega commented 3 years ago

Client-side sibling of https://github.com/iterative/terraform-provider-iterative/issues/89

Until we have ~https://github.com/iterative/terraform-provider-iterative/issues/107~ https://github.com/iterative/terraform-provider-iterative/issues/123 and https://github.com/iterative/terraform-provider-iterative/issues/89, we could offer some of the recipes we have already crafted as proposed solutions for users in the Discord channel. Should we document those simple scenarios in the docs as a FAQ?

DavidGOrtega commented 3 years ago

I think the best approach would be to put everything in collapsible blocks under a FAQ section in the README.

0x2b3bfa0 commented 3 years ago

https://dvc.org/doc/user-guide/managing-external-data#setting-up-an-external-cache?

DavidGOrtega commented 3 years ago

https://dvc.org/doc/user-guide/managing-external-data#setting-up-an-external-cache?

@0x2b3bfa0

DVC requires that the project's cache is configured in the same external location as the data that will be tracked (external outputs).

Do we need the storage first? 🤔 Probably @casperdcl can help here.

casperdcl commented 3 years ago

We could still have CI cache the .dvc/cache directory, if that's what you mean?

DavidGOrtega commented 3 years ago

@casperdcl I'm not sure. What we originally did (or rather hacked together) in Discord was to attach a volume or NFS storage. I was guessing that @0x2b3bfa0 was actually referring to that.

0x2b3bfa0 commented 3 years ago

It should be technically feasible with something like this:

# install the NFS client
sudo apt install nfs-common
# mount the EFS export at the chosen mount point
sudo mount -t nfs EFS_IP_ADDRESS:/ MOUNTPOINT

(From Discord)

Using NFS storage for a cache might not be an optimal solution due to latency and file transfer times. AWS EFS is fast, but not that fast.

0x2b3bfa0 commented 3 years ago

DVC already supports cache over a variety of network transports and, if we plan to offer alternative solutions, they should be as local and as fast as possible. Probably block-based, mounted to the runner machine itself, and without any control over their lifecycle: exactly iterative/terraform-provider-iterative#89.

In the meantime, we can mention https://dvc.org/doc/user-guide/managing-external-data#setting-up-an-external-cache and, perhaps, tell users how to use an external NFS shared cache as per the hack above.

0x2b3bfa0 commented 3 years ago

Security requires a bit more attention than a list of officially recommended workarounds: see https://github.com/iterative/terraform-provider-iterative/issues/125

casperdcl commented 3 years ago

DVC already supports cache over a variety of network transports

are you talking about https://dvc.org/doc/user-guide/managing-external-data#examples? If that's the case, just to clarify... remote caches are only useful for external remote outputs (dvc exp run --external) which isn't a use case we're discussing atm afaik.

So for our purposes DVC does not support cache over network

shcheklein commented 3 years ago

So for our purposes DVC does not support cache over network

We would need to clarify "over network" here. I think we are on the same page, but to be precise - DVC cache supports anything that can be mounted as a volume and symlinked/copied from it into workspace. It means that it can be NAS (and we had teams with 70TB cache organized this way). But we can't do something like dvc cache dir s3://dvc/storage at the moment. This is what remotes are for now (though as far as I understand something like dvc cache dir s3://dvc/storage should be easy to implement).
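
For illustration, here is a minimal sketch of that setup; it assumes a share is already mounted at /mnt/dvc-cache (a hypothetical path) and uses standard DVC configuration commands:

# point the project cache at the mounted share (hypothetical mount point)
dvc cache dir /mnt/dvc-cache
# allow several users/runners to write to the same cache directory
dvc config cache.shared group
# link data from the cache into the workspace instead of copying it
dvc config cache.type symlink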

0x2b3bfa0 commented 3 years ago

TL;DR

There is so much confusion on this point that even we often don't get it https://github.com/iterative/dvc.org/issues/520#issuecomment-855220404

Touché, @casperdcl! 😄 It looks like my cursory investigation wasn't enough to form an educated opinion on this topic. 🙈

What I recommended on Discord (see https://github.com/iterative/cml/issues/561#issuecomment-852292788 for more information) was just a way of moving the local cache to an NFS share, like the ones provided by AWS, Azure or GCP. It's slow for what users would expect of a cache [citation needed] but, at least, it serves as intermediate storage to avoid querying the main DVC remote every time CML launches a new instance.

After reviewing the documentation, I noticed that --external would use data in situ from a remote storage other than the DVC remote, without pushing data to the latter under any circumstances. I guess that this is not what we want: the main DVC remote should always be the single source of truth for data, and caches, regardless of the implementation details, should only be a convenience storage to accelerate and optimize data transfer operations.

Our reusable cache should be faster and probably cheaper than the remote, both on sequential and random access; otherwise, it would be better to query the remote directly. It may sound like a lapalissade, but it's an important point to consider when choosing the storage type.

shcheklein commented 3 years ago

Does the DVC cache support read/write access from several instances at the same time? If not, the shared cache concept on iterative/terraform-provider-iterative#89 would not be feasible.

yes, it supports multiple clients

Cache should not hold any data that isn't present on the DVC remote: it should be just a faster and potentially cheaper place to store a reasonably updated copy of the data.

it's a bit more nuanced. There are teams that don't use remotes at all :) But otherwise you are right.

0x2b3bfa0 commented 3 years ago

Thank you very much for shedding a bit of light on this, @shcheklein! 🙏🏼

yes, it supports multiple clients

Awesome! 🎉

it's a bit more nuanced. There are teams that don't use remotes at all

Makes sense after thinking about the details, though calling it a cache in the external data use case might not be too intuitive, even if the working principle is the same. 🙃 Thanks for the clarification!

0x2b3bfa0 commented 3 years ago

as far as I understand something like dvc cache dir s3://dvc/storage should be easy to implement

This is exactly what I was looking for! We don't need it for NFS, as it can be regarded as any other mounted filesystem, but it would be a great addition for other storage systems that can't be mounted at the system level in any meaningful way, like S3.

Before proposing the implementation of such a feature, I would also like to point out that we could resort to somewhat mountable filesystems like HDFS or Lustre, which seem a good fit for this kind of use case:

Many workloads such as machine learning, high performance computing (HPC), video rendering, and financial simulations depend on compute instances accessing the same set of data through high-performance shared storage (AWS FSx)

The pity is that this kind of solution is not supported on every cloud without a healthy dose of contrived manual deployment. Since users would need to configure it themselves, we should probably consider availability and ease of use among the main comparison points.

casperdcl commented 3 years ago

Ok so action point:

@0x2b3bfa0 it would be great if you could put together a performant NFS/volume example repo targeting the use case of extremely large dependencies (>1TB, where users won't want to dvc get/curl/aws cp etc). This is only a proof of concept rather than something targeting every single possible user config/setup.

The checkpoint cache stuff is a different issue (#390).
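
Not a final design, but a rough sketch of what such a proof-of-concept runner step could look like, assuming a pre-provisioned EFS share (EFS_IP_ADDRESS and /mnt/dvc-cache are placeholders):

# install the NFS client and mount the shared volume on the fresh runner
sudo apt-get install --yes nfs-common
sudo mkdir --parents /mnt/dvc-cache
sudo mount -t nfs EFS_IP_ADDRESS:/ /mnt/dvc-cache
# reuse the shared cache so the >1TB dependencies are linked, not re-downloaded
dvc cache dir /mnt/dvc-cache
dvc pull        # fetches only what the shared cache is still missing
dvc repro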

0x2b3bfa0 commented 3 years ago

performant NFS

May I add it as a relevant example to the Wikipedia page on oxymoron? 😈 I'll follow up later with a comparison of all the solutions we've talked about and some examples.

0x2b3bfa0 commented 3 years ago

Storage requirements

  1. As fast as the average DVC remote, at least.
  2. Able to persist data between consecutive runs.
  3. Accessible by several machines at the same time.
  4. Available on all the public clouds we plan to support.

General storage types

|                                                        | Ephemeral | Block | Object | File |
| ------------------------------------------------------ | --------- | ----- | ------ | ---- |
| Can offer transfer speeds comparable to hard disks?    | ✅        | ✅    | 🟠     | 🔴   |
| Can be reused from different machines, one at a time?  | 🚫        | ✅    | ✅     | ✅   |
| Can be accessed by many machines at the same time?     | 🚫        | 🚫    | ✅     | ✅   |
| Can be mounted at the CI/CD level?                     | 🚫        | 🚫    | 🟠     | 🟠   |

We[citation needed] have been using the word volumes since the beginning of this issue (not to mention iterative/terraform-provider-iterative#89), but the concept of volumes is tightly related to block-based storage. Unfortunately, block-based storage can't be accessed by several machines at the same time, so our only possible choices are object-based storage and some kinds of distributed file-based storage.

Specific storage types

Object-based

|                    | AWS          | Azure                 | GCP             |
| ------------------ | ------------ | --------------------- | --------------- |
| Name               | S3           | Blob Storage          | Cloud Storage   |
| Alleged speed      | 12 GB/s      | 12.5 GB/s             | N/A             |
| Mountable with     | ✅ s3fs-fuse | ✅ azure-storage-fuse | ✅ gcsfuse      |
| Transit encryption | ✅ HTTPS     | ✅ TLS 1.2            | ✅ TLS 1.3      |
| Authentication     | Token        | Token                 | Service account |

File-based

|                     | AWS        | Azure    | GCP       |
| ------------------- | ---------- | -------- | --------- |
| Name                | EFS        | Files    | Filestore |
| Alleged speed*      | 50 MB/s    | 60 MB/s  | 100 MB/s  |
| Mountable with      | ✅ NFSv4   | ✅ NFSv4 | ✅ NFSv3  |
| Transit encryption† | ✅ TLS 1.2 | 🚫 NO    | 🚫 N/A    |
| Authentication†     | 🚫 NO      | 🚫 N/A   | 🚫 NO     |

Others

While other distributed filesystems like HDFS or Lustre might be a good option in some scenarios, they haven't been widely adopted by the popular public clouds. AWS FSx looks really good, but isn't portable.

* Alleged base speed; it gets better if you store more than 10 terabytes with some providers or pay additional burst speed credits.
† Not that important if the NFS service can only be accessed through the local network, but that would require ClickOps.

Recommended reads

0x2b3bfa0 commented 3 years ago

🔔 @iterative/cml, I'll write some FUSE examples as soon as I make sure that nobody has a personal preference for NFS. ⚔️
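
For reference, the FUSE mounts in question would look roughly like this; bucket names, mount points and config paths are placeholders, and credentials are assumed to be set up through each tool's usual mechanism:

# AWS S3 via s3fs-fuse (using the instance's IAM role for authentication)
s3fs my-bucket /mnt/data -o iam_role=auto
# Google Cloud Storage via gcsfuse (using application default credentials)
gcsfuse my-bucket /mnt/data
# Azure Blob Storage via azure-storage-fuse (blobfuse), driven by a config file
blobfuse /mnt/data --tmp-path=/mnt/blobfusetmp --config-file=connection.cfg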

0x2b3bfa0 commented 3 years ago

Note: this requires additional discussion and will probably be merged with https://github.com/iterative/terraform-provider-iterative/issues/89

casperdcl commented 3 years ago

related: https://github.com/iterative/dvc.org/pull/2587

0x2b3bfa0 commented 3 years ago

Mounting FUSE devices inside a container is still not possible without the SYS_ADMIN capability and some extra privileges:

Linux already supports unprivileged userspace mounts as per https://github.com/torvalds/linux/commit/4ad769f3c346 and we're just missing support from container runtimes.
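
In Docker terms that currently translates to something along these lines (an illustrative invocation; my-image is a placeholder and the exact AppArmor/seccomp adjustments depend on the runtime):

# FUSE inside a container still needs the fuse device node plus CAP_SYS_ADMIN
docker run --device /dev/fuse --cap-add SYS_ADMIN \
  --security-opt apparmor:unconfined my-image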

0x2b3bfa0 commented 3 years ago

In the meantime, we can mount the filesystem at the instance level (on machines) or with additional privileges (on containers), but this could have a negative impact on container isolation.

0x2b3bfa0 commented 3 years ago

The question is: does attaching object-based or file-based storage make any sense if we take into account the limitations exposed in iterative/dvc.org#2587?

Attaching this kind of storage would be approximately as practical as pulling/pushing data with dvc to any of the supported remotes, and the only difference would be that data manipulation would be done in situ without requiring a local scratchpad.

0x2b3bfa0 commented 2 years ago

Real-life performance is between 30 and 50 MiB/s with both rclone and FUSE on all the supported cloud providers. Might be network-bound, though.

0x2b3bfa0 commented 2 years ago

Getting peaks of ~500 MiB/s (yes, the ISO 80000-13 ones) on S3 with a beefy c5a.24xlarge instance after fine-tuning rclone settings.
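
For context, the kind of tuning involved looks roughly like this; the values are illustrative rather than the exact settings used, and s3: stands for a configured rclone remote:

# more parallel transfers and larger chunks help saturate the instance's bandwidth
rclone copy s3:my-bucket /mnt/scratch \
  --transfers 32 --checkers 64 \
  --s3-chunk-size 64M --s3-upload-concurrency 8 \
  --buffer-size 128M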

Closed with https://github.com/iterative/terraform-provider-iterative/pull/237

0x2b3bfa0 commented 1 year ago

https://github.com/iterative/cml-playground/pull/247