I think that the best idea could be to put everything under collapsible blocks under a FAQ in the readme.
https://dvc.org/doc/user-guide/managing-external-data#setting-up-an-external-cache?
@0x2b3bfa0
DVC requires that the project's cache is configured in the same external location as the data that will be tracked (external outputs).
We need the storage first? Probably @casperdcl can help here
could still have CI cache `.dvc/cache`, if that's what you mean?
@casperdcl I'm not sure. What we originally did, or hacked together in Discord, was to attach a volume or an NFS storage. I was guessing that @0x2b3bfa0 was actually referring to that.
It should be technically feasible with something like this:
```bash
sudo apt install nfs-common
sudo mount -t nfs EFS_IP_ADDRESS:/ MOUNTPOINT
```
Using NFS storage for a cache might not be an optimal solution due to latency and file transfer times. AWS EFS is fast, but not that fast.
DVC already supports cache over a variety of network transports and, if we plan to offer alternative solutions, they should be as local and as fast as possible. Probably block-based, mounted to the runner machine itself, and without any control over their lifecycle: exactly iterative/terraform-provider-iterative#89.
In the meantime, we can mention https://dvc.org/doc/user-guide/managing-external-data#setting-up-an-external-cache and, perhaps, tell users how to use an external NFS shared cache as per the hack above.
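As a sketch of what that recipe could look like (the EFS address, mount point and cache location below are placeholders, not a tested configuration):

```bash
# Mount the NFS share on the runner (same commands as the hack above).
sudo apt install nfs-common
sudo mkdir --parents /mnt/dvc-cache
sudo mount -t nfs EFS_IP_ADDRESS:/ /mnt/dvc-cache

# Point the project's DVC cache at the shared mount so new runners
# reuse previously cached data instead of pulling it from the remote.
dvc cache dir /mnt/dvc-cache
```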
Security requires a bit more attention than a list of officially recommended workarounds: see https://github.com/iterative/terraform-provider-iterative/issues/125
DVC already supports cache over a variety of network transports
are you talking about https://dvc.org/doc/user-guide/managing-external-data#examples? If that's the case, just to clarify... remote caches are only useful for external remote outputs (`dvc exp run --external`), which isn't a use case we're discussing atm afaik.
So for our purposes DVC does not support cache over network
So for our purposes DVC does not support cache over network
We would need to clarify "over network" here. I think we are on the same page, but to be precise: the DVC cache supports anything that can be mounted as a volume and symlinked/copied from it into the workspace. That means it can be a NAS (and we had teams with a 70 TB cache organized this way). But we can't do something like `dvc cache dir s3://dvc/storage` at the moment. This is what remotes are for now (though, as far as I understand, something like `dvc cache dir s3://dvc/storage` should be easy to implement).
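For illustration, the mounted-volume setup described above boils down to something like this (a sketch only; `/mnt/nas` is a hypothetical mount point):

```bash
# Use a cache directory that lives on the mounted NAS volume.
dvc cache dir /mnt/nas/dvc-cache

# Relax permissions so several users or runners can share the same cache,
# and link objects into the workspace instead of copying them.
dvc config cache.shared group
dvc config cache.type symlink
```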
There is so much confusion on this point that even we often don't get it https://github.com/iterative/dvc.org/issues/520#issuecomment-855220404
Touché, @casperdcl! It looks like my cursory investigation wasn't enough to form an educated opinion on this topic.
What I recommended on Discord (see https://github.com/iterative/cml/issues/561#issuecomment-852292788 for more information) was just a way of moving the local cache to an NFS share, like the ones provided by AWS, Azure or GCP. It's slow for what users would expect of a cache [citation needed] but, at least, it serves as an intermediate storage to avoid querying the main DVC remote every time CML launches a new instance.
After reviewing the documentation, I noticed that `--external` would use data in situ from a remote storage other than the DVC remote, without pushing data to the latter under any circumstances. I guess that this is not what we want: the main DVC remote should always be the single source of truth for data, and caches, regardless of the implementation details, should only be a convenience storage to accelerate and optimize data transfer operations.
Our reusable cache should be faster and probably cheaper than the remote, both on sequential and random access; otherwise, it would be better to query the remote directly. It may sound like a lapalissade, but it's an important point to consider when choosing the storage type.
Does the DVC cache support read/write access from several instances at the same time? If not, the shared cache concept on iterative/terraform-provider-iterative#89 would not be feasible.
yes, it supports multiple clients
Cache should not hold any data that isn't present on the DVC remote: it should be just a faster and potentially cheaper place to store a reasonably updated copy of the data.
it's a bit more nuanced. There are teams that don't use remotes at all :) But otherwise you are right.
Thank you very much for shedding a bit of light on this, @shcheklein!
yes, it supports multiple clients
Awesome!
it's a bit more nuanced. There are teams that don't use remotes at all
Makes sense after thinking through the details, though calling it a cache in the external data use case might not be too intuitive, even if the working principle is the same. Thanks for the clarification!
as far as I understand something like `dvc cache dir s3://dvc/storage` should be easy to implement
This is exactly what I was looking for! We don't need it for NFS, as it can be regarded as any other mounted filesystem, but it would be a great addition for other storage systems that can't be mounted at the system level in any meaningful way, like S3.
Before proposing the implementation of such a feature, I would also like to point out that we could turn to somewhat-mountable filesystems like HDFS or Lustre, which seem like a good fit for this kind of use case. Unfortunately, this kind of solution is not supported on every cloud without a healthy dose of contrived manual deployment. As users would need to configure it by themselves, we should probably consider availability and ease of use among the main comparison points.
Ok so action point:
@0x2b3bfa0 it would be great if you could put together a performant NFS/volume example repo targeting the use case of extremely large dependencies (>1 TB, where users won't want to `dvc get`/`curl`/`aws cp`, etc.). This is only a proof of concept rather than targeting every single possible user config/setup.
The checkpoint cache stuff is a different issue (#390).
performant NFS
May I add it as a relevant example to the Wikipedia article on oxymorons? I'll follow up later with a comparison of all the solutions we've talked about and some examples.
| | Ephemeral | Block | Object | File |
|---|---|---|---|---|
| Can offer transfer speeds comparable to hard disks? | Yes | Yes | Partially | No |
| Can be reused from different machines, one at a time? | No | Yes | Yes | Yes |
| Can be accessed by many machines at the same time? | No | No | Yes | Yes |
| Can be mounted at the CI/CD level? | No | No | Partially | Partially |
We[citation needed] have been using the word volumes since the beginning of this issue (not to mention iterative/terraform-provider-iterative#89), but the concept of volumes is tightly related to block-based storage. Unfortunately, block-based storage can't be accessed by several machines at the same time, so our only possible choices are object-based storage and some kinds of distributed file-based storage.
| | AWS | Azure | GCP |
|---|---|---|---|
| Name | S3 | Blob Storage | Cloud Storage |
| Alleged speed | 12 GB/s | 12.5 GB/s | N/A |
| Mountable with | s3fs-fuse | azure-storage-fuse | gcsfuse |
| Transit encryption | HTTPS | TLS 1.2 | TLS 1.3 |
| Authentication | Token | Token | Service account |
| | AWS | Azure | GCP |
|---|---|---|---|
| Name | EFS | Files | Filestore |
| Alleged speed* | 50 MB/s | 60 MB/s | 100 MB/s |
| Mountable with | NFSv4 | NFSv4 | NFSv3 |
| Transit encryption† | TLS 1.2 | No | N/A |
| Authentication† | No | N/A | No |
While other distributed filesystems like HDFS or Lustre might be a good option in some scenarios, they haven't been widely adopted by the popular public clouds. AWS FSx looks really good, but isn't portable.
\* Alleged base speed; it gets better if you store more than 10 terabytes with some providers or pay for additional burst speed credits.
† Not that important if the NFS service can only be accessed through the local network, but that would require ClickOps.
@iterative/cml, I'll write some FUSE examples as soon as I make sure that nobody has a personal preference for NFS.
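For the record, such an example would probably look something like this with s3fs-fuse (an untested sketch; the bucket name, mount point and credential handling are assumptions):

```bash
# Install the FUSE adapter and mount the bucket on the runner machine;
# credentials are assumed to come from the instance profile.
sudo apt install s3fs
sudo mkdir --parents /mnt/data
sudo s3fs BUCKET_NAME /mnt/data -o iam_role=auto -o allow_other
```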
Note: requires additional discussion and, probably, will be merged with https://github.com/iterative/terraform-provider-iterative/issues/89
Mounting FUSE devices inside a container is still not possible without the `SYS_ADMIN` capability and some extra privileges:
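(A rough illustration of what that means for Docker; the image and command are placeholders, and the exact flags may vary between container runtimes.)

```bash
# The container needs access to the FUSE device and the SYS_ADMIN
# capability to perform the mount itself.
docker run --rm -it \
  --device /dev/fuse \
  --cap-add SYS_ADMIN \
  --security-opt apparmor=unconfined \
  ubuntu:20.04 bash
```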
Linux already supports unprivileged userspace mounts as per https://github.com/torvalds/linux/commit/4ad769f3c346 and we're just missing support from container runtimes.
In the meantime, we can mount the filesystem at the instance level (on machines) or with additional privileges (on containers), but this could have a negative impact on container isolation.
The question is: does attaching object-based or file-based storage make any sense if we take into account the limitations exposed on iterative/dvc.org#2587?
Attaching this kind of storage would be approximately as practical as pulling/pushing data with `dvc` to any of the supported remotes; the only difference would be that data manipulation would be done in situ without requiring a local scratchpad.
Getting peaks of ~500 MiB/s (yes, the ISO 80000-13 ones) on S3 with a beefy c5a.24xlarge instance after fine-tuning rclone settings.
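The kind of tuning involved looks roughly like this (an assumed invocation; the remote name, bucket and flag values are illustrative, not the exact settings behind the figure above):

```bash
# More parallel transfers and larger read chunks tend to help with big objects.
rclone mount s3remote:BUCKET_NAME /mnt/data \
  --transfers 32 \
  --vfs-read-chunk-size 128M \
  --buffer-size 256M \
  --vfs-cache-mode writes
```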
Closed with https://github.com/iterative/terraform-provider-iterative/pull/237
Client-side sibling of https://github.com/iterative/terraform-provider-iterative/issues/89
Until we have ~~https://github.com/iterative/terraform-provider-iterative/issues/107~~ https://github.com/iterative/terraform-provider-iterative/issues/123 and https://github.com/iterative/terraform-provider-iterative/issues/89, we could offer some of the recipes we have already crafted as proposed solutions for users in the Discord channel. Should we add those simple scenarios to the docs as a FAQ?