IBM / FfDL

Fabric for Deep Learning (FfDL, pronounced fiddle) is a Deep Learning Platform offering TensorFlow, Caffe, PyTorch etc. as a Service on Kubernetes
https://developer.ibm.com/code/patterns/deploy-and-use-a-multi-framework-deep-learning-platform-on-kubernetes/
Apache License 2.0

external NFS storage support #94

Open cloustone opened 6 years ago

cloustone commented 6 years ago

Hello, @FfDL. We deploy FfDL in a private environment in which S3 and Swift are not available; only NFS external storage is supported. For the model definition file, we can use localstack in the current dev environment; for the training data, we wish to use NFS. The following steps are our adaptations for NFS (an illustrative sketch follows the list).

  1. Deploy an external NFS server out of kubernetes.
  2. Add PVs declaration in templates folder
  3. Add PVCs file "/etc/static-volumes/PVCs.yaml" in LCM docker environment
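
For illustration only, here is a minimal sketch of what steps 2 and 3 might look like; the server address, export path, and resource names are assumptions, not values from this thread:

```yaml
# Hypothetical static PV pointing at the external NFS server (step 2, templates folder).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ffdl-nfs-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10       # external NFS server running outside Kubernetes
    path: /exports/ffdl     # exported directory holding the training data
---
# Hypothetical PVC to list in /etc/static-volumes/PVCs.yaml inside the LCM container (step 3).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ffdl-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""      # bind to the statically created PV above, not a dynamic class
  resources:
    requests:
      storage: 20Gi
```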

We are confirming the above method; however, a new question has already come up: if two models are submitted and both use static NFS external storage at the same mount point, is that a problem?

Could you please confirm the above method and answer the question, or provide us with the right solution?

Thanks

atinsood commented 6 years ago

@cloustone there's work going on to clean up that tight integration, and we should have something out relatively soon.

The thought process is that you can create a PVC, load all the training data onto that PVC, and provide a PVC reference id/name in the manifest file, similar to the way you provide S3 details in the manifest. The learner can then mount that PVC rather than the S3 storage and use the data.
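
For illustration, a hypothetical manifest excerpt showing this idea; the field names (`type: mount_volume`, `connection.name`) are assumptions, not a confirmed FfDL manifest schema:

```yaml
# Hypothetical data_stores entry: reference a pre-created PVC instead of S3 credentials.
data_stores:
  - id: training-data
    type: mount_volume        # assumed type keyword; the real one may differ
    training_data:
      container: nfs_data     # directory inside the mounted volume
    training_results:
      container: nfs_results
    connection:
      name: ffdl-nfs-pvc      # name of the PVC loaded with training data beforehand
```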

cloustone commented 6 years ago

@atinsood thanks for your reply. I just used dynamic external storage with NFS to deploy model training. It seems to work.
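
For context, dynamic NFS storage usually means a StorageClass backed by an NFS provisioner; below is a minimal sketch, assuming an external provisioner (for example nfs-client-provisioner) is already deployed under the name `example.com/nfs`:

```yaml
# Hypothetical StorageClass for dynamic NFS provisioning.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-dynamic
provisioner: example.com/nfs   # assumed provisioner name; depends on your NFS provisioner deployment
reclaimPolicy: Delete
---
# A PVC against that class; the provisioner creates and binds a matching PV automatically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: training-data-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-dynamic
  resources:
    requests:
      storage: 20Gi
```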

animeshsingh commented 6 years ago

@cloustone would love to get more details about how you did this. We would love to include a PR with a doc describing how to leverage NFS, with the steps you defined above:

"The following steps are our adaptions for NFS.

Deploy an external NFS server out of kubernetes. Add PVs declaration in templates folder Add PVCs file "/etc/static-volumes/PVCs.yaml" in LCM docker environment"

atinsood commented 6 years ago

@cloustone "I just used dynamic external storage with NFS to deploy model training. It seems to work." Curious how you got this going from a technical perspective :)

Thinking more about your initial suggestion, you could also have a ConfigMap with a list of PVCs that you have created beforehand, mount it as a volume in the LCM, and then the LCM can just pick one PVC and allocate it to training (basically change https://github.com/IBM/FfDL/blob/master/lcm/service/lcm/learner_deployment_helpers.go#L493 and add the volume mount); a sketch of such a ConfigMap follows below.

I wonder if you went this route or a different one
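
A rough sketch of what such a ConfigMap could look like; the name, key, and layout are assumptions for illustration, not an existing FfDL convention:

```yaml
# Hypothetical ConfigMap listing pre-created PVCs for the LCM to hand out to training jobs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ffdl-static-pvcs
data:
  PVCs.yaml: |
    pvcs:
      - name: ffdl-nfs-pvc-1   # already bound to a static NFS PV
      - name: ffdl-nfs-pvc-2
```

The LCM would mount this ConfigMap as a volume (for example at /etc/static-volumes), pick an unused entry, and add the corresponding volume and volume mount to the learner deployment it builds.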

cloustone commented 6 years ago

@atinsood Yes, the method is almost the same as what you suggested:

"Thinking more about your initial suggestion, you could also have a ConfigMap with a list of PVCs that you have created beforehand, mount it as a volume in the LCM, and then the LCM can just pick one PVC and allocate it to training."

atinsood commented 6 years ago

@cloustone Another interesting thing you can try is this: https://ai.intel.com/kubernetes-volume-controller-kvc-data-management-tailored-for-machine-learning-workloads-in-kubernetes/

https://github.com/IntelAI/vck

We have been looking into this as well. It can help bring the data down to your nodes running the GPUs, and you'd end up accessing the data as you would access local data on those machines.

This is an interesting approach and should work well if you don't need isolation of training data for every training job.

cloustone commented 6 years ago

@atinsood Thanks, we will try this method depending on our requirements.

Eric-Zhang1990 commented 5 years ago

"Hello, @FfDL. We deploy FfDL in a private environment in which S3 and Swift are not available; only NFS external storage is supported. For the model definition file, we can use localstack in the current dev environment; for the training data, we wish to use NFS. The following steps are our adaptations for NFS.

  1. Deploy an external NFS server out of kubernetes.
  2. Add PVs declaration in templates folder
  3. Add PVCs file "/etc/static-volumes/PVCs.yaml" in LCM docker environment

We are confirming the above method; however, a new question has already come up: if two models are submitted and both use static NFS external storage at the same mount point, is that a problem?

Could you please confirm the above method and answer the question, or provide us with the right solution?"

Thanks

@cloustone Can you please tell me in detail how to use NFS? I also want to use NFS but I do not know how. Which files did you change, and how did you change them? Thank you very much.

Eric-Zhang1990 commented 5 years ago

"@cloustone Another interesting thing you can try is this: https://ai.intel.com/kubernetes-volume-controller-kvc-data-management-tailored-for-machine-learning-workloads-in-kubernetes/

https://github.com/IntelAI/vck

We have been looking into this as well. It can help bring the data down to your nodes running the GPUs, and you'd end up accessing the data as you would access local data on those machines.

This is an interesting approach and should work well if you don't need isolation of training data for every training job."

@atinsood Have you added this method to FfDL? Or do you have documentation on how to use this method in FfDL? Thank you very much.

atinsood commented 5 years ago

@Tomcli @fplk did you try the Intel VCK approach with FfDL?

sboagibm commented 5 years ago

@atinsood @Eric-Zhang1990 No, we do not currently have VCK integration in FfDL.

@atinsood said:

and you'd end up accessing the data as you would access local data on those machines.

Which I think just implies a host mount, which I think is enabled in the current FfDL. So you could give that a try.
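
As a rough illustration of a host mount (generic Kubernetes syntax, not an FfDL-specific feature; the image and paths are placeholders):

```yaml
# Generic hostPath sketch: a pod mounting data that VCK (or an admin) has already
# placed on the node, so the container reads it like local data.
apiVersion: v1
kind: Pod
metadata:
  name: learner-hostpath-example
spec:
  containers:
    - name: learner
      image: tensorflow/tensorflow:1.13.1   # placeholder learner image
      command: ["ls", "/mnt/training-data"]
      volumeMounts:
        - name: local-training-data
          mountPath: /mnt/training-data
  volumes:
    - name: local-training-data
      hostPath:
        path: /var/lib/ffdl/training-data   # assumed path on the GPU node
        type: Directory
```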

@cloustone said:

"Thinking more about your initial suggestion, you could also have a ConfigMap with a list of PVCs that you have created beforehand, mount it as a volume in the LCM, and then the LCM can just pick one PVC and allocate it to training."

We do have an internal PR that enables the use of generic PVCs for training and result volumes. I don't think we need a ConfigMap? The idea is that PVC allocation is done by some other process, and then we just point to the training data and result data volumes by name in the manifest.

Perhaps we can go ahead and externalize this in the next few days, at least on a branch, and you could give it a try. Let me see what I can do.

Eric-Zhang1990 commented 5 years ago

@sboagibm Thank you for your kind reply. You said "then we just point to the training data and result data volumes by name, in the manifest"; can you give me an example of a manifest file that uses a local path on the host?

I found a file at "https://github.com/IBM/FfDL/blob/vck-patch/etc/examples/vck-integration.md"; is what you describe like this manifest file? If so, can I add multiple learners in it?

Thank you very much.

Eric-Zhang1990 commented 5 years ago

@cloustone @atinsood @sboagibm How do we use NFS to store data and start training jobs? Can you provide more detailed docs for us? Thanks.