Open rivershah opened 2 years ago
Hi there,
I understand your concern about the current status of the project. While the recent lack of releases or merged PRs may only indicate a temporary slowdown, it is worth exploring alternative ways to run distributed learning in the cloud.
Here are a few suggested pathways for users seeking to leverage distributed learning:
TensorFlow Extended (TFX): TFX is a powerful framework specifically designed to orchestrate end-to-end ML pipelines, including distributed training and serving components.
Apache Spark MLlib: An excellent choice for distributed machine learning, Spark MLlib offers scalable algorithms and tools for large-scale data processing and training.
PyTorch Distributed: PyTorch also provides capabilities for distributed training, enabling users to leverage cloud infrastructure efficiently.
Ray and Ray Tune: Ray is a flexible, high-performance distributed execution framework, while Ray Tune focuses on hyperparameter tuning and distributed experimentation.
Dask: Dask provides scalable computing in Python, including distributed machine learning functionalities.
I recommend evaluating these alternatives against your specific use case and requirements. It is also worth keeping an eye on this repository, as the project may see renewed activity or future enhancements.
Thank you for your patience and understanding.
Seems like there have been no releases or merged PRs for some time now. If this project is deprecated, what are the suggested pathways for a user to enable distributed learning in the cloud, please?