py-why / causaltune

AutoML for causal inference.
Apache License 2.0

Update ray[tune] requirement from ~=1.11.0 to ~=2.4.0 #257

Closed dependabot[bot] closed 1 year ago

dependabot[bot] commented 1 year ago

Updates the requirements on ray[tune] to permit the latest version.
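The `~=` ("compatible release") pin being bumped here means the old constraint allowed any `1.11.x` release, and the new one allows any `2.4.x` release but not `2.5.0`. A minimal stdlib sketch of that rule (simplified: plain dotted numeric versions only, no pre-releases; see PEP 440 for the full semantics):

```python
def satisfies_compatible_release(version: str, spec: str) -> bool:
    """Check `version` against a compatible-release spec like '~=2.4.0'.

    ~=X.Y.Z is equivalent to: >=X.Y.Z and ==X.Y.* (last component may float).
    Simplified sketch: assumes purely numeric dotted versions.
    """
    spec_parts = [int(p) for p in spec.split(".")]
    ver_parts = [int(p) for p in version.split(".")]

    # All but the last spec component must match exactly (the ==X.Y.* part).
    prefix = spec_parts[:-1]
    if ver_parts[: len(prefix)] != prefix:
        return False

    # Pad both to the same length, then enforce the >=X.Y.Z part.
    n = max(len(ver_parts), len(spec_parts))
    return ver_parts + [0] * (n - len(ver_parts)) >= spec_parts + [0] * (n - len(spec_parts))


# The PR moves the pin from ~=1.11.0 to ~=2.4.0:
print(satisfies_compatible_release("2.4.3", "2.4.0"))   # True  (patch releases allowed)
print(satisfies_compatible_release("2.5.0", "2.4.0"))   # False (minor bump excluded)
print(satisfies_compatible_release("1.11.1", "2.4.0"))  # False (old series excluded)
```

In `requirements.txt` terms, the change is simply `ray[tune]~=1.11.0` becoming `ray[tune]~=2.4.0`.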

Release notes

Sourced from ray[tune]'s releases.

Ray-2.4.0

Ray 2.4 - Generative AI and LLM support

Over the last few months, we have seen a flurry of innovative activity around generative AI models and large language models (LLM). To continue our effort to ensure Ray provides a pivotal compute substrate for generative AI workloads and addresses the challenges (as explained in our blog series), we have invested engineering efforts in this release to ensure that these open source LLM models and workloads are accessible to the open source community and performant with Ray.

This release includes new examples for training, batch inference, and serving with your own LLM.

Generative AI and LLM Examples

Ray Train enhancements

  • We're introducing the LightningTrainer, allowing you to scale your PyTorch Lightning training on Ray. As part of our continued effort toward seamless integration and ease of use, we have enhanced and replaced our existing ray_lightning integration, which was widely adopted, with support for the latest changes to PyTorch Lightning.
  • We're releasing an AccelerateTrainer, allowing you to run HuggingFace Accelerate and DeepSpeed on Ray with minimal code changes. This Trainer integrates with the rest of the Ray ecosystem, including the ability to run distributed hyperparameter tuning with each trial being a distributed training job.

Ray Data highlights

  • Streaming execution is enabled by default, providing users with a more efficient data processing pipeline that can handle larger datasets and minimize memory consumption. Check out the docs here: (doc)
  • We've implemented asynchronous batch prefetching of Dataset.iter_batches (doc), improving performance by fetching data in parallel while the main thread continues processing, thus reducing waiting time.
  • Added support for reading SQL databases (doc), enabling users to seamlessly integrate relational databases into their Ray Data workflows.
  • Introduced support for reading WebDataset (doc), a common format for high-performance deep learning training jobs.
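The asynchronous batch-prefetching idea above can be illustrated with a plain producer-consumer queue. This is a stdlib-only concept sketch, not Ray's implementation; the function name is hypothetical:

```python
import queue
import threading


def prefetch_batches(batch_iter, buffer_size=2):
    """Yield batches from `batch_iter`, fetching ahead on a background thread.

    While the consumer processes batch N, the producer thread is already
    pulling batch N+1 into the bounded queue, hiding fetch latency.
    """
    q = queue.Queue(maxsize=buffer_size)
    sentinel = object()  # marks end of the stream

    def producer():
        for batch in batch_iter:
            q.put(batch)  # blocks if the buffer is full (backpressure)
        q.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        batch = q.get()
        if batch is sentinel:
            return
        yield batch


batches = prefetch_batches(iter([[1, 2], [3, 4], [5, 6]]))
print(list(batches))  # [[1, 2], [3, 4], [5, 6]]
```

The bounded queue is the key design choice: it lets the producer run ahead by at most `buffer_size` batches, so memory use stays capped even when fetching is much faster than processing.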

Ray Serve highlights

  • Multi-app CLI & REST API support is now available, allowing users to manage multiple applications with different configurations within a single Ray Serve deployment. This simplifies deployment and scaling processes for users with multiple applications. (doc)
  • Enhanced logging and metrics for Serve applications, giving users better visibility into their application's performance and facilitating easier debugging and monitoring. (doc)
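As a sketch of what multi-app deployment looks like, the Serve config file gains a top-level `applications` list, with each entry carrying its own name, route prefix, and import path. The module paths below are placeholders, and the exact schema should be checked against the Ray Serve docs for your version:

```yaml
# serve_config.yaml -- hypothetical two-app deployment (deploy with `serve deploy serve_config.yaml`)
applications:
  - name: text_classifier          # placeholder app name
    route_prefix: /classify
    import_path: my_project.classifier:app   # placeholder module:object
  - name: summarizer               # placeholder app name
    route_prefix: /summarize
    import_path: my_project.summarizer:app   # placeholder module:object
```

Each application is managed independently, so one can be upgraded or scaled without redeploying the others.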

Other enhancements

Ray Libraries

Ray AIR

💫Enhancements:

  • Add nightly test for alpa opt 30b inference. (#33419)
  • Add a sanity checking release test for Alpa and ray nightly. (#32995)
  • Add TorchDetectionPredictor (#32199)
  • Add artifact_location, run_name to MLFlow integration (#33641)
  • Add *path properties to Result and ResultGrid (#33410)
  • Make Preprocessor.transform lazy by default (#32872)
  • Make BatchPredictor lazy (#32510, #32796)
  • Use a configurable ray temp directory for the TempFileLock util (#32862)
  • Add collate_fn to iter_torch_batches (#32412)
  • Allow users to pass Callable[[torch.Tensor], torch.Tensor] to TorchVisionTransform (#32383)
  • Automatically move DatasetIterator torch tensors to correct device (#31753)
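Two of the items above ("Make Preprocessor.transform lazy by default", "Make BatchPredictor lazy") hinge on the eager-vs-lazy distinction. A generic stdlib illustration of that distinction, not CausalTune or Ray code:

```python
def eager_transform(rows, fn):
    """Eager: applies fn to every row immediately, materializing the result."""
    return [fn(r) for r in rows]


def lazy_transform(rows, fn):
    """Lazy: returns a generator; work happens only as rows are consumed."""
    return (fn(r) for r in rows)


calls = []

def track(x):
    calls.append(x)  # record which rows were actually processed
    return x * 2


out = lazy_transform([1, 2, 3], track)
print(calls)      # [] -- nothing computed yet
print(next(out))  # 2
print(calls)      # [1] -- only the first row was processed
```

Laziness lets downstream steps (batching, prefetching, early stopping) pull only the rows they need, instead of paying to transform the whole dataset up front.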

🔨 Fixes:

... (truncated)

Commits


You can trigger a rebase of this PR by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
> **Note**
> Automatic rebases have been disabled on this pull request as it has been open for over 30 days.
dependabot[bot] commented 1 year ago

Superseded by #270.