dreamquark-ai / tabnet

PyTorch implementation of the TabNet paper: https://arxiv.org/pdf/1908.07442.pdf
https://dreamquark-ai.github.io/tabnet/
MIT License

chore(deps): update dependency xgboost to v2 #514

Open renovate[bot] opened 1 year ago


This PR contains the following updates:

| Package | Change |
|---------|--------|
| xgboost | `0.90` -> `2.1.1` |
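Since this update jumps xgboost across two major versions (0.90 -> 2.1.1), downstream code may need to gate behavior on the installed version. A minimal sketch in plain Python; `parse_version` and `supports_ubj_models` are hypothetical helpers (not tabnet or xgboost APIs), and real projects may prefer `packaging.version.Version`, which also handles pre-releases:

```python
# Minimal version gate for a major-version jump such as 0.90 -> 2.1.1.
# parse_version is a hypothetical helper, not part of xgboost's API.

def parse_version(v: str) -> tuple:
    """Split a dotted version string into a tuple of ints for comparison."""
    return tuple(int(part) for part in v.split("."))

def supports_ubj_models(xgb_version: str) -> bool:
    """UBJSON became the default model save format in xgboost 2.1 (per the notes)."""
    return parse_version(xgb_version) >= (2, 1)

print(parse_version("2.1.1") > parse_version("0.90"))  # True: 2.1.1 is newer
```

Tuple comparison makes `(0, 90) < (2, 1, 1)` come out correctly, which a plain string comparison of `"0.90"` and `"2.1.1"` would not.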

Release Notes

dmlc/xgboost (xgboost)

### [`v2.1.1`](https://togithub.com/dmlc/xgboost/releases/tag/v2.1.1): 2.1.1 Patch Release

[Compare Source](https://togithub.com/dmlc/xgboost/compare/v2.1.0...v2.1.1)

The 2.1.1 patch release makes the following bug fixes:

- \[Dask] Disable `broadcast` in the `scatter` call so that the `predict` function won't hang ([#10632](https://togithub.com/dmlc/xgboost/issues/10632)) by [@trivialfis](https://togithub.com/trivialfis)
- \[Dask] Handle empty partitions correctly ([#10559](https://togithub.com/dmlc/xgboost/issues/10559)) by [@trivialfis](https://togithub.com/trivialfis)
- Fix federated learning for the encrypted GRPC backend ([#10503](https://togithub.com/dmlc/xgboost/issues/10503)) by [@trivialfis](https://togithub.com/trivialfis)
- Fix a race condition in the column splitter ([#10572](https://togithub.com/dmlc/xgboost/issues/10572)) by [@trivialfis](https://togithub.com/trivialfis)
- Gracefully handle cases where system files like `/sys/fs/cgroup/cpu.max` are not readable by the user ([#10623](https://togithub.com/dmlc/xgboost/issues/10623)) by [@trivialfis](https://togithub.com/trivialfis)
- Fix build and C++ tests for FreeBSD ([#10480](https://togithub.com/dmlc/xgboost/issues/10480)) by [@hcho3](https://togithub.com/hcho3), [@trivialfis](https://togithub.com/trivialfis)
- Clarify the requirement for Pandas 1.2+ ([#10476](https://togithub.com/dmlc/xgboost/issues/10476)) by [@hcho3](https://togithub.com/hcho3)
- More robust endianness detection in the R package build ([#10642](https://togithub.com/dmlc/xgboost/issues/10642)) by [@jakirkham](https://togithub.com/jakirkham), [@hcho3](https://togithub.com/hcho3)

In addition, it contains several enhancements:

- Publish JVM packages targeting Linux ARM64 ([#10487](https://togithub.com/dmlc/xgboost/issues/10487)) by [@hcho3](https://togithub.com/hcho3)
- Publish a CPU-only wheel under the name `xgboost-cpu` ([#10603](https://togithub.com/dmlc/xgboost/issues/10603)) by [@hcho3](https://togithub.com/hcho3)
- Support building with CUDA Toolkit 12.5 and the latest CCCL ([#10624](https://togithub.com/dmlc/xgboost/issues/10624), [#10633](https://togithub.com/dmlc/xgboost/issues/10633), [#10574](https://togithub.com/dmlc/xgboost/issues/10574)) by [@hcho3](https://togithub.com/hcho3), [@trivialfis](https://togithub.com/trivialfis), [@jakirkham](https://togithub.com/jakirkham)

**Full Changelog**: https://github.com/dmlc/xgboost/compare/v2.1.0...v2.1.1

##### Additional artifacts

You can verify the downloaded packages by running the following command on your Unix shell:

```sh
echo "eddbc5200b7c5210f2b8974b9d2a0328a30753416bfb81fdaf5040f4f7abb222  xgboost-2.1.1.tar.gz
3ba5a6e0c609bd5cc0a667d83c57457c06778bece50863e58c8bc1b4eb415fc6  xgboost_r_gpu_linux_2.1.1.tar.gz" | shasum -a 256 --check
```

**Experimental binary packages for R with CUDA enabled**

- xgboost_r_gpu_linux_2.1.1.tar.gz: [Download](https://s3-us-west-2.amazonaws.com/xgboost-nightly-builds/release_2.1.0/xgboost_r_gpu_linux_e36d361674cb1b8fd599da891e1e91a427bb4159.tar.gz)

**Source tarball**

- xgboost.tar.gz: [Download](https://togithub.com/dmlc/xgboost/releases/download/v2.1.1/xgboost-2.1.1.tar.gz)

### [`v2.1.0`](https://togithub.com/dmlc/xgboost/releases/tag/v2.1.0): Release 2.1.0 stable

[Compare Source](https://togithub.com/dmlc/xgboost/compare/v2.0.3...v2.1.0)

#### 2.1.0 (2024 Jun 20)

We are thrilled to announce the XGBoost 2.1 release. This note will start by summarizing some general changes and then highlight specific package updates. As we are working on a [new R interface](https://togithub.com/dmlc/xgboost/issues/9810), this release will not include the R package. We'll update the R package as soon as it's ready. Stay tuned!

##### Networking Improvements

An important piece of ongoing work for XGBoost is supporting resilience for improved scaling and federated learning on various platforms.
The existing networking library in XGBoost, adopted from the RABIT project, can no longer meet the feature demand. We've revamped the RABIT module in this release to pave the way for future development. We chose an in-house implementation over an existing library because of the module's active development status, with frequent new feature requests such as loading extra plugins for federated learning. The new implementation features:

- Both CPU and GPU communication (based on NCCL).
- A reusable tracker for both the Python package and the JVM packages. With the new release, the JVM packages no longer require Python as a runtime dependency.
- Support for federated communication patterns on both CPU and GPU.
- Support for timeouts. The high-level interface parameter is currently hard-coded to 30 minutes, which we plan to improve.
- Support for significantly more data types.
- Support for thread-based workers.
- Improved handling of worker errors, including better error messages when one of the peers dies during training.
- IPv6 support. Currently, this is only supported by the Dask interface.
- Built-in support for various collective operations such as broadcast, allgatherV, and allreduce.
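The collective operations and thread-based workers listed above can be illustrated with a toy allreduce in plain Python. This is a conceptual sketch only, not RABIT's actual API; real implementations use ring or tree algorithms rather than a single locked accumulator:

```python
import threading

def allreduce_sum(n_workers, local_values):
    """Toy allreduce: every worker contributes a vector, and every worker
    receives the elementwise sum. Real collectives (e.g. ring allreduce)
    avoid funneling everything through one shared buffer."""
    total = [0] * len(local_values[0])
    lock = threading.Lock()
    barrier = threading.Barrier(n_workers)
    results = [None] * n_workers

    def worker(rank):
        with lock:                    # reduce phase: accumulate into the shared buffer
            for i, v in enumerate(local_values[rank]):
                total[i] += v
        barrier.wait()                # wait until every worker has contributed
        results[rank] = list(total)   # "broadcast" phase: everyone reads the sum

    threads = [threading.Thread(target=worker, args=(r,)) for r in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(allreduce_sum(3, [[1, 2], [3, 4], [5, 6]]))  # [[9, 12], [9, 12], [9, 12]]
```

The barrier is what makes this an allreduce rather than a reduce: no worker reads the accumulator until all contributions have landed, so every worker observes the same final sum.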
Related PRs ([#​9597](https://togithub.com/dmlc/xgboost/issues/9597), [#​9576](https://togithub.com/dmlc/xgboost/issues/9576), [#​9523](https://togithub.com/dmlc/xgboost/issues/9523), [#​9524](https://togithub.com/dmlc/xgboost/issues/9524), [#​9593](https://togithub.com/dmlc/xgboost/issues/9593), [#​9596](https://togithub.com/dmlc/xgboost/issues/9596), [#​9661](https://togithub.com/dmlc/xgboost/issues/9661), [#​10319](https://togithub.com/dmlc/xgboost/issues/10319), [#​10152](https://togithub.com/dmlc/xgboost/issues/10152), [#​10125](https://togithub.com/dmlc/xgboost/issues/10125), [#​10332](https://togithub.com/dmlc/xgboost/issues/10332), [#​10306](https://togithub.com/dmlc/xgboost/issues/10306), [#​10208](https://togithub.com/dmlc/xgboost/issues/10208), [#​10203](https://togithub.com/dmlc/xgboost/issues/10203), [#​10199](https://togithub.com/dmlc/xgboost/issues/10199), [#​9784](https://togithub.com/dmlc/xgboost/issues/9784), [#​9777](https://togithub.com/dmlc/xgboost/issues/9777), [#​9773](https://togithub.com/dmlc/xgboost/issues/9773), [#​9772](https://togithub.com/dmlc/xgboost/issues/9772), [#​9759](https://togithub.com/dmlc/xgboost/issues/9759), [#​9745](https://togithub.com/dmlc/xgboost/issues/9745), [#​9695](https://togithub.com/dmlc/xgboost/issues/9695), [#​9738](https://togithub.com/dmlc/xgboost/issues/9738), [#​9732](https://togithub.com/dmlc/xgboost/issues/9732), [#​9726](https://togithub.com/dmlc/xgboost/issues/9726), [#​9688](https://togithub.com/dmlc/xgboost/issues/9688), [#​9681](https://togithub.com/dmlc/xgboost/issues/9681), [#​9679](https://togithub.com/dmlc/xgboost/issues/9679), [#​9659](https://togithub.com/dmlc/xgboost/issues/9659), [#​9650](https://togithub.com/dmlc/xgboost/issues/9650), [#​9644](https://togithub.com/dmlc/xgboost/issues/9644), [#​9649](https://togithub.com/dmlc/xgboost/issues/9649), [#​9917](https://togithub.com/dmlc/xgboost/issues/9917), [#​9990](https://togithub.com/dmlc/xgboost/issues/9990), 
[#10313](https://togithub.com/dmlc/xgboost/issues/10313), [#10315](https://togithub.com/dmlc/xgboost/issues/10315), [#10112](https://togithub.com/dmlc/xgboost/issues/10112), [#9531](https://togithub.com/dmlc/xgboost/issues/9531), [#10075](https://togithub.com/dmlc/xgboost/issues/10075), [#9805](https://togithub.com/dmlc/xgboost/issues/9805), [#10198](https://togithub.com/dmlc/xgboost/issues/10198), [#10414](https://togithub.com/dmlc/xgboost/issues/10414)).

The existing option of using `MPI` in RABIT is removed in this release. ([#9525](https://togithub.com/dmlc/xgboost/issues/9525))

##### NCCL is now fetched from PyPI

In previous versions, XGBoost statically linked NCCL, which significantly increased the binary size and led to hitting the PyPI repository size limit. The new release instead loads NCCL dynamically from an external source, reducing the binary size. For the PyPI package, the `nvidia-nccl-cu12` package is fetched during installation. With more downstream packages reusing NCCL, we expect user environments to become slimmer in the future as well. ([#9796](https://togithub.com/dmlc/xgboost/issues/9796), [#9804](https://togithub.com/dmlc/xgboost/issues/9804), [#10447](https://togithub.com/dmlc/xgboost/issues/10447))

##### Parts of the Python package now require glibc 2.28+

Starting from 2.1.0, the XGBoost Python package is distributed in two variants:

- `manylinux_2_28`: for recent Linux distros with glibc 2.28 or newer. This variant comes with all features enabled.
- `manylinux2014`: for older Linux distros with glibc older than 2.28. This variant does not support GPU algorithms or federated learning.

The `pip` package manager will automatically choose the correct variant for your system. Starting from **May 31, 2025**, we will stop distributing the `manylinux2014` variant and exclusively distribute the `manylinux_2_28` variant.
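Which wheel variant applies to a given machine depends on its glibc version, which can be checked from the standard library. A sketch; `glibc_at_least` and `meets_manylinux_2_28` are illustrative names, and `platform.libc_ver()` reports empty strings on systems without glibc (macOS, musl-based distros, Windows):

```python
import platform

def glibc_at_least(version, minimum=(2, 28)):
    """Compare a glibc version string like '2.31' against a minimum tuple."""
    parts = version.split(".")
    major = int(parts[0])
    minor = int(parts[1]) if len(parts) > 1 else 0
    return (major, minor) >= minimum

def meets_manylinux_2_28():
    """True if this interpreter runs on glibc 2.28+ (the full-featured wheel).
    platform.libc_ver() returns e.g. ('glibc', '2.31') on most Linux systems
    and ('', '') where no glibc is detected."""
    lib, version = platform.libc_ver()
    return lib == "glibc" and bool(version) and glibc_at_least(version)

print(meets_manylinux_2_28())
```

On a `manylinux2014`-only system this returns `False`, matching the fallback variant that pip would select.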
We made this decision so that our CI/CD pipelines won't depend on software components that have reached end-of-life (such as CentOS 7). We strongly encourage everyone to migrate to a recent Linux distro in order to use future versions of XGBoost. Note: if you want to use GPU algorithms or federated learning on an older Linux distro, you have two alternatives:

1. Upgrade to a recent Linux distro with glibc 2.28+, OR
2. Build XGBoost from source.

##### Multi-output

We continue the work on multi-target and vector leaf in this release:

- Revise the support for custom objectives with a new API, `XGBoosterTrainOneIter`. This new function supports strided matrices and CUDA inputs. In addition, custom objectives now return the correct shape for prediction. ([#9508](https://togithub.com/dmlc/xgboost/issues/9508))
- The `hinge` objective now supports multi-target regression. ([#9850](https://togithub.com/dmlc/xgboost/issues/9850))
- Fix the gain calculation with vector leaf. ([#9978](https://togithub.com/dmlc/xgboost/issues/9978))
- Support graphviz plots for multi-target trees. ([#10093](https://togithub.com/dmlc/xgboost/issues/10093))
- Fix multi-output with alternating strategies. ([#9933](https://togithub.com/dmlc/xgboost/issues/9933))

Please note that the feature is still in progress and not suitable for production use.

##### Federated Learning

Progress has been made on federated learning with improved support for column-split, including the following updates:

- Column split works for both CPU and GPU. In addition, categorical data is now compatible with column split. ([#9562](https://togithub.com/dmlc/xgboost/issues/9562), [#9609](https://togithub.com/dmlc/xgboost/issues/9609), [#9611](https://togithub.com/dmlc/xgboost/issues/9611), [#9628](https://togithub.com/dmlc/xgboost/issues/9628), [#9539](https://togithub.com/dmlc/xgboost/issues/9539), [#9578](https://togithub.com/dmlc/xgboost/issues/9578), [#9685](https://togithub.com/dmlc/xgboost/issues/9685), [#9623](https://togithub.com/dmlc/xgboost/issues/9623), [#9613](https://togithub.com/dmlc/xgboost/issues/9613), [#9511](https://togithub.com/dmlc/xgboost/issues/9511), [#9384](https://togithub.com/dmlc/xgboost/issues/9384), [#9595](https://togithub.com/dmlc/xgboost/issues/9595))
- Use UBJson to serialize split entries for column split, aiding vector-leaf with column-based data split. ([#10059](https://togithub.com/dmlc/xgboost/issues/10059), [#10055](https://togithub.com/dmlc/xgboost/issues/10055), [#9702](https://togithub.com/dmlc/xgboost/issues/9702))
- Documentation and small fixes. ([#9610](https://togithub.com/dmlc/xgboost/issues/9610), [#9552](https://togithub.com/dmlc/xgboost/issues/9552), [#9614](https://togithub.com/dmlc/xgboost/issues/9614), [#9867](https://togithub.com/dmlc/xgboost/issues/9867))

##### Ongoing work for SYCL support

XGBoost is developing a SYCL plugin for SYCL devices, starting with the `hist` tree method.
([#10216](https://togithub.com/dmlc/xgboost/issues/10216), [#9800](https://togithub.com/dmlc/xgboost/issues/9800), [#10311](https://togithub.com/dmlc/xgboost/issues/10311), [#9691](https://togithub.com/dmlc/xgboost/issues/9691), [#10269](https://togithub.com/dmlc/xgboost/issues/10269), [#10251](https://togithub.com/dmlc/xgboost/issues/10251), [#10222](https://togithub.com/dmlc/xgboost/issues/10222), [#10174](https://togithub.com/dmlc/xgboost/issues/10174), [#10080](https://togithub.com/dmlc/xgboost/issues/10080), [#10057](https://togithub.com/dmlc/xgboost/issues/10057), [#10011](https://togithub.com/dmlc/xgboost/issues/10011), [#10138](https://togithub.com/dmlc/xgboost/issues/10138), [#10119](https://togithub.com/dmlc/xgboost/issues/10119), [#10045](https://togithub.com/dmlc/xgboost/issues/10045), [#9876](https://togithub.com/dmlc/xgboost/issues/9876), [#9846](https://togithub.com/dmlc/xgboost/issues/9846), [#9682](https://togithub.com/dmlc/xgboost/issues/9682))

XGBoost now supports inference on SYCL devices, and work on adding SYCL support for training is ongoing. Looking ahead, we plan to complete training support in the coming releases and then focus on improving test coverage for SYCL, particularly for Python tests.

##### Optimizations

- Implement the column sampler in CUDA for GPU-based tree methods. This speeds up training when column sampling is employed. ([#9785](https://togithub.com/dmlc/xgboost/issues/9785))
- CMake LTO and CUDA arch. ([#9677](https://togithub.com/dmlc/xgboost/issues/9677))
- Small optimization to external memory with a thread pool. This reduces the number of threads launched during iteration. ([#9605](https://togithub.com/dmlc/xgboost/issues/9605), [#10288](https://togithub.com/dmlc/xgboost/issues/10288), [#10374](https://togithub.com/dmlc/xgboost/issues/10374))

##### Deprecation and breaking changes

Package-specific breaking changes are outlined in their respective sections.
Here we list general breaking changes in this release:

- The command line interface is deprecated due to the increasing complexity of the machine learning ecosystem. Building a machine learning model from a command shell is no longer feasible and could mislead newcomers. ([#9485](https://togithub.com/dmlc/xgboost/issues/9485))
- Universal Binary JSON is now the default format for saving models ([#9947](https://togithub.com/dmlc/xgboost/issues/9947), [#9958](https://togithub.com/dmlc/xgboost/issues/9958), [#9954](https://togithub.com/dmlc/xgboost/issues/9954), [#9955](https://togithub.com/dmlc/xgboost/issues/9955)). See [https://github.com/dmlc/xgboost/issues/7547](https://togithub.com/dmlc/xgboost/issues/7547) for more info.
- `XGBoosterGetModelRaw` is now removed after its deprecation in 1.6. ([#9617](https://togithub.com/dmlc/xgboost/issues/9617))
- Drop support for loading remote files. Users are encouraged to use dedicated libraries to fetch remote content. ([#9504](https://togithub.com/dmlc/xgboost/issues/9504))
- Remove the dense libsvm parser plugin. This plugin was never tested or documented. ([#9799](https://togithub.com/dmlc/xgboost/issues/9799))
- `XGDMatrixSetDenseInfo` and `XGDMatrixSetUIntInfo` are now deprecated. Use the array-interface-based alternatives instead.

##### Features

This section lists new features that are general to all language bindings. For package-specific changes, please visit the respective sections.

- Adopt a new XGBoost logo. ([#10270](https://togithub.com/dmlc/xgboost/issues/10270))
- Native XGBoost now supports the dataframe data format. This improvement enhances performance and reduces memory usage when working with dataframe-based structures such as pandas, arrow, and R dataframes. ([#9828](https://togithub.com/dmlc/xgboost/issues/9828), [#9616](https://togithub.com/dmlc/xgboost/issues/9616), [#9905](https://togithub.com/dmlc/xgboost/issues/9905))
- Change the default metric for gamma regression to `deviance`. ([#9757](https://togithub.com/dmlc/xgboost/issues/9757))
- Normalization for learning to rank is now optional with the introduction of the new `lambdarank_normalization` parameter. ([#10094](https://togithub.com/dmlc/xgboost/issues/10094))
- Contribution prediction with `QuantileDMatrix` on CPU. ([#10043](https://togithub.com/dmlc/xgboost/issues/10043))
- XGBoost on macOS no longer bundles the OpenMP runtime. Users can install the latest runtime from the dependency manager of their choice ([#10440](https://togithub.com/dmlc/xgboost/pull/10440)). Along with this, JVM packages on macOS are now built with OpenMP support ([#10449](https://togithub.com/dmlc/xgboost/pull/10449)).

##### Bug fixes

- Fix training with categorical data from external memory. ([#10433](https://togithub.com/dmlc/xgboost/pull/10433))
- Fix compilation with CTK-12. ([#10123](https://togithub.com/dmlc/xgboost/issues/10123))
- Fix inconsistent runtime library on Windows. ([#10404](https://togithub.com/dmlc/xgboost/issues/10404))
- Fix default metric configuration. ([#9575](https://togithub.com/dmlc/xgboost/issues/9575))
- Fix feature names with special characters. ([#9923](https://togithub.com/dmlc/xgboost/issues/9923))
- Fix global configuration for external memory training. ([#10173](https://togithub.com/dmlc/xgboost/issues/10173))
- Disable column sampling by node for the exact tree method. ([#10083](https://togithub.com/dmlc/xgboost/issues/10083))
- Fix the `FieldEntry` constructor specialization syntax error. ([#9980](https://togithub.com/dmlc/xgboost/issues/9980))
- Fix the pairwise objective with the NDCG metric along with custom gain. ([#10100](https://togithub.com/dmlc/xgboost/issues/10100))
- Fix the default value for `lambdarank_pair_method`. ([#10098](https://togithub.com/dmlc/xgboost/issues/10098))
- Fix UBJSON with boolean values. No existing code is affected by this fix. ([#10054](https://togithub.com/dmlc/xgboost/issues/10054))
- Be more lenient on floating point errors for AUC. This prevents the AUC > 1.0 error. ([#10264](https://togithub.com/dmlc/xgboost/issues/10264))
- Check the support status for categorical features. This prevents `gblinear` from treating categorical features as numerical. ([#9946](https://togithub.com/dmlc/xgboost/issues/9946))

##### Document

Here is a list of documentation changes not specific to any XGBoost package.

- A new coarse map of XGBoost features to assist development. ([#10310](https://togithub.com/dmlc/xgboost/issues/10310))
- New language binding consistency guideline. ([#9755](https://togithub.com/dmlc/xgboost/issues/9755), [#9866](https://togithub.com/dmlc/xgboost/issues/9866))
- Fixes, cleanups, small updates. ([#9501](https://togithub.com/dmlc/xgboost/issues/9501), [#9988](https://togithub.com/dmlc/xgboost/issues/9988), [#10023](https://togithub.com/dmlc/xgboost/issues/10023), [#10013](https://togithub.com/dmlc/xgboost/issues/10013), [#10143](https://togithub.com/dmlc/xgboost/issues/10143), [#9904](https://togithub.com/dmlc/xgboost/issues/9904), [#10179](https://togithub.com/dmlc/xgboost/issues/10179), [#9781](https://togithub.com/dmlc/xgboost/issues/9781), [#10340](https://togithub.com/dmlc/xgboost/issues/10340), [#9658](https://togithub.com/dmlc/xgboost/issues/9658), [#10182](https://togithub.com/dmlc/xgboost/issues/10182), [#9822](https://togithub.com/dmlc/xgboost/issues/9822))
- Update the document for parameters. ([#9900](https://togithub.com/dmlc/xgboost/issues/9900))
- Brief introduction to `base_score`. ([#9882](https://togithub.com/dmlc/xgboost/issues/9882))
- Mention data consistency for categorical features.
([#9678](https://togithub.com/dmlc/xgboost/issues/9678))

##### Python package

**Dask**

Other than the changes in networking, we have some optimizations and document updates in Dask:

- Filter models on workers instead of clients; this prevents an OOM error on the client machine. ([#9518](https://togithub.com/dmlc/xgboost/issues/9518))
- Users are now encouraged to use `from xgboost import dask` instead of `import xgboost.dask` to avoid pulling in unnecessary dependencies for non-Dask users. ([#9742](https://togithub.com/dmlc/xgboost/issues/9742))
- Add seed to demos. ([#10009](https://togithub.com/dmlc/xgboost/issues/10009))
- New document for using Dask XGBoost with k8s. ([#10271](https://togithub.com/dmlc/xgboost/issues/10271))
- Workaround a potentially unaligned pointer from an empty partition. ([#10418](https://togithub.com/dmlc/xgboost/issues/10418))
- Workaround a race condition in the latest Dask. ([#10419](https://togithub.com/dmlc/xgboost/issues/10419))
- Add typing to Dask demos. ([#10207](https://togithub.com/dmlc/xgboost/issues/10207))

**PySpark**

PySpark has several new features along with some small fixes:

- Support stage-level scheduling for training on various platforms, including yarn/k8s. ([#9519](https://togithub.com/dmlc/xgboost/issues/9519), [#10209](https://togithub.com/dmlc/xgboost/issues/10209), [#9786](https://togithub.com/dmlc/xgboost/issues/9786), [#9727](https://togithub.com/dmlc/xgboost/issues/9727))
- Support GPU-based transform methods. ([#9542](https://togithub.com/dmlc/xgboost/issues/9542))
- Avoid expensive repartitioning when appropriate. ([#10408](https://togithub.com/dmlc/xgboost/issues/10408))
- Refactor the logging and the GPU code path. ([#10077](https://togithub.com/dmlc/xgboost/issues/10077), 9724)
- Sort workers by task ID. This helps the PySpark interface obtain deterministic results. ([#10220](https://togithub.com/dmlc/xgboost/issues/10220))
- Fix PySpark with `verbosity=3`. ([#10172](https://togithub.com/dmlc/xgboost/issues/10172))
- Fix the Spark estimator doc. ([#10066](https://togithub.com/dmlc/xgboost/issues/10066))
- Rework transform for improved code reuse. ([#9292](https://togithub.com/dmlc/xgboost/issues/9292))

**Breaking changes**

For the Python package, `eval_metric`, `early_stopping_rounds`, and `callbacks` are now removed from the `fit` method in the sklearn interface. They were deprecated in 1.6. Use the parameters with the same names in the constructors instead. ([#9986](https://togithub.com/dmlc/xgboost/issues/9986))

**Features**

Following is a list of new features in the Python package:

- Support sample weight in the sklearn custom objective. ([#10050](https://togithub.com/dmlc/xgboost/issues/10050))
- New supported data types, including `cudf.pandas` ([#9602](https://togithub.com/dmlc/xgboost/issues/9602)), `torch.Tensor` ([#9971](https://togithub.com/dmlc/xgboost/issues/9971)), and more scipy types ([#9881](https://togithub.com/dmlc/xgboost/issues/9881)).
- Support pandas 2.2 and numpy 2.0. ([#10266](https://togithub.com/dmlc/xgboost/issues/10266), [#9557](https://togithub.com/dmlc/xgboost/issues/9557), [#10252](https://togithub.com/dmlc/xgboost/issues/10252), [#10175](https://togithub.com/dmlc/xgboost/issues/10175))
- Support the latest RAPIDS, including rmm. ([#10435](https://togithub.com/dmlc/xgboost/issues/10435))
- Improved data cache option in the data iterator. ([#10286](https://togithub.com/dmlc/xgboost/issues/10286))
- Accept numpy generators as `random_state`. ([#9743](https://togithub.com/dmlc/xgboost/issues/9743))
- Support returning the base score as the intercept in the sklearn interface. ([#9486](https://togithub.com/dmlc/xgboost/issues/9486))
- Support arrow through pandas extension types. This is built on top of the new DataFrame API in XGBoost. See the general features for more info. ([#9612](https://togithub.com/dmlc/xgboost/issues/9612))
- Handle numpy integers in model slicing and prediction. ([#10007](https://togithub.com/dmlc/xgboost/issues/10007))
- Improved sklearn tags support. ([#10230](https://togithub.com/dmlc/xgboost/issues/10230))
- The base image for building Linux binary wheels is updated to rockylinux8. ([#10399](https://togithub.com/dmlc/xgboost/issues/10399))
- Improved handling for float128. ([#10322](https://togithub.com/dmlc/xgboost/issues/10322))

**Fixes**

- Fix `DMatrix` with `None` input. ([#10052](https://togithub.com/dmlc/xgboost/issues/10052))
- Fix the native library discovery logic. ([#9712](https://togithub.com/dmlc/xgboost/issues/9712), [#9860](https://togithub.com/dmlc/xgboost/issues/9860))
- Fix using categorical data with the score function for the ranker. ([#9753](https://togithub.com/dmlc/xgboost/issues/9753))

**Document**

- Clarify the effect of `enable_categorical`. ([#9877](https://togithub.com/dmlc/xgboost/issues/9877), [#9884](https://togithub.com/dmlc/xgboost/issues/9884))
- Update the Python introduction. ([#10033](https://togithub.com/dmlc/xgboost/issues/10033))
- Fixes. ([#10058](https://togithub.com/dmlc/xgboost/issues/10058), [#9991](https://togithub.com/dmlc/xgboost/issues/9991), [#9573](https://togithub.com/dmlc/xgboost/issues/9573))

##### JVM package

Here is a list of JVM-specific changes. Like the PySpark package, the JVM package also gains stage-level scheduling.

**Features and related documents**

- \[breaking] Remove the rabit checkpoint.
([#9599](https://togithub.com/dmlc/xgboost/issues/9599))
- Support stage-level scheduling. ([#9775](https://togithub.com/dmlc/xgboost/issues/9775))
- Allow the JVM package to access the inplace predict method. ([#9167](https://togithub.com/dmlc/xgboost/issues/9167))
- Support JDK 17 for tests. ([#9959](https://togithub.com/dmlc/xgboost/issues/9959))
- Various dependency updates. ([#10211](https://togithub.com/dmlc/xgboost/issues/10211), [#10210](https://togithub.com/dmlc/xgboost/issues/10210), [#10217](https://togithub.com/dmlc/xgboost/issues/10217), [#10156](https://togithub.com/dmlc/xgboost/issues/10156), [#10070](https://togithub.com/dmlc/xgboost/issues/10070), [#9809](https://togithub.com/dmlc/xgboost/issues/9809), [#9517](https://togithub.com/dmlc/xgboost/issues/9517), [#10235](https://togithub.com/dmlc/xgboost/issues/10235), [#10276](https://togithub.com/dmlc/xgboost/issues/10276), [#9331](https://togithub.com/dmlc/xgboost/issues/9331), [#10335](https://togithub.com/dmlc/xgboost/issues/10335), [#10309](https://togithub.com/dmlc/xgboost/issues/10309), [#10240](https://togithub.com/dmlc/xgboost/issues/10240), [#10244](https://togithub.com/dmlc/xgboost/issues/10244), [#10260](https://togithub.com/dmlc/xgboost/issues/10260), [#9489](https://togithub.com/dmlc/xgboost/issues/9489), [#9326](https://togithub.com/dmlc/xgboost/issues/9326), [#10294](https://togithub.com/dmlc/xgboost/issues/10294), [#10197](https://togithub.com/dmlc/xgboost/issues/10197), [#10196](https://togithub.com/dmlc/xgboost/issues/10196), [#10193](https://togithub.com/dmlc/xgboost/issues/10193), [#10202](https://togithub.com/dmlc/xgboost/issues/10202), [#10191](https://togithub.com/dmlc/xgboost/issues/10191), [#10188](https://togithub.com/dmlc/xgboost/issues/10188), [#9328](https://togithub.com/dmlc/xgboost/issues/9328), [#9311](https://togithub.com/dmlc/xgboost/issues/9311), [#9951](https://togithub.com/dmlc/xgboost/issues/9951), [#10151](https://togithub.com/dmlc/xgboost/issues/10151), [#9827](https://togithub.com/dmlc/xgboost/issues/9827), [#9820](https://togithub.com/dmlc/xgboost/issues/9820), [#10253](https://togithub.com/dmlc/xgboost/issues/10253))
- Updates and fixes for documents. ([#9752](https://togithub.com/dmlc/xgboost/issues/9752), [#10385](https://togithub.com/dmlc/xgboost/issues/10385))

**Bug Fixes**

- Fix a memory leak in error handling. ([#10307](https://togithub.com/dmlc/xgboost/issues/10307))
- Fix group col for GPU packages. ([#10254](https://togithub.com/dmlc/xgboost/issues/10254))

##### Additional artifacts

You can verify the downloaded packages by running the following command on your Unix shell:

```sh
echo "28bec8e821b1fefcea722d96add66024adba399063f723bc5c815f7af4a5f5e4  xgboost-2.1.0.tar.gz
60c715d8c97ef710185469b27f30303b6efa655600d035963f96e6acf65f4dac  xgboost_r_gpu_linux_2.1.0.tar.gz" | shasum -a 256 --check
```

**Experimental binary packages for R with CUDA enabled**

- xgboost_r_gpu_linux_2.1.0.tar.gz: [Download](https://s3-us-west-2.amazonaws.com/xgboost-nightly-builds/release_2.1.0/xgboost_r_gpu_linux_213ebf7796b757448dfa2cfba532074696fa1524.tar.gz)

**Source tarball**

- xgboost.tar.gz: [Download](https://togithub.com/dmlc/xgboost/releases/download/v2.1.0/xgboost-2.1.0.tar.gz)

### [`v2.0.3`](https://togithub.com/dmlc/xgboost/releases/tag/v2.0.3): 2.0.3 Patch Release

[Compare Source](https://togithub.com/dmlc/xgboost/compare/v2.0.2...v2.0.3)

The 2.0.3 patch release makes the following bug fixes:

- \[backport]\[sklearn] Fix loading model attributes. ([#9808](https://togithub.com/dmlc/xgboost/issues/9808)) by [@trivialfis](https://togithub.com/trivialfis) in [https://github.com/dmlc/xgboost/pull/9880](https://togithub.com/dmlc/xgboost/pull/9880)
- \[backport]\[py] Use the first found native library.
([#​9860](https://togithub.com/dmlc/xgboost/issues/9860)) by [@​trivialfis](https://togithub.com/trivialfis) in [https://github.com/dmlc/xgboost/pull/9879](https://togithub.com/dmlc/xgboost/pull/9879) - \[backport] \[CI] Upload libxgboost4j.dylib (M1) to S3 bucket by [@​hcho3](https://togithub.com/hcho3) in [https://github.com/dmlc/xgboost/pull/9887](https://togithub.com/dmlc/xgboost/pull/9887) - \[jvm-packages] Fix POM for xgboost-jvm metapackage by [@​hcho3](https://togithub.com/hcho3) in [https://github.com/dmlc/xgboost/pull/9893](https://togithub.com/dmlc/xgboost/pull/9893) [https://github.com/dmlc/xgboost/pull/9897](https://togithub.com/dmlc/xgboost/pull/9897) **Full Changelog**: https://github.com/dmlc/xgboost/compare/v2.0.2...v2.0.3 ##### Additional artifacts: You can verify the downloaded packages by running the following command on your Unix shell: ```sh echo " " | shasum -a 256 --check ``` 7c4bd1cf6162d335fd20a8168a54dd11508342f82fbf381a80c02ac57be0bce4 xgboost-2.0.3.tar.gz d0c3499504133a8ea0043da2974c51cc71aae792f0719080bc227d7add8fb881 xgboost_r_gpu_win64_2.0.3.tar.gz ee47da5b21231965b1f054d191a5418543377f4ba0d0615a593a6f99d1832ca1 xgboost_r_gpu_linux_2.0.3.tar.gz **Experimental binary packages for R with CUDA enabled** - xgboost_r_gpu_linux\_2.0.3.tar.gz: [Download](https://s3-us-west-2.amazonaws.com/xgboost-nightly-builds/release\_2.0.0/xgboost_r_gpu_linux\_82d846bbeb83c652a0b1dff0e3519e67569c4a3d.tar.gz) - xgboost_r_gpu_win64\_2.0.3.tar.gz: [Download](https://s3-us-west-2.amazonaws.com/xgboost-nightly-builds/release\_2.0.0/xgboost_r_gpu_win64\_82d846bbeb83c652a0b1dff0e3519e67569c4a3d.tar.gz) ### [`v2.0.2`](https://togithub.com/dmlc/xgboost/releases/tag/v2.0.2): 2.0.2 Patch Release [Compare Source](https://togithub.com/dmlc/xgboost/compare/v2.0.1...v2.0.2) The 2.0.2 patch releases make the following bug fixes: - \[jvm-packages] Add Scala version suffix to xgboost-jvm package ([#​9776](https://togithub.com/dmlc/xgboost/issues/9776)). 
The JVM packages had incorrect metadata, and the 2.0.2 patch version fixes the metadata. - \[backport] Fix using categorical data with the ranker. ([#​9753](https://togithub.com/dmlc/xgboost/issues/9753)) ### [`v2.0.1`](https://togithub.com/dmlc/xgboost/releases/tag/v2.0.1): 2.0.1 Patch Release [Compare Source](https://togithub.com/dmlc/xgboost/compare/v2.0.0...v2.0.1) This is a patch release for bug fixes. #### Bug fixes - Support pandas 2.1.0. by [@​trivialfis](https://togithub.com/trivialfis) in [https://github.com/dmlc/xgboost/pull/9655](https://togithub.com/dmlc/xgboost/pull/9655) - Fix default metric configuration. by [@​trivialfis](https://togithub.com/trivialfis) in [https://github.com/dmlc/xgboost/pull/9590](https://togithub.com/dmlc/xgboost/pull/9590) - \[R] Fix method name. by [@​trivialfis](https://togithub.com/trivialfis) in [https://github.com/dmlc/xgboost/pull/9592](https://togithub.com/dmlc/xgboost/pull/9592) - Use array interface for testing NumPy arrays. by [@​trivialfis](https://togithub.com/trivialfis) in [https://github.com/dmlc/xgboost/pull/9635](https://togithub.com/dmlc/xgboost/pull/9635) - Workaround Apple clang issue. by [@​trivialfis](https://togithub.com/trivialfis) in [https://github.com/dmlc/xgboost/pull/9636](https://togithub.com/dmlc/xgboost/pull/9636) - Add support for cgroupv2. 
by [@​trivialfis](https://togithub.com/trivialfis) in [https://github.com/dmlc/xgboost/pull/9656](https://togithub.com/dmlc/xgboost/pull/9656)
- Fix build for GCC 8.x by [@​hcho3](https://togithub.com/hcho3) in [https://github.com/dmlc/xgboost/pull/9670](https://togithub.com/dmlc/xgboost/pull/9670)
- \[pyspark] Support stage-level scheduling by [@​wbo4958](https://togithub.com/wbo4958) in [https://github.com/dmlc/xgboost/pull/9686](https://togithub.com/dmlc/xgboost/pull/9686)
- Fix build for AppleClang 11 by [@​hcho3](https://togithub.com/hcho3) in [https://github.com/dmlc/xgboost/pull/9684](https://togithub.com/dmlc/xgboost/pull/9684)
- Fix libpath logic for Windows by [@​hcho3](https://togithub.com/hcho3) in [https://github.com/dmlc/xgboost/pull/9687](https://togithub.com/dmlc/xgboost/pull/9687), [https://github.com/dmlc/xgboost/pull/9711](https://togithub.com/dmlc/xgboost/pull/9711)
- Remove hard dependency on libjvm by [@​hcho3](https://togithub.com/hcho3) in [https://github.com/dmlc/xgboost/pull/9705](https://togithub.com/dmlc/xgboost/pull/9705)

In addition, this is the first release where the JVM package is distributed with native support for Apple Silicon.
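The release artifacts below ship with SHA-256 sums, verified via `shasum -a 256 --check`. The same check can be mirrored in Python; a minimal sketch (`sha256_of` and `verify` are illustrative helpers, and the file name and digest would be the ones from the release notes):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, expected: str) -> bool:
    """True if the file's digest matches, like `shasum -a 256 --check`."""
    return sha256_of(path) == expected.lower()
```

For a release tarball, `verify("xgboost-2.0.3.tar.gz", "7c4bd1cf...")` would play the role of the `shasum` pipeline shown below.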
##### Additional artifacts:

You can verify the downloaded packages by running the following command on your Unix shell:

```sh
echo " " | shasum -a 256 --check
```

529e9d0f88c2a7abae833f05b7d1e7e7ce01de20481ea60f6ebb6eb7fc96ba69 xgboost.tar.gz
25342c91e7cda98b1362b70282b286c2e4f3e996b518fb590c1303f53f39f188 xgboost_r_gpu_win64_2.0.1.tar.gz
3d8cde1160ab135c393b8092ce0475709dff318024022b735a253d968f9711b3 xgboost_r_gpu_linux_2.0.1.tar.gz

**Experimental binary packages for R with CUDA enabled**

- xgboost_r_gpu_linux\_2.0.1.tar.gz: [Download](https://s3-us-west-2.amazonaws.com/xgboost-nightly-builds/release\_2.0.0/xgboost_r_gpu_linux_a408254c2f0c4a39a04430f9894579038414cb31.tar.gz)
- xgboost_r_gpu_win64\_2.0.1.tar.gz: [Download](https://s3-us-west-2.amazonaws.com/xgboost-nightly-builds/release\_2.0.0/xgboost_r_gpu_win64\_a408254c2f0c4a39a04430f9894579038414cb31.tar.gz)

**Source tarball**

- xgboost.tar.gz: [Download](https://togithub.com/dmlc/xgboost/releases/download/v2.0.1/xgboost-2.0.1.tar.gz)

### [`v2.0.0`](https://togithub.com/dmlc/xgboost/releases/tag/v2.0.0): Release 2.0.0 stable

[Compare Source](https://togithub.com/dmlc/xgboost/compare/v1.7.6...v2.0.0)

##### 2.0.0 (2023 Sep 12)

We are excited to announce the release of XGBoost 2.0. This note will begin by covering some overall changes and then highlight specific updates to the package.

##### Initial work on multi-target trees with vector-leaf outputs

We have been working on vector-leaf tree models for multi-target regression, multi-label classification, and multi-class classification in version 2.0. Previously, XGBoost would build a separate model for each target. However, with this new feature that's still being developed, XGBoost can build one tree for all targets. The feature has multiple benefits and trade-offs compared to the existing approach. It can help prevent overfitting, produce smaller models, and build trees that consider the correlation between targets.
In addition, users can combine vector leaf and scalar leaf trees during a training session using a callback. Please note that the feature is still a work in progress, and many parts are not yet available. See [#​9043](https://togithub.com/dmlc/xgboost/issues/9043) for the current status.

Related PRs: ([#​8538](https://togithub.com/dmlc/xgboost/issues/8538), [#​8697](https://togithub.com/dmlc/xgboost/issues/8697), [#​8902](https://togithub.com/dmlc/xgboost/issues/8902), [#​8884](https://togithub.com/dmlc/xgboost/issues/8884), [#​8895](https://togithub.com/dmlc/xgboost/issues/8895), [#​8898](https://togithub.com/dmlc/xgboost/issues/8898), [#​8612](https://togithub.com/dmlc/xgboost/issues/8612), [#​8652](https://togithub.com/dmlc/xgboost/issues/8652), [#​8698](https://togithub.com/dmlc/xgboost/issues/8698), [#​8908](https://togithub.com/dmlc/xgboost/issues/8908), [#​8928](https://togithub.com/dmlc/xgboost/issues/8928), [#​8968](https://togithub.com/dmlc/xgboost/issues/8968), [#​8616](https://togithub.com/dmlc/xgboost/issues/8616), [#​8922](https://togithub.com/dmlc/xgboost/issues/8922), [#​8890](https://togithub.com/dmlc/xgboost/issues/8890), [#​8872](https://togithub.com/dmlc/xgboost/issues/8872), [#​8889](https://togithub.com/dmlc/xgboost/issues/8889), [#​9509](https://togithub.com/dmlc/xgboost/issues/9509))

Please note that only the `hist` (default) tree method on CPU can be used for building vector leaf trees at the moment.

##### New `device` parameter

A new `device` parameter replaces the existing `gpu_id`, `gpu_hist`, `gpu_predictor`, `cpu_predictor`, `gpu_coord_descent`, and the PySpark-specific parameter `use_gpu`. Going forward, users need only the `device` parameter to select which device to run on, along with the ordinal of the device. For more information, please see our document page (https://xgboost.readthedocs.io/en/stable/parameter.html#general-parameters).
For example, with `device="cuda", tree_method="hist"`, XGBoost will run the `hist` tree method on GPU. ([#​9363](https://togithub.com/dmlc/xgboost/issues/9363), [#​8528](https://togithub.com/dmlc/xgboost/issues/8528), [#​8604](https://togithub.com/dmlc/xgboost/issues/8604), [#​9354](https://togithub.com/dmlc/xgboost/issues/9354), [#​9274](https://togithub.com/dmlc/xgboost/issues/9274), [#​9243](https://togithub.com/dmlc/xgboost/issues/9243), [#​8896](https://togithub.com/dmlc/xgboost/issues/8896), [#​9129](https://togithub.com/dmlc/xgboost/issues/9129), [#​9362](https://togithub.com/dmlc/xgboost/issues/9362), [#​9402](https://togithub.com/dmlc/xgboost/issues/9402), [#​9385](https://togithub.com/dmlc/xgboost/issues/9385), [#​9398](https://togithub.com/dmlc/xgboost/issues/9398), [#​9390](https://togithub.com/dmlc/xgboost/issues/9390), [#​9386](https://togithub.com/dmlc/xgboost/issues/9386), [#​9412](https://togithub.com/dmlc/xgboost/issues/9412), [#​9507](https://togithub.com/dmlc/xgboost/issues/9507), [#​9536](https://togithub.com/dmlc/xgboost/issues/9536)). The old behavior of `gpu_hist` is preserved but deprecated. In addition, the `predictor` parameter is removed.

##### `hist` is now the default tree method

Starting from 2.0, the `hist` tree method is the default. In previous versions, XGBoost chose `approx` or `exact` depending on the input data and training environment. The new default can help XGBoost train models more efficiently and consistently. ([#​9320](https://togithub.com/dmlc/xgboost/issues/9320), [#​9353](https://togithub.com/dmlc/xgboost/issues/9353))

##### GPU-based approx tree method

There's initial support for using the `approx` tree method on GPU. The performance of `approx` is not yet well optimized, but it is feature complete except for the JVM packages. It can be accessed through the parameter combination `device="cuda", tree_method="approx"`.
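The migration from the old device-related parameters to the single `device` parameter can be sketched as a small helper. `migrate_params` below is a hypothetical illustration based on the mapping described above, not a function shipped with XGBoost:

```python
def migrate_params(params: dict) -> dict:
    """Hypothetical helper: translate pre-2.0 device-related parameters
    (`gpu_id`, `use_gpu`, `tree_method="gpu_hist"`, `predictor`) into the
    single `device` parameter introduced in XGBoost 2.0."""
    out = dict(params)
    gpu_id = out.pop("gpu_id", None)
    use_gpu = out.pop("use_gpu", False) or gpu_id is not None
    # `gpu_hist` becomes `tree_method="hist"` on a CUDA device.
    if out.get("tree_method") == "gpu_hist":
        out["tree_method"] = "hist"
        use_gpu = True
    out.pop("predictor", None)  # the `predictor` parameter was removed in 2.0
    if use_gpu:
        out["device"] = "cuda" if gpu_id is None else f"cuda:{gpu_id}"
    else:
        out["device"] = "cpu"
    return out
```

For instance, `{"tree_method": "gpu_hist", "gpu_id": 0}` would map to `{"tree_method": "hist", "device": "cuda:0"}`.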
([#​9414](https://togithub.com/dmlc/xgboost/issues/9414), [#​9399](https://togithub.com/dmlc/xgboost/issues/9399), [#​9478](https://togithub.com/dmlc/xgboost/issues/9478)). Please note that the Scala-based Spark interface is not yet supported.

##### Optimize and bound the size of the histogram on CPU, to control memory footprint

XGBoost has a new parameter `max_cached_hist_node` for users to limit the CPU cache size for histograms. It can help prevent XGBoost from caching histograms too aggressively. Without the cache, performance is likely to decrease. However, the size of the cache grows exponentially with the depth of the tree, so the limit can be crucial when growing deep trees. In most cases, users need not configure this parameter, as it does not affect the model's accuracy. ([#​9455](https://togithub.com/dmlc/xgboost/issues/9455), [#​9441](https://togithub.com/dmlc/xgboost/issues/9441), [#​9440](https://togithub.com/dmlc/xgboost/issues/9440), [#​9427](https://togithub.com/dmlc/xgboost/issues/9427), [#​9400](https://togithub.com/dmlc/xgboost/issues/9400))

Along with the cache limit, XGBoost also reduces the memory usage of the `hist` and `approx` tree methods on distributed systems by cutting the size of the cache by half. ([#​9433](https://togithub.com/dmlc/xgboost/issues/9433))

##### Improved external memory support

There is some exciting development around external memory support in XGBoost. It's still an experimental feature, but the performance has been significantly improved with the default `hist` tree method. We replaced the old file IO logic with memory map. In addition to performance, we have reduced CPU memory usage and added extensive documentation. Beginning from 2.0.0, we encourage users to try it with the `hist` tree method when the memory saving by `QuantileDMatrix` is not sufficient.
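The exponential growth behind `max_cached_hist_node` is easy to see: a balanced binary tree has `2**depth` nodes at its deepest level, each of which may hold its own gradient histogram. A small illustration (the bin, feature, and byte counts are illustrative assumptions, not XGBoost's actual accounting):

```python
def hist_nodes(depth: int) -> int:
    """Nodes at the deepest level of a balanced binary tree."""
    return 2 ** depth

def cache_bytes(depth: int, n_bins: int = 256, n_features: int = 100,
                bytes_per_entry: int = 16) -> int:
    """Illustrative histogram-cache size if every node at `depth`
    keeps an (n_bins x n_features) gradient histogram."""
    return hist_nodes(depth) * n_bins * n_features * bytes_per_entry

# Each extra level doubles the cache: depth 6 caches 64 node
# histograms, depth 12 caches 4096 -- 64x more memory.
```

This is why a node-count cap matters mainly when growing deep trees.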
([#​9361](https://togithub.com/dmlc/xgboost/issues/9361), [#​9317](https://togithub.com/dmlc/xgboost/issues/9317), [#​9282](https://togithub.com/dmlc/xgboost/issues/9282), [#​9315](https://togithub.com/dmlc/xgboost/issues/9315), [#​8457](https://togithub.com/dmlc/xgboost/issues/8457))

##### Learning to rank

We created a brand-new implementation for the learning-to-rank task. With the latest version, XGBoost gained a set of new features for the ranking task, including:

- A new parameter `lambdarank_pair_method` for choosing the pair construction strategy.
- A new parameter `lambdarank_num_pair_per_sample` for controlling the number of samples for each group.
- An experimental implementation of unbiased learning-to-rank, which can be accessed using the `lambdarank_unbiased` parameter.
- Support for custom gain functions with `NDCG` using the `ndcg_exp_gain` parameter.
- Deterministic GPU computation for all objectives and metrics.
- `NDCG` is now the default objective function.
- Improved performance of metrics using caches.
- Support for scikit-learn utilities for `XGBRanker`.
- Extensive documentation on how learning-to-rank works with XGBoost.

For more information, please see the [tutorial](https://xgboost.readthedocs.io/en/latest/tutorials/learning_to_rank.html).
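The `ndcg_exp_gain` parameter mentioned above switches the gain used in NDCG between the exponential form `2**rel - 1` and the plain relevance label. A self-contained sketch of the metric itself (this is an illustration of the formula, not XGBoost's implementation):

```python
import math

def dcg(rels, k=None, exp_gain=True):
    """Discounted cumulative gain at cutoff k. `exp_gain` mirrors the
    choice that `ndcg_exp_gain` controls: gain = 2**rel - 1 vs. rel."""
    rels = rels[:k]
    gain = (lambda r: (2 ** r) - 1) if exp_gain else (lambda r: r)
    # Position i (0-based) is discounted by log2(i + 2).
    return sum(gain(r) / math.log2(i + 2) for i, r in enumerate(rels))

def ndcg(rels, k=None, exp_gain=True):
    """NDCG: DCG normalized by the DCG of the ideal (sorted) ranking."""
    ideal = dcg(sorted(rels, reverse=True), k, exp_gain)
    return dcg(rels, k, exp_gain) / ideal if ideal > 0 else 0.0
```

A perfectly ordered list scores 1.0; placing high-relevance items lower drops the score toward 0.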
Related PRs: ([#​8771](https://togithub.com/dmlc/xgboost/issues/8771), [#​8692](https://togithub.com/dmlc/xgboost/issues/8692), [#​8783](https://togithub.com/dmlc/xgboost/issues/8783), [#​8789](https://togithub.com/dmlc/xgboost/issues/8789), [#​8790](https://togithub.com/dmlc/xgboost/issues/8790), [#​8859](https://togithub.com/dmlc/xgboost/issues/8859), [#​8887](https://togithub.com/dmlc/xgboost/issues/8887), [#​8893](https://togithub.com/dmlc/xgboost/issues/8893), [#​8906](https://togithub.com/dmlc/xgboost/issues/8906), [#​8931](https://togithub.com/dmlc/xgboost/issues/8931), [#​9075](https://togithub.com/dmlc/xgboost/issues/9075), [#​9015](https://togithub.com/dmlc/xgboost/issues/9015), [#​9381](https://togithub.com/dmlc/xgboost/issues/9381), [#​9336](https://togithub.com/dmlc/xgboost/issues/9336), [#​8822](https://togithub.com/dmlc/xgboost/issues/8822), [#​9222](https://togithub.com/dmlc/xgboost/issues/9222), [#​8984](https://togithub.com/dmlc/xgboost/issues/8984), [#​8785](https://togithub.com/dmlc/xgboost/issues/8785), [#​8786](https://togithub.com/dmlc/xgboost/issues/8786), [#​8768](https://togithub.com/dmlc/xgboost/issues/8768)) ##### Automatically estimated intercept In the previous version, `base_score` was a constant that could be set as a training parameter. In the new version, XGBoost can automatically estimate this parameter based on input labels for optimal accuracy. ([#​8539](https://togithub.com/dmlc/xgboost/issues/8539), [#​8498](https://togithub.com/dmlc/xgboost/issues/8498), [#​8272](https://togithub.com/dmlc/xgboost/issues/8272), [#​8793](https://togithub.com/dmlc/xgboost/issues/8793), [#​8607](https://togithub.com/dmlc/xgboost/issues/8607)) ##### Quantile regression The XGBoost algorithm now supports quantile regression, which involves minimizing the quantile loss (also called "pinball loss"). Furthermore, XGBoost allows for training with multiple target quantiles simultaneously with one tree per quantile. 
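The quantile regression support described above minimizes the pinball loss. A minimal sketch of that loss (an illustration of the formula, not XGBoost's objective code):

```python
def pinball_loss(y_true, y_pred, alpha):
    """Mean pinball (quantile) loss at quantile level `alpha`.
    Under-predictions are weighted by alpha, over-predictions by 1 - alpha,
    so minimizing it drives predictions toward the alpha-quantile."""
    losses = []
    for y, q in zip(y_true, y_pred):
        diff = y - q
        losses.append(alpha * diff if diff >= 0 else (alpha - 1) * diff)
    return sum(losses) / len(losses)
```

At `alpha=0.5` this is half the mean absolute error; at `alpha=0.9` under-prediction costs nine times more than over-prediction, which is what pushes the fit toward the 90th percentile.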
([#​8775](https://togithub.com/dmlc/xgboost/issues/8775), [#​8761](https://togithub.com/dmlc/xgboost/issues/8761), [#​8760](https://togithub.com/dmlc/xgboost/issues/8760), [#​8758](https://togithub.com/dmlc/xgboost/issues/8758), [#​8750](https://togithub.com/dmlc/xgboost/issues/8750))

##### L1 and quantile regression now support learning rate

Both objectives use adaptive trees due to the lack of proper Hessian values. In the new version, XGBoost can scale the leaf value with the learning rate accordingly. ([#​8866](https://togithub.com/dmlc/xgboost/issues/8866))

##### Export cut value

Using the Python or the C package, users can export the quantile values (not to be confused with quantile regression) used for the `hist` tree method. ([#​9356](https://togithub.com/dmlc/xgboost/issues/9356))

##### Column-based split and federated learning

We made progress on column-based split for federated learning. In 2.0, the `approx`, `hist`, and `hist` with vector leaf tree methods can work with column-based data split, along with support for vertical federated learning. Work on GPU support is still ongoing; stay tuned.
([#​8576](https://togithub.com/dmlc/xgboost/issues/8576), [#​8468](https://togithub.com/dmlc/xgboost/issues/8468), [#​8442](https://togithub.com/dmlc/xgboost/issues/8442), [#​8847](https://togithub.com/dmlc/xgboost/issues/8847), [#​8811](https://togithub.com/dmlc/xgboost/issues/8811), [#​8985](https://togithub.com/dmlc/xgboost/issues/8985), [#​8623](https://togithub.com/dmlc/xgboost/issues/8623), [#​8568](https://togithub.com/dmlc/xgboost/issues/8568), [#​8828](https://togithub.com/dmlc/xgboost/issues/8828), [#​8932](https://togithub.com/dmlc/xgboost/issues/8932), [#​9081](https://togithub.com/dmlc/xgboost/issues/9081), [#​9102](https://togithub.com/dmlc/xgboost/issues/9102), [#​9103](https://togithub.com/dmlc/xgboost/issues/9103), [#​9124](https://togithub.com/dmlc/xgboost/issues/9124), [#​9120](https://togithub.com/dmlc/xgboost/issues/9120), [#​9367](https://togithub.com/dmlc/xgboost/issues/9367), [#​9370](https://togithub.com/dmlc/xgboost/issues/9370), [#​9343](https://togithub.com/dmlc/xgboost/issues/9343), [#​9171](https://togithub.com/dmlc/xgboost/issues/9171), [#​9346](https://togithub.com/dmlc/xgboost/issues/9346), [#​9270](https://togithub.com/dmlc/xgboost/issues/9270), [#​9244](https://togithub.com/dmlc/xgboost/issues/9244), [#​8494](https://togithub.com/dmlc/xgboost/issues/8494), [#​8434](https://togithub.com/dmlc/xgboost/issues/8434), [#​8742](https://togithub.com/dmlc/xgboost/issues/8742), [#​8804](https://togithub.com/dmlc/xgboost/issues/8804), [#​8710](https://togithub.com/dmlc/xgboost/issues/8710), [#​8676](https://togithub.com/dmlc/xgboost/issues/8676), [#​9020](https://togithub.com/dmlc/xgboost/issues/9020), [#​9002](https://togithub.com/dmlc/xgboost/issues/9002), [#​9058](https://togithub.com/dmlc/xgboost/issues/9058), [#​9037](https://togithub.com/dmlc/xgboost/issues/9037), [#​9018](https://togithub.com/dmlc/xgboost/issues/9018), [#​9295](https://togithub.com/dmlc/xgboost/issues/9295), [#​9006](https://togithub.com/dmlc/xgboost/issues/9006), 
[#​9300](https://togithub.com/dmlc/xgboost/issues/9300), [#​8765](https://togithub.com/dmlc/xgboost/issues/8765), [#​9365](https://togithub.com/dmlc/xgboost/issues/9365), [#​9060](https://togithub.com/dmlc/xgboost/issues/9060))

##### PySpark

After the initial introduction of the PySpark interface, it has gained some new features and optimizations in 2.0.

- GPU-based prediction. ([#​9292](https://togithub.com/dmlc/xgboost/issues/9292), [#​9542](https://togithub.com/dmlc/xgboost/issues/9542))
- Optimization for data initialization by avoiding the stack operation. ([#​9088](https://togithub.com/dmlc/xgboost/issues/9088))
- Support for predicting feature contributions. ([#​8633](https://togithub.com/dmlc/xgboost/issues/8633))
- Python typing support. ([#​9156](https://togithub.com/dmlc/xgboost/issues/9156), [#​9172](https://togithub.com/dmlc/xgboost/issues/9172), [#​9079](https://togithub.com/dmlc/xgboost/issues/9079), [#​8375](https://togithub.com/dmlc/xgboost/issues/8375))
- `use_gpu` is deprecated; the `device` parameter is preferred.
- Update `eval_metric` validation to support lists of strings. ([#​8826](https://togithub.com/dmlc/xgboost/issues/8826))
- Improved logs for training. ([#​9449](https://togithub.com/dmlc/xgboost/issues/9449))
- Maintenance, including refactoring and document updates. ([#​8324](https://togithub.com/dmlc/xgboost/issues/8324), [#​8465](https://togithub.com/dmlc/xgboost/issues/8465), [#​8605](https://togithub.com/dmlc/xgboost/issues/8605), [#​9202](https://togithub.com/dmlc/xgboost/issues/9202), [#​9460](https://togithub.com/dmlc/xgboost/issues/9460), [#​9302](https://togithub.com/dmlc/xgboost/issues/9302), [#​8385](https://togithub.com/dmlc/xgboost/issues/8385), [#​8630](https://togithub.com/dmlc/xgboost/issues/8630), [#​8525](https://togithub.com/dmlc/xgboost/issues/8525), [#​8496](https://togithub.com/dmlc/xgboost/issues/8496))
- Fix for GPU setup.
([#​9495](https://togithub.com/dmlc/xgboost/issues/9495))

##### Other General New Features

Here's a list of new features that don't have their own section and yet are general to all language bindings.

- Use array interface for CSC matrix. This helps XGBoost use a consistent number of threads and aligns the interface of the CSC matrix with other interfaces. In addition, memory usage is likely to decrease with CSC input thanks to on-the-fly type conversion. ([#​8672](https://togithub.com/dmlc/xgboost/issues/8672))
- CUDA compute 90 is now part of the default build. ([#​9397](https://togithub.com/dmlc/xgboost/issues/9397))

##### Other General Optimization

These optimizations are general to all language bindings. For language-specific optimization, please visit the corresponding sections.

- Performance for input with `array_interface` on CPU (like `numpy`) is significantly improved. ([#​9090](https://togithub.com/dmlc/xgboost/issues/9090))
- Some optimization with CUDA for data initialization. ([#​9199](https://togithub.com/dmlc/xgboost/issues/9199), [#​9209](https://togithub.com/dmlc/xgboost/issues/9209), [#​9144](https://togithub.com/dmlc/xgboost/issues/9144))
- Use the latest thrust policy to prevent synchronizing GPU devices. ([#​9212](https://togithub.com/dmlc/xgboost/issues/9212))
- XGBoost now uses a per-thread CUDA stream, which prevents synchronization with other streams. ([#​9416](https://togithub.com/dmlc/xgboost/issues/9416), [#​9396](https://togithub.com/dmlc/xgboost/issues/9396), [#​9413](https://togithub.com/dmlc/xgboost/issues/9413))

##### Notable breaking change

Other than the aforementioned change with the `device` parameter, here's a list of breaking changes affecting all packages.

- Users must specify the format for text input ([#​9077](https://togithub.com/dmlc/xgboost/issues/9077)). However, we suggest using third-party data structures such as `numpy.ndarray` instead of relying on text inputs.
See [https://github.com/dmlc/xgboost/issues/9472](https://togithub.com/dmlc/xgboost/issues/9472) for more info.

##### Notable bug fixes

Some noteworthy bug fixes that are not related to specific language bindings are listed in this section.

- Some language environments use a different thread to perform garbage collection, which breaks the thread-local cache used in XGBoost. XGBoost 2.0 implements a new thread-safe cache using a lightweight lock to replace the thread-local cache. ([#​8851](https://togithub.com/dmlc/xgboost/issues/8851))
- Fix model IO by clearing the prediction cache. ([#​8904](https://togithub.com/dmlc/xgboost/issues/8904))
- `inf` is checked during data construction. ([#​8911](https://togithub.com/dmlc/xgboost/issues/8911))
- Preserve order of saved updaters configuration. Usually, this is not an issue unless the `updater` parameter is used instead of the `tree_method` parameter. ([#​9355](https://togithub.com/dmlc/xgboost/issues/9355))
- Fix GPU memory allocation issue with categorical splits. ([#​9529](https://togithub.com/dmlc/xgboost/issues/9529))
- Handle escape sequences like `\t\n` in feature names for JSON model dump. ([#​9474](https://togithub.com/dmlc/xgboost/issues/9474))
- Normalize file paths for model IO and text input. This handles short paths on Windows and paths that contain `~` on Unix ([#​9463](https://togithub.com/dmlc/xgboost/issues/9463)). In addition, all path inputs are required to be encoded in UTF-8 ([#​9448](https://togithub.com/dmlc/xgboost/issues/9448), [#​9443](https://togithub.com/dmlc/xgboost/issues/9443))
- Fix integer overflow on H100. ([#​9380](https://togithub.com/dmlc/xgboost/issues/9380))
- Fix weighted sketching on GPU with categorical features. ([#​9341](https://togithub.com/dmlc/xgboost/issues/9341))
- Fix metric serialization. The bug might cause some of the metrics to be dropped during evaluation.
([#​9405](https://togithub.com/dmlc/xgboost/issues/9405)) - Fixes compilation errors on MSVC x86 targets ([#​8823](https://togithub.com/dmlc/xgboost/issues/8823)) - Pick up the dmlc-core fix for the CSV parser. ([#​8897](https://togithub.com/dmlc/xgboost/issues/8897)) ##### Documentation Aside from documents for new features, we have many smaller updates to improve user experience, from troubleshooting guides to typo fixes. - Explain CPU/GPU interop. ([#​8450](https://togithub.com/dmlc/xgboost/issues/8450)) - Guide to troubleshoot NCCL errors. ([#​8943](https://togithub.com/dmlc/xgboost/issues/8943), [#​9206](https://togithub.com/dmlc/xgboost/issues/9206)) - Add a note for rabit port selection. ([#​8879](https://togithub.com/dmlc/xgboost/issues/8879)) - How to build the docs using conda ([#​9276](https://togithub.com/dmlc/xgboost/issues/9276)) - Explain how to obtain reproducible results on distributed systems. ([#​8903](https://togithub.com/dmlc/xgboost/issues/8903)) - Fixes and small updates to document and demonstration scripts. ([#​8626](https://togithub.com/dmlc/xgboost/issues/8626), [#​8436](https://togithub.com/dmlc/xgboost/issues/8436), [#​8995](https://togithub.com/dmlc/xgboost/issues/8995), [#​8907](https://togithub.com/dmlc/xgboost/issues/8907), [#​8923](https://togithub.com/dmlc/xgboost/issues/8923), [#​8926](https://togithub.com/dmlc/xgboost/issues/8926), [#​9358](https://togithub.com/dmlc/xgboost/issues/9358), [#​9232](https://togithub.com/dmlc/xgboost/issues/9232), [#​9201](https://togithub.com/dmlc/xgboost/issues/9201), [#​9469](https://togithub.com/dmlc/xgboost/issues/9469), [#​9462](https://togithub.com/dmlc/xgboost/issues/9462), [#​9458](https://togithub.com/dmlc/xgboost/issues/9458), [#​8543](https://togithub.com/dmlc/xgboost/issues/8543), [#​8597](https://togithub.com/dmlc/xgboost/issues

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.



This PR was generated by Mend Renovate. View the repository job log.

renovate[bot] commented 1 year ago

⚠ Artifact update problem

Renovate failed to update an artifact related to this branch. You probably do not want to merge this PR as-is.

♻ Renovate will retry this branch, including artifacts, only when one of the following happens:

The artifact failure details are included below:

File name: poetry.lock
Creating virtualenv pytorch-tabnet-8eWQWSqX-py3.12 in /home/ubuntu/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies...

The current project's Python requirement (>=3.7) is not compatible with some of the required packages Python requirement:
  - xgboost requires Python >=3.8, so it will not be satisfied for Python >=3.7,<3.8

Because pytorch-tabnet depends on xgboost (2.0.3) which requires Python >=3.8, version solving failed.

  • Check your dependencies Python requirement: The Python requirement can be specified via the `python` or `markers` properties

    For xgboost, a possible solution would be to set the `python` property to ">=3.8"

    https://python-poetry.org/docs/dependency-specification/#python-restricted-dependencies,
    https://python-poetry.org/docs/dependency-specification/#using-environment-markers
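Poetry's suggested fix above can be expressed in `pyproject.toml` with a python-restricted dependency, so xgboost 2.x is only resolved for interpreters it supports while the project keeps its wider requirement. A sketch (the version bounds are illustrative; alternatively, raising the project's own `python` requirement to `>=3.8` also resolves the conflict):

```toml
[tool.poetry.dependencies]
python = ">=3.7"
# Only resolve xgboost 2.x on Python versions it supports:
xgboost = { version = "^2.0", python = ">=3.8" }
```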
renovate[bot] commented 3 months ago

⚠️ Artifact update problem

Renovate failed to update an artifact related to this branch. You probably do not want to merge this PR as-is.

♻ Renovate will retry this branch, including artifacts, only when one of the following happens:

The artifact failure details are included below:

File name: poetry.lock
Creating virtualenv pytorch-tabnet-8eWQWSqX-py3.12 in /home/ubuntu/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies...

The current project's Python requirement (>=3.7) is not compatible with some of the required packages Python requirement:
  - xgboost requires Python >=3.8, so it will not be satisfied for Python >=3.7,<3.8

Because pytorch-tabnet depends on xgboost (2.1.1) which requires Python >=3.8, version solving failed.

  • Check your dependencies Python requirement: The Python requirement can be specified via the `python` or `markers` properties

    For xgboost, a possible solution would be to set the `python` property to ">=3.8"

    https://python-poetry.org/docs/dependency-specification/#python-restricted-dependencies,
    https://python-poetry.org/docs/dependency-specification/#using-environment-markers