dmlc / xgboost

Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow
https://xgboost.readthedocs.io/en/stable/
Apache License 2.0

Federated Learning not running with 2.1.0 and 2.1.1 #10716

Closed: gubertoli closed this issue 2 months ago

gubertoli commented 2 months ago

This issue is a follow-up of #10500 and PR #10503.

Output with XGBoost 2.1.1:

$ ./runtests-federated.sh 5
[14:30:19] Insecure federated server listening on 0.0.0.0:5, world size 9091

E0818 14:30:19.858728712   38570 chttp2_server.cc:1053]      UNKNOWN: No address added out of total 1 resolved for '0.0.0.0:5'
{ 
    created_time: "2024-08-18T14:30:19.858078294+02:00",
    file_line: 963, 
    file: "/grpc/src/core/ext/transport/chttp2/server/chttp2_server.cc",
    children: [
        UNKNOWN: Failed to add any wildcard listeners 
        {
            file: "/grpc/src/core/lib/iomgr/tcp_server_posix.cc", 
            file_line: 363, 
            created_time: "2024-08-18T14:30:19.858044481+02:00", 
            children: [
                UNKNOWN: Unable to configure socket 
                {
                    file: "/grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc", 
                    file_line: 220, 
                    created_time: "2024-08-18T14:30:19.85799285+02:00", 
                    fd: 8, 
                    children: [
                        UNKNOWN: Permission denied 
                        {
                            file: "/grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc", 
                            file_line: 194, 
                            created_time: "2024-08-18T14:30:19.857961088+02:00", 
                            errno: 13, 
                            os_error: "Permission denied", 
                            syscall: "bind"
                        }
                    ]
                }, 
                UNKNOWN: Unable to configure socket 
                {
                    fd: 8, 
                    created_time: "2024-08-18T14:30:19.858033529+02:00", 
                    file_line: 220, 
                    file: "/grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc", 
                    children: [
                        UNKNOWN: Permission denied 
                        {
                            file: "/grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc", 
                            file_line: 194, 
                            created_time: "2024-08-18T14:30:19.858027382+02:00", 
                            errno: 13, 
                            os_error: "Permission denied", 
                            syscall: "bind"
                        }
                    ]
                }
            ]
        }
    ]
}

[14:30:20] Rank 0
Process Process-2:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "~/refs/test_federated.py", line 36, in run_worker
    with xgb.collective.CommunicatorContext(**communicator_env):
  File "~/.venv/lib/python3.10/site-packages/xgboost/collective.py", line 280, in __enter__
    assert is_distributed()
AssertionError
[14:30:20] Rank 0
Process Process-3:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "~/refs/test_federated.py", line 36, in run_worker
    with xgb.collective.CommunicatorContext(**communicator_env):
  File "~/.venv/lib/python3.10/site-packages/xgboost/collective.py", line 280, in __enter__
    assert is_distributed()
AssertionError
[14:30:20] Rank 0
Process Process-4:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "~/refs/test_federated.py", line 36, in run_worker
    with xgb.collective.CommunicatorContext(**communicator_env):
  File "~/.venv/lib/python3.10/site-packages/xgboost/collective.py", line 280, in __enter__
    assert is_distributed()
AssertionError
[14:30:20] Rank 0
Process Process-5:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "~/refs/test_federated.py", line 36, in run_worker
    with xgb.collective.CommunicatorContext(**communicator_env):
  File "~/.venv/lib/python3.10/site-packages/xgboost/collective.py", line 280, in __enter__
    assert is_distributed()
AssertionError
[14:30:20] Rank 0
Process Process-6:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "~/refs/test_federated.py", line 36, in run_worker
    with xgb.collective.CommunicatorContext(**communicator_env):
  File "~/.venv/lib/python3.10/site-packages/xgboost/collective.py", line 280, in __enter__
    assert is_distributed()
AssertionError

For reference, the adapted test_federated.py is:

import multiprocessing
import sys
import time

import xgboost as xgb
import xgboost.federated

SERVER_KEY = 'server-key.pem'
SERVER_CERT = 'server-cert.pem'
CLIENT_KEY = 'client-key.pem'
CLIENT_CERT = 'client-cert.pem'

def run_server(port: int, world_size: int, with_ssl: bool) -> None:
    if with_ssl:
        xgboost.federated.run_federated_server(port, world_size, SERVER_KEY, SERVER_CERT,
                                               CLIENT_CERT)
    else:
        xgboost.federated.run_federated_server(port, world_size)

def run_worker(port: int, world_size: int, rank: int, with_ssl: bool, with_gpu: bool) -> None:
    communicator_env = {
        'xgboost_communicator': 'federated',
        'federated_server_address': f'localhost:{port}',
        'federated_world_size': world_size,
        'federated_rank': rank
    }
    if with_ssl:
        communicator_env['federated_server_cert'] = SERVER_CERT
        communicator_env['federated_client_key'] = CLIENT_KEY
        communicator_env['federated_client_cert'] = CLIENT_CERT

    # Always call this before using the distributed module
    with xgb.collective.CommunicatorContext(**communicator_env):
        # Load the data files; files will not be sharded in federated mode.
        dtrain = xgb.DMatrix('agaricus.txt.train-%02d?format=libsvm' % rank)
        dtest = xgb.DMatrix('agaricus.txt.test-%02d?format=libsvm' % rank)

        # Specify parameters via dict; definitions are the same as in the C++ version
        param = {'max_depth': 2, 'eta': 1, 'objective': 'binary:logistic'}
        if with_gpu:
            param['tree_method'] = 'hist'
            param['device'] = f"cuda:{rank}"

        # Specify validation sets to watch performance
        watchlist = [(dtest, 'eval'), (dtrain, 'train')]
        num_round = 20

        # Run training; all the features of the training API are available.
        bst = xgb.train(param, dtrain, num_round, evals=watchlist,
                        early_stopping_rounds=2)

        # Save the model, only ask process 0 to save the model.
        if xgb.collective.get_rank() == 0:
            bst.save_model("test.model.json")
            xgb.collective.communicator_print("Finished training\n")

def run_federated(with_ssl: bool = True, with_gpu: bool = False) -> None:
    port = 9091
    world_size = int(sys.argv[1])

    server = multiprocessing.Process(target=run_server, args=(port, world_size, with_ssl))
    server.start()
    time.sleep(1)
    if not server.is_alive():
        raise Exception("Error starting Federated Learning server")

    workers = []
    for rank in range(world_size):
        worker = multiprocessing.Process(target=run_worker,
                                         args=(port, world_size, rank, with_ssl, with_gpu))
        workers.append(worker)
        worker.start()
    for worker in workers:
        worker.join()
    server.terminate()

if __name__ == '__main__':
    run_federated(with_ssl=False, with_gpu=False)

And the adapted shell script runtests-federated.sh:

#!/bin/bash

# world_size=$(nvidia-smi -L | wc -l)
world_size=$1

# Split train and test files manually to simulate a federated environment.
split -n l/"${world_size}" -d agaricus.txt.train agaricus.txt.train-
split -n l/"${world_size}" -d agaricus.txt.test agaricus.txt.test-

python test_federated.py "${world_size}"

🚧 ⌛ Interim solution: Downgrade to XGBoost 2.0.0
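For anyone applying the interim workaround, pinning the package in the active environment might look like this (package name as on PyPI; version from the note above):

```shell
# Interim workaround: pin XGBoost to 2.0.0, where the original script's
# argument order and environment keys still work.
pip install "xgboost==2.0.0"
```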

trivialfis commented 2 months ago

The arguments for run_federated_server are n_workers, port instead of port, n_workers. This is for consistency with the new FederatedTracker and RabitTracker.
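This also explains the earlier log line showing port 5 and world size 9091: the script still passed (port, world_size) positionally, so under the new signature each value lands in the wrong parameter. A sketch with a stand-in function (not xgboost's actual code, just the signature shape):

```python
# Stand-in with the 2.1 parameter order, to illustrate the positional swap;
# this is not xgboost's implementation.
def run_federated_server(n_workers, port):
    return {"n_workers": n_workers, "port": port}

# The original script effectively called it as (port, world_size) = (9091, 5):
got = run_federated_server(9091, 5)
# n_workers becomes 9091 and port becomes 5, matching the log line
# "Insecure federated server listening on 0.0.0.0:5, world size 9091".
# Binding port 5 (< 1024) without root then fails with errno 13 (EACCES),
# which is the "Permission denied" / "syscall: bind" error in the gRPC log.
```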

gubertoli commented 2 months ago

> The arguments for run_federated_server are n_workers, port instead of port, n_workers. This is for consistency with the new FederatedTracker and RabitTracker.

I changed the call to run_federated_server, passing n_workers first and then port:

    if with_ssl:
        xgboost.federated.run_federated_server(world_size, port, SERVER_KEY, SERVER_CERT,
                                               CLIENT_CERT)
    else:
        xgboost.federated.run_federated_server(world_size, port)

This is the current output error (tested on 2.1.0 and 2.1.1):

./runtests-federated.sh 5
[19:26:33] Insecure federated server listening on 0.0.0.0:9091, world size 5
[19:26:34] Rank 0
Process Process-2:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File ".../fl-xgb-nids/refs/test_federated.py", line 36, in run_worker
    with xgb.collective.CommunicatorContext(**communicator_env):
  File ".../.venv/lib/python3.10/site-packages/xgboost/collective.py", line 280, in __enter__
    assert is_distributed()
AssertionError
[19:26:34] Rank 0
Process Process-3:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File ".../refs/test_federated.py", line 36, in run_worker
    with xgb.collective.CommunicatorContext(**communicator_env):
  File ".../.venv/lib/python3.10/site-packages/xgboost/collective.py", line 280, in __enter__
    assert is_distributed()
AssertionError
[19:26:34] Rank 0
Process Process-4:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File ".../refs/test_federated.py", line 36, in run_worker
    with xgb.collective.CommunicatorContext(**communicator_env):
  File ".../.venv/lib/python3.10/site-packages/xgboost/collective.py", line 280, in __enter__
    assert is_distributed()
AssertionError
[19:26:34] Rank 0
Process Process-5:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File ".../refs/test_federated.py", line 36, in run_worker
    with xgb.collective.CommunicatorContext(**communicator_env):
  File ".../.venv/lib/python3.10/site-packages/xgboost/collective.py", line 280, in __enter__
    assert is_distributed()
AssertionError
[19:26:34] Rank 0
Process Process-6:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File ".../refs/test_federated.py", line 36, in run_worker
    with xgb.collective.CommunicatorContext(**communicator_env):
  File ".../.venv/lib/python3.10/site-packages/xgboost/collective.py", line 280, in __enter__
    assert is_distributed()
AssertionError
trivialfis commented 2 months ago

Hi, it's dmlc_communicator instead of xgboost_communicator. Apologies for the confusion; the documentation is still sparse at the moment, as we are still working on the feature. https://github.com/dmlc/xgboost/blob/master/python-package/xgboost/testing/federated.py can be a starting point.
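Applied to the script above, only the communicator key of the environment dict changes; a sketch with sample values (the port, world size, and rank here are illustrative):

```python
# Worker environment from the script above, with the communicator key renamed
# from 'xgboost_communicator' to 'dmlc_communicator'; other keys unchanged.
port, world_size, rank = 9091, 5, 0  # sample values

communicator_env = {
    'dmlc_communicator': 'federated',  # was 'xgboost_communicator'
    'federated_server_address': f'localhost:{port}',
    'federated_world_size': world_size,
    'federated_rank': rank,
}
```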

gubertoli commented 2 months ago

> Hi, it's dmlc_communicator instead of xgboost_communicator. Apologies for the confusion; the documentation is still sparse at the moment, as we are still working on the feature. https://github.com/dmlc/xgboost/blob/master/python-package/xgboost/testing/federated.py can be a starting point.

Using dmlc_communicator instead of xgboost_communicator solved the issue. Thank you for the help! I will work from that starting point from now on.