ray-project / ray

Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
https://ray.io
Apache License 2.0

[train][python3.11] Ray train distributed example error #31359

Open · rickyyx opened this issue 1 year ago

rickyyx commented 1 year ago

What happened + What you expected to happen

Running the Torch training example from https://docs.ray.io/en/latest/ray-overview/index.html#ray-ai-runtime-quick-start raises the following error:

File ~/anaconda3/envs/dev-py311/lib/python3.11/dataclasses.py:816, in _get_field(cls, a_name, a_type, default_kw_only)
    812 # For real fields, disallow mutable defaults.  Use unhashable as a proxy
    813 # indicator for mutability.  Read the __hash__ attribute from the class,
    814 # not the instance.
    815 if f._field_type is _FIELD and f.default.__class__.__hash__ is None:
--> 816     raise ValueError(f'mutable default {type(f.default)} for field '
    817                      f'{f.name} is not allowed: use default_factory')
    819 return f

ValueError: mutable default <class 'torch.distributed._shard.sharded_tensor.metadata.TensorProperties'> for field tensor_properties is not allowed: use default_factory
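
For context, Python 3.11 tightened this check: a dataclass field default is now rejected whenever its class has __hash__ set to None, and @dataclass itself sets __hash__ = None whenever it generates __eq__ (which it does by default). So any dataclass instance used directly as a field default fails under 3.11 even though it passed under 3.10. A minimal sketch of the failing pattern and the default_factory fix, with hypothetical classes (Inner/Outer are illustrative only, not from torch or Ray):

from dataclasses import dataclass, field

@dataclass
class Inner:
    # @dataclass generates __eq__ and sets __hash__ = None, so Inner
    # instances are unhashable; Python 3.11 treats that as "mutable".
    value: int = 0

# On Python 3.11 this definition raises the same ValueError as above:
#   @dataclass
#   class Broken:
#       inner: Inner = Inner()

@dataclass
class Outer:
    # default_factory constructs a fresh Inner per Outer instance
    # instead of sharing one default object across all instances.
    inner: Inner = field(default_factory=Inner)

print(Outer())  # Outer(inner=Inner(value=0))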

Versions / Dependencies

wget <ray py3.11 wheel url>   # e.g. from the CI build at https://buildkite.com/ray-project/oss-ci-build-pr/builds/8313#01855a61-6528-42d1-b55f-ae0e48ce236a
pip install -U ray-3.0.0.dev0-cp311-cp311-manylinux2014_x86_64.whl

Reproduction script

From the "Train: Distributed Model Training" example at https://docs.ray.io/en/latest/ray-overview/index.html#ray-ai-runtime-quick-start:

import torch
import torch.nn as nn

num_samples = 20
input_size = 10
layer_size = 15
output_size = 5

class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.layer1 = nn.Linear(input_size, layer_size)
        self.relu = nn.ReLU()
        self.layer2 = nn.Linear(layer_size, output_size)

    def forward(self, input):
        return self.layer2(self.relu(self.layer1(input)))

# In this example we use a randomly generated dataset.
input = torch.randn(num_samples, input_size)
labels = torch.randn(num_samples, output_size)

import torch.optim as optim

def train_func():
    num_epochs = 3
    model = NeuralNetwork()
    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(model.parameters(), lr=0.1)

    for epoch in range(num_epochs):
        output = model(input)
        loss = loss_fn(output, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(f"epoch: {epoch}, loss: {loss.item()}")

train_func()

from ray import train

def train_func_distributed():
    num_epochs = 3
    model = NeuralNetwork()
    model = train.torch.prepare_model(model)
    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(model.parameters(), lr=0.1)

    for epoch in range(num_epochs):
        output = model(input)
        loss = loss_fn(output, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(f"epoch: {epoch}, loss: {loss.item()}")

from ray.train.torch import TorchTrainer
from ray.air.config import ScalingConfig

# For GPU Training, set `use_gpu` to True.
use_gpu = False

trainer = TorchTrainer(
    train_func_distributed,
    scaling_config=ScalingConfig(
        num_workers=4, use_gpu=use_gpu)
)

results = trainer.fit()

Issue Severity

None

rickyyx commented 1 year ago

Seems to be a torch library issue to me, though:

https://github.com/pytorch/pytorch/blob/master/torch/distributed/_shard/sharded_tensor/metadata.py#L70-L82
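
The ShardedTensorMetadata dataclass in that file uses a TensorProperties() instance directly as the default for its tensor_properties field, which is exactly the pattern Python 3.11 rejects; it gets imported (presumably via the ray.train torch integration) before any training runs, which is why the quick start fails immediately. A sketch of the kind of upstream fix needed, using simplified stand-in types rather than the real torch classes:

from dataclasses import dataclass, field

@dataclass
class TensorProperties:
    # Simplified stand-in; the real torch class also carries dtype,
    # layout, and memory_format.
    requires_grad: bool = False
    pin_memory: bool = False

@dataclass
class ShardedTensorMetadata:
    # Broken on Python 3.11:
    #   tensor_properties: TensorProperties = TensorProperties()
    # Fixed by constructing the default lazily, once per instance:
    tensor_properties: TensorProperties = field(default_factory=TensorProperties)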

feedanal commented 1 year ago

One more data point: ray version 3.0.0.dev0, installed from https://s3-us-west-2.amazonaws.com/ray-wheels/latest/ray-3.0.0.dev0-cp311-cp311-manylinux2014_x86_64.whl, fails with:

Traceback (most recent call last):
  File "/home/ubuntu/lib/python3.11/site-packages/ray/data/_internal/execution/interfaces.py", line 158, in <module>
    @dataclass
     ^^^^^^^^^
  File "/usr/lib/python3.11/dataclasses.py", line 1220, in dataclass
    return wrap(cls)
           ^^^^^^^^^
  File "/usr/lib/python3.11/dataclasses.py", line 1210, in wrap
    return _process_class(cls, init, repr, eq, order, unsafe_hash,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
    cls_fields.append(_get_field(cls, name, type, kw_only))
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
    raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'ray.data._internal.execution.interfaces.ExecutionResources'> for field resource_limits is not allowed: use default_factory
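
Note that this one fires at module import time (interfaces.py, line 158, in <module>), so with that wheel on Python 3.11 the import alone should reproduce it, no workload needed; a sketch, assuming the cp311 wheel above is installed:

# Assumes Python 3.11 and the ray-3.0.0.dev0 cp311 wheel linked above.
import ray.data  # pulls in ray/data/_internal/execution/interfaces.py and raises:
# ValueError: mutable default <class 'ray.data._internal.execution.interfaces
# .ExecutionResources'> for field resource_limits is not allowed: use default_factory

The fix on the Ray side would be the same default_factory change as sketched for torch above.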