ray-project / ray

Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
https://ray.io
Apache License 2.0

[Bug] Raylet keeps crashing with segmentation fault #23410

Closed mlguruz closed 2 years ago

mlguruz commented 2 years ago

Search before asking

Ray Component

Ray Core

Issue Severity

High: It blocks me to complete my task.

What happened + What you expected to happen

I'm trying to run Ray on a local on-prem cluster with Docker. After fighting through lots of other issues, I'm now at a stage where I can see my head and worker nodes. However, I cannot run jobs on the worker node, and one issue I keep hitting is constant raylet failures on the worker node because of segmentation faults. I put the step-by-step details below. I tried with and without Docker, and this doesn't seem to be caused by Docker.
Because the logs from the C++ side aren't very helpful, it's very hard for me to figure out where these segfaults are coming from... Really appreciate your help!

Versions / Dependencies

Ray == 1.11.0, Python == 3.7.7, OS == Ubuntu 20.04

Reproduction script

Run ray up cluster.yaml, and then initialize Ray with:

import ray
ray.init(address='ray://192.168.1.155:10001')
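
Port 10001 here is the Ray Client server running on the head node. As a quick sanity check that the port is reachable from the machine running the script (assuming nc is installed):

nc -zv 192.168.1.155 10001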

The cluster.yaml is the following:

# A unique identifier for the head node and workers of this cluster.
cluster_name: 'default'

## Running Ray in Docker images is optional (this docker section can be commented out).
## This executes all commands on all nodes in the docker container,
## and opens all the necessary ports to support the Ray cluster.
## Empty string means disabled. Assumes Docker is installed.
#docker:
##    image: "rayproject/ray-ml:latest-gpu" # You can change this to latest-cpu if you don't need GPU support and want a faster startup
#    image: rayproject/ray:latest-gpu   # use this one if you don't need ML dependencies, it's faster to pull
##    image: rayproject/ray:latest-cpu   # use this one if you don't need ML dependencies, it's faster to pull
#    container_name: "ray_container"
#    # If true, pulls latest version of image. Otherwise, `docker run` will only pull the image
#    # if no cached version is present.
#    pull_before_run: False
#    run_options:   # Extra options to pass into "docker run"
#        - --ulimit nofile=65536:65536
#        - --shm-size=30GB
##        - --network host

provider:
    type: local
    head_ip: '192.168.1.155'
#    head_ip: localhost
###    head_ip: 'localhost'
####    # You may need to supply a public ip for the head node if you need
####    # to run `ray up` from outside of the Ray cluster's network
####    # (e.g. the cluster is in an AWS VPC and you're starting ray from your laptop)
####    # This is useful when debugging the local node provider with cloud VMs.
####    # external_head_ip: YOUR_HEAD_PUBLIC_IP
    worker_ips: ['192.168.1.153',]
##     Optional when running automatic cluster management on prem. If you use a coordinator server,
##     then you can launch multiple autoscaling clusters on the same set of machines, and the coordinator
##     will assign individual nodes to clusters as needed.
##        coordinator_address: "<host>:<port>"
#    coordinator_address: "192.168.1.155:8888"

# How Ray will authenticate with newly launched nodes.
auth:
    ssh_user: 'user'
    # You can comment out `ssh_private_key` if the following machines don't need a private key for SSH access to the Ray
    # cluster:
    #   (1) The machine on which `ray up` is executed.
    #   (2) The head node of the Ray cluster.
    #
    # The machine that runs ray up executes SSH commands to set up the Ray head node. The Ray head node subsequently
    # executes SSH commands to set up the Ray worker nodes. When you run ray up, ssh credentials sitting on the ray up
    # machine are copied to the head node -- internally, the ssh key is added to the list of file mounts to rsync to head node.
    ssh_private_key: ~/.ssh/id_rsa

# The minimum number of worker nodes to launch in addition to the head
# node. This number should be >= 0.
# Typically, min_workers == max_workers == len(worker_ips).
min_workers: 1

#initial_workers: 1

# The maximum number of worker nodes to launch in addition to the head node.
# This takes precedence over min_workers.
# Typically, min_workers == max_workers == len(worker_ips).
max_workers: 1
# The default behavior for manually managed clusters is
# min_workers == max_workers == len(worker_ips),
# meaning that Ray is started on all available nodes of the cluster.
# For automatically managed clusters, max_workers is required and min_workers defaults to 0.

# The autoscaler will scale up the cluster faster with higher upscaling speed.
# E.g., if the task requires adding more nodes then autoscaler will gradually
# scale up the cluster in chunks of upscaling_speed*currently_running_nodes.
# This number should be > 0.
upscaling_speed: 1.0

idle_timeout_minutes: 5

# Files or directories to copy to the head and worker nodes. The format is a
# dictionary from REMOTE_PATH: LOCAL_PATH. E.g. you could save your conda env to an environment.yaml file, mount
# that directory to all nodes and call `conda -n my_env -f /path1/on/remote/machine/environment.yaml`. In this
# example paths on all nodes must be the same (so that conda can be called always with the same argument)
file_mounts: {
#    "~/.ssh": "~/.ssh/backup_202203",
#    "/path2/on/remote/machine": "/path2/on/local/machine",
}

# Files or directories to copy from the head node to the worker nodes. The format is a
# list of paths. The same path on the head node will be copied to the worker node.
# This behavior is a subset of the file_mounts behavior. In the vast majority of cases
# you should just use file_mounts. Only use this if you know what you're doing!
cluster_synced_files: [
#    "~/.ssh": "~/.ssh",
]

# Whether changes to directories in file_mounts or cluster_synced_files in the head node
# should sync to the worker node continuously
file_mounts_sync_continuously: False

# Patterns for files to exclude when running rsync up or rsync down
rsync_exclude:
    - "**/.git"
    - "**/.git/**"

# Pattern files to use for filtering out files when running rsync up or rsync down. The file is searched for
# in the source directory and recursively through all subdirectories. For example, if .gitignore is provided
# as a value, the behavior will match git's behavior for finding and using .gitignore files.
rsync_filter:
    - ".gitignore"

# List of commands that will be run before `setup_commands`. If docker is
# enabled, these commands will run outside the container and before docker
# is setup.
initialization_commands: []

# List of shell commands to run to set up each node.
setup_commands: [
#    'ssh-keygen && cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys'
#    'sudo chown ray:users /tmp/cluster-default.state'
]
    # If we have e.g. conda dependencies stored in "/path1/on/local/machine/environment.yaml", we can prepare the
    # work environment on each worker by:
    #   1. making sure each worker has access to this file i.e. see the `file_mounts` section
    #   2. adding a command here that creates a new conda environment on each node or if the environment already exists,
    #     it updates it:
    #      conda env create -q -n my_venv -f /path1/on/local/machine/environment.yaml || conda env update -q -n my_venv -f /path1/on/local/machine/environment.yaml
    #
    # Ray developers:
    # you probably want to create a Docker image that
    # has your Ray repo pre-cloned. Then, you can replace the pip installs
    # below with a git checkout <your_sha> (and possibly a recompile).
    # To run the nightly version of ray (as opposed to the latest), either use a rayproject docker image
    # that has the "nightly" (e.g. "rayproject/ray-ml:nightly-gpu") or uncomment the following line:
    # - pip install -U "ray[default] @ https://s3-us-west-2.amazonaws.com/ray-wheels/latest/ray-2.0.0.dev0-cp37-cp37m-manylinux2014_x86_64.whl"

# Custom commands that will be run on the head node after common setup.
head_setup_commands: [
#    'echo "export AUTOSCALER_HEARTBEAT_TIMEOUT_S=120" >> ~/.bashrc'
#     'ssh-keygen && cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys'
#    'sudo chown ray:users /tmp/cluster-default.state'

]

# Custom commands that will be run on worker nodes after common setup.
worker_setup_commands: [
#    'echo "export AUTOSCALER_HEARTBEAT_TIMEOUT_S=120" >> ~/.bashrc'
]

# Command to start ray on the head node. You don't need to change this.
head_start_ray_commands:
  # If we have e.g. conda dependencies, we could create on each node a conda environment (see `setup_commands` section).
  # In that case we'd have to activate that env on each node before running `ray`:
  # - conda activate my_venv && ray stop
  # - conda activate my_venv && ulimit -c unlimited && ray start --head --port=6379 --autoscaling-config=~/ray_bootstrap_config.yaml
    - ray stop
    - ulimit -c unlimited && ray start --head --port=6379 --autoscaling-config=~/ray_bootstrap_config.yaml

# Command to start ray on worker nodes. You don't need to change this.
worker_start_ray_commands:
  # If we have e.g. conda dependencies, we could create on each node a conda environment (see `setup_commands` section).
  # In that case we'd have to activate that env on each node before running `ray`:
#    - ray stop
  # - ray start --address=$RAY_HEAD_IP:6379
    - ray stop
    - ray start --address=$RAY_HEAD_IP:6379 #--metrics-export-port=6666 --dashboard-agent-grpc-port=7777
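
After ray up completes, both nodes come up and I can see the head and the worker node. For example, attaching to the head node with ray attach cluster.yaml and running:

ray status    # autoscaler summary of active nodes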

And then I start to see segfault logs coming from my worker node at 192.168.1.153.

Examples look like this:

(raylet, ip=192.168.1.153) [2022-03-22 15:37:44,626 E 3819499 3819539] (raylet) logging.cc:321: *** SIGSEGV received at time=1647977864 on cpu 0 ***                                                             
(raylet, ip=192.168.1.153) [2022-03-22 15:37:44,626 E 3819499 3819539] (raylet) logging.cc:321: PC: @     0x7faf40ff1643  (unknown)  (unknown)                                                                   
(raylet, ip=192.168.1.153) [2022-03-22 15:37:44,626 E 3819499 3819539] (raylet) logging.cc:321:     @     0x7faf413c23c0  (unknown)  (unknown)                                                                   
(raylet, ip=192.168.1.153) *** SIGSEGV received at time=1647978005 on cpu 0 ***                                                                                                                                  
(raylet, ip=192.168.1.153) PC: @     0x7fac18159643  (unknown)  (unknown)                                                                                                                                        
(raylet, ip=192.168.1.153)     @     0x7fac1852a3c0  (unknown)  (unknown)                                                                                                                                        
(raylet, ip=192.168.1.153) [2022-03-22 15:40:05,158 E 3823143 3823183] (raylet) logging.cc:321: *** SIGSEGV received at time=1647978005 on cpu 0 ***                                                             
(raylet, ip=192.168.1.153) [2022-03-22 15:40:05,158 E 3823143 3823183] (raylet) logging.cc:321: PC: @     0x7fac18159643  (unknown)  (unknown)                                                                   
(raylet, ip=192.168.1.153) [2022-03-22 15:40:05,158 E 3823143 3823183] (raylet) logging.cc:321:     @     0x7fac1852a3c0  (unknown)  (unknown)                                                                   
(raylet, ip=192.168.1.153) *** SIGSEGV received at time=1647977719 on cpu 1 ***                                                                                                                                  
(raylet, ip=192.168.1.153) PC: @     0x7f9a75823643  (unknown)  (unknown)                                                                                                                                        
(raylet, ip=192.168.1.153)     @     0x7f9a75bf43c0  (unknown)  (unknown)                                                                                                                                        
(raylet, ip=192.168.1.153) [2022-03-22 15:35:19,100 E 3815636 3815676] (raylet) logging.cc:321: *** SIGSEGV received at time=1647977719 on cpu 1 ***                                                             
(raylet, ip=192.168.1.153) [2022-03-22 15:35:19,100 E 3815636 3815676] (raylet) logging.cc:321: PC: @     0x7f9a75823643  (unknown)  (unknown)                                                                   
(raylet, ip=192.168.1.153) [2022-03-22 15:35:19,100 E 3815636 3815676] (raylet) logging.cc:321:     @     0x7f9a75bf43c0  (unknown)  (unknown)                                                                   
(raylet, ip=192.168.1.153) *** SIGSEGV received at time=1647977759 on cpu 5 ***                                                                                                                                  
(raylet, ip=192.168.1.153) PC: @     0x7f209134d643  (unknown)  (unknown)                                                                                                                                        
(raylet, ip=192.168.1.153)     @     0x7f209171e3c0  (unknown)  (unknown)                                                                                                                                        
(raylet, ip=192.168.1.153) [2022-03-22 15:35:59,207 E 3816605 3816669] (raylet) logging.cc:321: *** SIGSEGV received at time=1647977759 on cpu 5 ***                                                             
(raylet, ip=192.168.1.153) [2022-03-22 15:35:59,208 E 3816605 3816669] (raylet) logging.cc:321: PC: @     0x7f209134d643  (unknown)  (unknown)                                                                   
(raylet, ip=192.168.1.153) [2022-03-22 15:35:59,208 E 3816605 3816669] (raylet) logging.cc:321:     @     0x7f209171e3c0  (unknown)  (unknown)                                                                   
(raylet, ip=192.168.1.153) *** SIGSEGV received at time=1647977899 on cpu 4 ***                                                                                                                                  
(raylet, ip=192.168.1.153) PC: @     0x7f1605c60643  (unknown)  (unknown)                                                                                                                                        
(raylet, ip=192.168.1.153)     @     0x7f16060313c0  (unknown)  (unknown)                                                                                                                                        
(raylet, ip=192.168.1.153) [2022-03-22 15:38:19,729 E 3820409 3820449] (raylet) logging.cc:321: *** SIGSEGV received at time=1647977899 on cpu 4 ***                                                             
(raylet, ip=192.168.1.153) [2022-03-22 15:38:19,729 E 3820409 3820449] (raylet) logging.cc:321: PC: @     0x7f1605c60643  (unknown)  (unknown)                                                                   
(raylet, ip=192.168.1.153) [2022-03-22 15:38:19,729 E 3820409 3820449] (raylet) logging.cc:321:     @     0x7f16060313c0  (unknown)  (unknown)                                                                   
(raylet, ip=192.168.1.153) *** SIGSEGV received at time=1647977829 on cpu 6 ***                                                                                                                                  
(raylet, ip=192.168.1.153) PC: @     0x7fb59dc9a643  (unknown)  (unknown)                               
(raylet, ip=192.168.1.153)     @     0x7fb59e06b3c0  (unknown)  (unknown)                               
(raylet, ip=192.168.1.153) [2022-03-22 15:37:09,648 E 3818581 3818635] (raylet) logging.cc:321: *** SIGSEGV received at time=1647977829 on cpu 6 ***                                                             
(raylet, ip=192.168.1.153) [2022-03-22 15:37:09,648 E 3818581 3818635] (raylet) logging.cc:321: PC: @     0x7fb59dc9a643  (unknown)  (unknown)                                                                   
(raylet, ip=192.168.1.153) [2022-03-22 15:37:09,649 E 3818581 3818635] (raylet) logging.cc:321:     @     0x7fb59e06b3c0  (unknown)  (unknown)                                                                   
(scheduler +6m29s) Restarting 1 nodes of type local.cluster.node (lost contact with raylet).

This is what I see in the log dir on the worker node (i.e. IP = 192.168.1.153):

-rw-rw-r-- 1 <user> <user>   501 Mar 22 15:40 raylet.10.err
-rw-rw-r-- 1 <user> <user> 15725 Mar 22 15:40 raylet.10.out
-rw-rw-r-- 1 <user> <user>   501 Mar 22 15:40 raylet.11.err
-rw-rw-r-- 1 <user> <user> 15902 Mar 22 15:40 raylet.11.out
-rw-rw-r-- 1 <user> <user>     0 Mar 22 15:34 raylet.1.err
-rw-rw-r-- 1 <user> <user> 13790 Mar 22 15:34 raylet.1.out
-rw-rw-r-- 1 <user> <user>   501 Mar 22 15:35 raylet.2.err
-rw-rw-r-- 1 <user> <user> 14316 Mar 22 15:35 raylet.2.out
-rw-rw-r-- 1 <user> <user>   501 Mar 22 15:35 raylet.3.err
-rw-rw-r-- 1 <user> <user> 14476 Mar 22 15:35 raylet.3.out
-rw-rw-r-- 1 <user> <user>     0 Mar 22 15:36 raylet.4.err
-rw-rw-r-- 1 <user> <user> 14316 Mar 22 15:36 raylet.4.out
-rw-rw-r-- 1 <user> <user>   501 Mar 22 15:37 raylet.5.err
-rw-rw-r-- 1 <user> <user> 14844 Mar 22 15:37 raylet.5.out
-rw-rw-r-- 1 <user> <user>   501 Mar 22 15:37 raylet.6.err
-rw-rw-r-- 1 <user> <user> 15018 Mar 22 15:37 raylet.6.out
-rw-rw-r-- 1 <user> <user>   501 Mar 22 15:38 raylet.7.err
-rw-rw-r-- 1 <user> <user> 15199 Mar 22 15:38 raylet.7.out
-rw-rw-r-- 1 <user> <user>     0 Mar 22 15:38 raylet.8.err
-rw-rw-r-- 1 <user> <user> 15003 Mar 22 15:38 raylet.8.out
-rw-rw-r-- 1 <user> <user>     0 Mar 22 15:38 raylet.9.err
-rw-rw-r-- 1 <user> <user> 15195 Mar 22 15:39 raylet.9.out
-rw-rw-r-- 1 <user> <user>   501 Mar 22 15:34 raylet.err
-rw-rw-r-- 1 <user> <user> 13622 Mar 22 15:34 raylet.out

The .err logs are not very helpful. For example:

*** SIGSEGV received at time=1647978005 on cpu 0 ***
PC: @     0x7fac18159643  (unknown)  (unknown)
    @     0x7fac1852a3c0  (unknown)  (unknown)
[2022-03-22 15:40:05,158 E 3823143 3823183] (raylet) logging.cc:321: *** SIGSEGV received at time=1647978005 on cpu 0 ***
[2022-03-22 15:40:05,158 E 3823143 3823183] (raylet) logging.cc:321: PC: @     0x7fac18159643  (unknown)  (unknown)
[2022-03-22 15:40:05,158 E 3823143 3823183] (raylet) logging.cc:321:     @     0x7fac1852a3c0  (unknown)  (unknown)

I tried this with and without Docker, and I got the same errors.

Can someone please help? Thank you so much!

Anything else

No response

Are you willing to submit a PR?

scv119 commented 2 years ago

That's a bit odd. It looks like the raylet is crash looping. Can you share the contents of raylet.*.out?
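
For example, from the default session directory on the worker node (assuming you haven't overridden the temp dir):

tail -n +1 /tmp/ray/session_latest/logs/raylet*.out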

scv119 commented 2 years ago

You can also add RAY_BACKEND_LOG_LEVEL=debug before ray start to show debug logs:

  - RAY_BACKEND_LOG_LEVEL=debug ray start --address=$RAY_HEAD_IP:6379 #--metrics-export-port=6666 --dashboard-agent-grpc-port=7777
mlguruz commented 2 years ago

Thanks! Let me incorporate all of this and come back with what I see on my end.

mlguruz commented 2 years ago

You can also add RAY_BACKEND_LOG_LEVEL=debug before ray start to show debug logs:

  - RAY_BACKEND_LOG_LEVEL=debug ray start --address=$RAY_HEAD_IP:6379 #--metrics-export-port=6666 --dashboard-agent-grpc-port=7777

Hi Chen, sorry for the late reply. After quite a bit of local debugging, I found out it was related to firewall port issues. I was seeing those endless raylet-restart logs because the firewall on the head node was blocking some of the worker ports. Once I fixed that, I no longer saw the endless failure logs. I still occasionally see a raylet failure, but it's flaky and I can't reproduce it at the moment. The raylet logs, whether *.out or *.err, were not helpful for debugging this. Maybe you could improve that a little bit? I'm happy to close this, btw.
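
For anyone who hits the same thing: the gist was making sure the ports Ray uses are open between the head and worker machines. Here's a rough sketch of the kind of thing that fixes it, assuming ufw as the firewall and the default ports from my config above (adjust the port numbers and tooling to your own setup):

# On the head node: allow the GCS port, the Ray Client server, and the dashboard
sudo ufw allow 6379/tcp
sudo ufw allow 10001/tcp
sudo ufw allow 8265/tcp

# Pin the raylet and worker ports when starting Ray so they can be opened
# explicitly instead of being picked at random:
ray start --address=$RAY_HEAD_IP:6379 \
    --node-manager-port=6380 --object-manager-port=6381 \
    --min-worker-port=10002 --max-worker-port=10101

# Then allow those ports on both machines' firewalls as well
sudo ufw allow 6380:6381/tcp
sudo ufw allow 10002:10101/tcp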