hashicorp / nomad

Nomad is an easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications. Nomad is easy to operate and scale and has native Consul and Vault integrations.
https://www.nomadproject.io/

task restart No response #20406

Closed chenjpu closed 2 months ago

chenjpu commented 5 months ago

Nomad version

1.7.6 or main branch

Operating system and Environment details

CentOS Linux release 7.9.2009 (Core)
Docker Engine 24.0.1

Issue

Multiple attempts to restart the task got no response. The first restart completed normally, but the later restart requests only logged "Restart Signaled" and nothing else happened:

Apr 16 15:05:41 hgc-webserver-2 nomad[19547]: 2024-04-16T15:05:41.869+0800 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=0ea8fd38-d837-9c01-c2dc-aa6f876ee517 task=app type="Restart Signaled" msg="User requested task to restart" failed=false
Apr 16 15:05:41 hgc-webserver-2 nomad[19547]: 2024-04-16T15:05:41.942+0800 [INFO]  client.driver_mgr.docker: stopped container: container_id=d071491064b92094bdbbd65ef626e7ae8ec5d460258327ca5bc01b714cfa41f8 driver=docker
Apr 16 15:05:41 hgc-webserver-2 nomad[19547]: 2024-04-16T15:05:41.947+0800 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=0ea8fd38-d837-9c01-c2dc-aa6f876ee517 task=app type=Terminated msg="Exit Code: 2, Exit Message: \"Docker container exited with non-zero exit code: 2\"" failed=false
Apr 16 15:05:41 hgc-webserver-2 nomad[19547]: 2024-04-16T15:05:41.950+0800 [INFO]  client.driver_mgr.docker.docker_logger: plugin process exited: driver=docker plugin=/usr/bin/nomad id=20349
Apr 16 15:05:41 hgc-webserver-2 nomad[19547]: 2024-04-16T15:05:41.957+0800 [INFO]  client.alloc_runner.task_runner: restarting task: alloc_id=0ea8fd38-d837-9c01-c2dc-aa6f876ee517 task=app reason="" delay=0s
Apr 16 15:05:41 hgc-webserver-2 nomad[19547]: 2024-04-16T15:05:41.957+0800 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=0ea8fd38-d837-9c01-c2dc-aa6f876ee517 task=app type=Restarting msg="Task restarting in 0s" failed=false
Apr 16 15:05:41 hgc-webserver-2 nomad[19547]: 2024-04-16T15:05:41.998+0800 [INFO]  client.driver_mgr.docker: created container: driver=docker container_id=09e567496e3fb7f8f4cdfe96d0d607acb31d1bfe51b3e749498276befad141ae
Apr 16 15:05:42 hgc-webserver-2 nomad[19547]: 2024-04-16T15:05:42.097+0800 [INFO]  client.driver_mgr.docker: started container: driver=docker container_id=09e567496e3fb7f8f4cdfe96d0d607acb31d1bfe51b3e749498276befad141ae
Apr 16 15:05:42 hgc-webserver-2 nomad[19547]: 2024-04-16T15:05:42.129+0800 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=0ea8fd38-d837-9c01-c2dc-aa6f876ee517 task=app type=Started msg="Task started by client" failed=false
Apr 16 15:05:52 hgc-webserver-2 nomad[19547]: 2024-04-16T15:05:52.722+0800 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=0ea8fd38-d837-9c01-c2dc-aa6f876ee517 task=app type="Restart Signaled" msg="User requested task to restart" failed=false
Apr 16 15:08:20 hgc-webserver-2 nomad[19547]: 2024-04-16T15:08:20.374+0800 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=0ea8fd38-d837-9c01-c2dc-aa6f876ee517 task=app type="Restart Signaled" msg="User requested task to restart" failed=false
Apr 16 15:13:08 hgc-webserver-2 nomad[19547]: 2024-04-16T15:13:08.429+0800 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=0ea8fd38-d837-9c01-c2dc-aa6f876ee517 task=app type="Restart Signaled" msg="User requested task to restart" failed=false



chenjpu commented 5 months ago

The service registration provider defaults to consul, but this cluster does not rely on Consul at all. When the provider is configured as nomad, the problem does not appear.
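
For reference, a minimal fragment showing just the relevant setting (a sketch; the full jobspec is in the next comment):

```hcl
service {
  # provider defaults to "consul" when left unset; setting it to "nomad"
  # uses Nomad's built-in service registration instead of Consul
  provider = "nomad"
}
```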

chenjpu commented 5 months ago

The following is the job's service configuration. Restarting the daprd task works fine; only when provider is not set does restarting the app task get no response.

job "xxxxx" {
  datacenters = ["dc1"]
  type        = "service"
  group "service" {
    task "app" {
      driver = "docker"
      config {
        image   = "alpine:3.19"
        command = "local/app"
      }
      service {
        name         = "${NOMAD_JOB_NAME}"
        port         = "app"
        address_mode = "host"
        provider     = "nomad" // Correct configuration
        check {
          name     = "health check"
          type     = "tcp"
          port     = "app"
          interval = "12s"
          timeout  = "6s"
          check_restart {
            limit           = 3
            grace           = "10s"
          }
        }
        check {
          name           = "ready check"
          type           = "http"
          port           = "http"
          path           = "/v1.0/healthz"
          interval       = "12s"
          timeout        = "6s"
          on_update      = "ignore"
        }
      }
      artifact {
        source = "..../app.tar.gz"
      }
    }

    task "daprd" {
      lifecycle {
        hook = "poststart"
        sidecar = true
      }
      driver = "docker"
      config {
        image   = "alpine:3.19"
        command = "local/daprd"
      }

      artifact {
        source = ".../daprd_min_linux_${attr.cpu.arch}.tar.gz"
      }

    }
  }
}
tgross commented 3 months ago

Hi @chenjpu! Apologies for the delay in responding to this. Let me verify I understand what you're saying here: when service.provider is left unset (so it defaults to Consul) in a cluster that has no Consul, restarting the app task gets no response, but with provider = "nomad" the restart works as expected.

Is that right?

I would not have expected the workload to run at all with service.provider unset (defaulting to Consul) if there's no Consul in your environment. Nomad adds a constraint that requires Consul if you've got a Consul service in the jobspec.
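
For reference, the implicit constraint Nomad adds is roughly equivalent to writing something like the following in the jobspec yourself (a sketch; the same version bound shows up in the scheduler output later in this thread):

```hcl
# Sketch of the constraint Nomad injects automatically when a jobspec
# contains a Consul-provider service; the user does not write this.
constraint {
  attribute = "${attr.consul.version}"
  operator  = "semver"
  value     = ">= 1.8.0"
}
```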

chenjpu commented 3 months ago

It has been quite a while, but I remember that setting this configuration to an empty string (provider = "") caused an exception.

tgross commented 3 months ago

Hi @chenjpu!

I think what I'm not making clear is that if you had provider = "" in your jobspec without Consul available, the job would not start at all. See this example jobspec:

```hcl
job "example" {
  group "group" {
    network {
      mode = "bridge"
      port "www" {
        to = 8001
      }
    }

    service {
      name     = "httpd-web"
      provider = ""
      port     = "www"
    }

    task "task" {
      driver = "docker"

      config {
        image   = "busybox:1"
        command = "httpd"
        args    = ["-vv", "-f", "-p", "8001", "-h", "/local"]
      }

      resources {
        cpu    = 50
        memory = 50
      }
    }
  }
}
```

I get a scheduling error like the following:

$ nomad job plan example.nomad.hcl
+ Job: "example"
+ Task Group: "group" (1 create)
  + Task: "task" (forces create)

Scheduler dry-run:
- WARNING: Failed to place all allocations.
  Task Group "group" (failed to place 1 allocation):
    * Constraint "${attr.consul.version} semver >= 1.8.0": 1 nodes excluded by filter

Job Modify Index: 0
To submit the job with version verification run:

nomad job run -check-index 0 example.nomad.hcl

When running the job with the check-index flag, the job will only be run if the
job modify index given matches the server-side version. If the index has
changed, another user has modified the job and the plan's results are
potentially invalid.

However, I suspect the provider is irrelevant here and that there's something else going on.

We emit the "User requested task to restart" event just before we actually try to restart the task (ref lifecycle.go#L81-L82), because it can take a while for the task to actually shut down. We wait for "prekill" behaviors and the task itself before we return any errors to the caller.

So there might be something that's blocking the shutdown here.

Next steps to debug:

chenjpu commented 3 months ago

Hi @tgross, I just set provider = "" again and it does report a "1 unplaced" scheduling error for the service. Unfortunately, the environment where this happened is production, so I'm sorry I can't reproduce the failure scenario.

tgross commented 3 months ago

@chenjpu ok I understand.

If this happens again, you can capture the logs of the running client with: nomad monitor -log-level=DEBUG -node-id=$node_id. It might also be helpful to capture the goroutine stack by making a request to the client agent's HTTP endpoint at /debug/pprof/goroutine?debug=2.
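
For example, something along these lines (placeholders for the node ID and client address; the pprof endpoint assumes the client's HTTP API is reachable on the default port 4646 and enable_debug is set on the agent):

$ nomad node status
$ nomad monitor -log-level=DEBUG -node-id=<node-id>
$ curl -s "http://<client-addr>:4646/debug/pprof/goroutine?debug=2" > goroutines.txt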

chenjpu commented 3 months ago

OK, I am happy to help with this problem

tgross commented 2 months ago

Doing a little issue cleanup. Going to close this out as unable to reproduce.