jippi closed this issue 9 years ago.
There is one: https://nomadproject.io/docs/drivers/exec.html
There are even QEMU and JVM drivers.
Every time I execute a job with the exec driver, my mount points get messed up: some go missing, and it remounts the data dir as well. Currently it doesn't seem like the job is actually executed either, as nomad.out is nowhere to be found.
Example:
job "example" {
  # region = "global"
  datacenters = ["online"]
  type = "batch"
  priority = 50

  update {
    # Stagger updates every 10 seconds
    stagger = "10s"
    # Update a single task at a time
    max_parallel = 1
  }

  group "demo" {
    # Control the number of instances of this group.
    # Defaults to 1
    # count = 1

    # Define a task to run
    task "date" {
      driver = "exec"
      config {
        command = "/bin/date > nomad.out"
      }
      resources {
        cpu = 500 # 500 MHz
      }
    }
  }
}
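One thing worth noting about the spec above: as far as I can tell the exec driver hands `command` straight to the executor rather than to a shell, so the `> nomad.out` redirection is likely never interpreted, which would explain the missing file. A sketch of a variant that wraps the command in a shell instead (the `args` list shown is an assumption; check it against the exec driver docs for your Nomad version):

```hcl
task "date" {
  driver = "exec"
  config {
    # Run via a shell so the "> nomad.out" redirection is interpreted.
    # This args usage is an assumption; verify against your Nomad version.
    command = "/bin/sh"
    args    = ["-c", "/bin/date > nomad.out"]
  }
  resources {
    cpu = 500 # 500 MHz
  }
}
```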
gives
==> Monitoring evaluation "c8587fb6-8043-d59c-db4e-32b9cce42997"
    Evaluation triggered by job "example"
    Allocation "b91276d6-cdd4-815b-48b6-d10d6377fe02" modified: node "ee0bd4c4-c1df-2ca1-c01d-7ac4c2138769", group "demo"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "c8587fb6-8043-d59c-db4e-32b9cce42997" finished with status "complete"
and makes a new mount point on the box
/dev/disk/by-uuid/2778df7a-f8b9-4dc0-b2ba-71d76aea0261 12G 7.5G 3.2G 71% /opt/nomad/data/alloc/b91276d6-cdd4-815b-48b6-d10d6377fe02/date/alloc
and unmounts devtmpfs and proc.
Before exec:
-> df -h
Filesystem                                              Size  Used Avail Use% Mounted on
udev                                                     10M     0   10M   0% /dev
tmpfs                                                   3.0G  216K  3.0G   1% /run
/dev/disk/by-uuid/2778df7a-f8b9-4dc0-b2ba-71d76aea0261   12G  7.4G  3.2G  70% /
tmpfs                                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                                   6.1G     0  6.1G   0% /run/shm
After exec:
-> df -h
df: `devtmpfs': No such file or directory
df: `proc': No such file or directory
Filesystem                                              Size  Used Avail Use% Mounted on
udev                                                     10M     0   10M   0% /dev
tmpfs                                                   3.0G  220K  3.0G   1% /run
/dev/disk/by-uuid/2778df7a-f8b9-4dc0-b2ba-71d76aea0261   12G  7.5G  3.2G  71% /
tmpfs                                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                                   6.1G     0  6.1G   0% /run/shm
/dev/disk/by-uuid/2778df7a-f8b9-4dc0-b2ba-71d76aea0261   12G  7.5G  3.2G  71% /opt/nomad/data/alloc/b91276d6-cdd4-815b-48b6-d10d6377fe02/date/alloc
Hey,
Would you mind describing your environment? The mounts are a bit odd because they get cleaned up when the allocation is destroyed. That happens hours after the job finishes, which in a production environment gives you time to look at the logs or ship them, but when running locally it is undesirable. We will work on that.
BTW the stdout and stderr logs can be found in the task's local directory, at /nomad_alloc_dir/alloc_id/task_name/local/task_name.stdout and task_name.stderr.
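Concretely, using the data dir and allocation ID from the df output above (substitute your own values), the stdout path for the `date` task can be built like this:

```shell
# Values below are the ones from this thread; substitute your own
# data_dir, allocation ID, and task name.
DATA_DIR=/opt/nomad/data
ALLOC_ID=b91276d6-cdd4-815b-48b6-d10d6377fe02
TASK=date
echo "${DATA_DIR}/alloc/${ALLOC_ID}/${TASK}/local/${TASK}.stdout"
```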
Hi,
I'm running 5 x nomad servers and 7 x nomad clients, with v0.1.0, each on its own dedicated VM.
Both the nomad servers and nomad clients are running inside a QEMU / KVM instance running Debian 7.8 with a custom 4.0.6 kernel.
Is it possible to make an executor that won't touch anything, but simply exec a command without any server modifications? :) I'm not looking to nomad for resource isolation, but simply for a distributed executor in place of supervisord, which has plenty of pain points for us currently.
example fstab
UUID=2778df7a-f8b9-4dc0-b2ba-71d76aea0261 / ext4 rw,noatime,nodiratime,discard,nouser_xattr,barrier=0,data=ordered,errors=remount-ro 0 1
UUID=71570a26-77ff-4b86-a26d-9531dd0b4f35 none swap sw 0 0
UUID=a779cc49-20be-4f23-bb5c-72d9f6713f54 /var/spool/postfix/ ext4 rw,noatime,nodiratime,discard,nouser_xattr,barrier=0,data=ordered,errors=remount-ro 0 0
tmpfs /tmp tmpfs rw,noatime,nodiratime,noatime,size=5g
From my understanding of the code, the exec driver relies on the executor to run your task within a per-task chroot. It makes some mounts available within this chroot, but most won't be. Which mounts are accessible is currently not configurable. I have the same requirement as you, and I'd be willing to attempt to write the code if there's consensus on a design that's likely to be accepted.
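For anyone finding this later: subsequent Nomad releases did add a client-level `chroot_env` option that makes the set of host paths mounted into the exec driver's chroot configurable. A sketch, with illustrative paths rather than a recommended set:

```hcl
client {
  # chroot_env (added in later Nomad releases) maps host paths to
  # destinations inside the per-task chroot used by the exec driver.
  # The paths below are illustrative only.
  chroot_env {
    "/bin"   = "/bin"
    "/etc"   = "/etc"
    "/lib"   = "/lib"
    "/lib64" = "/lib64"
    "/usr"   = "/usr"
  }
}
```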
+1 for simple exec (no cgroups, Docker or resource isolation),
for instance, to run the "puppet apply" command on the hosts.
Just as an update, this is something we plan to support.
We now have a raw_exec driver!
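For the "puppet apply" use case mentioned above, a minimal raw_exec job might look like the sketch below (job and task names are made up; note that raw_exec is disabled by default and historically had to be enabled on the client via the `driver.raw_exec.enable` option):

```hcl
job "puppet-apply" {
  datacenters = ["online"]
  type        = "batch"

  group "run" {
    task "apply" {
      # raw_exec runs the command directly on the host:
      # no chroot, no cgroups, no resource isolation.
      driver = "raw_exec"
      config {
        command = "/usr/bin/puppet"
        args    = ["apply"]
      }
    }
  }
}
```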
I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
hi,
a nice feature would be a very simple exec driver that just runs a command:
no fancy Docker, cgroups or resource isolation, just running a command.
It would allow me to replace supervisord with nomad, and get a lot of the features I'm missing from supervisord baked right into nomad.