Closed: jswager closed this issue 6 years ago
volume_options, labels, driver_config, and options should all be lists when submitted as JSON. For example:
"mounts": [
{
"source": "name-of-volume",
"target": "/path/in/container",
"volume_options": [
{
"labels": [
{
"foo": "bar"
}
],
"no_copy": false,
"driver_config": [
{
"name": "flocker",
"options": [
{
"foo": "bar"
}
]
}
]
}
],
"readonly": false
}
]
based on the sample mount HCL on the page you linked.
I've got to learn how to translate from HCL to JSON....
Thank you, this worked perfectly!
@jswager, I will confess that I frequently use the HCL version and then run nomad inspect
to extract the properly formatted JSON when I just get stuck on one of the more complicated transformations.
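The HCL-to-JSON workflow described above can be sketched as the following CLI session (the job name `example` and file name `example.nomad` are illustrative, not from the original thread):

```shell
# Register the job from its HCL definition...
nomad run example.nomad

# ...then dump the registered job back out as the API's JSON form,
# which shows the exact list-wrapped shapes the HTTP API expects.
nomad inspect example
```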
As someone brand new to HashiCorp, thank you for this comment on an issue closed almost four years ago @angrycub !
Nomad version
Nomad v0.7.0
Operating system and Environment details
Ubuntu 14.04, Docker 17.03.1-ce
Issue
Using https://www.nomadproject.io/docs/drivers/docker.html#mounts to create Docker-based mounts. When using this feature, the error "'mounts[0].volume_options': source data must be an array or slice, got map" is received. Removing "volume_options" results in a running job (without the volumes, of course).
I am using the REST API, rather than the Nomad command line. Following examples are in JSON due to the REST API.
The following Tasks configuration snippet will fail:
Reducing to this configuration will start the job:
Reproduction steps
Using any working Docker-based job, add the failing configuration and the failure should be seen. Moving to the working configuration should at least allow the job to start.
Nomad Server logs (if appropriate)
From the allocations log:
"DriverError": "failed to initialize task \"task-generic-splat-task-noncanary-641d06ddf9eb49909485a9b516097a6a\" for alloc \"84148178-d740-7908-714f-3335fd9b815d\": 1 error(s) decoding:\n\n* 'mounts[0].volume_options': source data must be an array or slice, got map",