bpg / terraform-provider-proxmox

Terraform Provider for Proxmox
https://registry.terraform.io/providers/bpg/proxmox
Mozilla Public License 2.0

Support for LXC config such as GPU passthrough options, etc #256

Open Tumetsu opened 1 year ago

Tumetsu commented 1 year ago

I'm in a pickle with automating LXC container creation where the resulting container needs access to the host's disk via a bind mount. Additionally, I'm trying to set up a Jellyfin media streaming container, which also needs iGPU passthrough. To get these things to work, the /etc/pve/lxc/<vm_id>.conf file requires the following extra lines:

# Mount point
mp0: /mnt/media/media,mp=/mnt/media
# GPU passthrough
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file

Would it be possible to add at least the bind mount definition to the provider, since that is an option I need in multiple containers?

Alternatively, how would you approach automating this kind of configuration? For now I suppose I could live with some non-declarative setup method.

In the meantime, I managed to cobble together a hackish workaround that edits the config file via a provisioner:

locals {
  vm_id = 2000
  config_file_path = "/etc/pve/lxc/${local.vm_id}.conf"
  media_mount_point = "/mnt/media/media,mp=/mnt/media"
  # TODO: ATTENTION! If editing these values, you should manually force the replacement of the Proxmox container!
  # Otherwise the config file will get duplicate lines of config because of how the grep works. This could be
  # fixed by some clever sed command though...
  commands = [
    "echo 'Wait 10 seconds to make sure the container has been created'",
    "sleep 10",
    "echo 'Apply mount point option to the container config in ${local.config_file_path}'",
    "grep -qxF 'mp0: ${local.media_mount_point}' ${local.config_file_path} || echo 'mp0: ${local.media_mount_point}' >> ${local.config_file_path}",
    "echo 'Apply iGPU passthrough options to the container config in ${local.config_file_path}'",
    "grep -qxF 'lxc.cgroup2.devices.allow: c 226:0 rwm' ${local.config_file_path} || echo 'lxc.cgroup2.devices.allow: c 226:0 rwm' >> ${local.config_file_path}",
    "grep -qxF 'lxc.cgroup2.devices.allow: c 226:128 rwm' ${local.config_file_path} || echo 'lxc.cgroup2.devices.allow: c 226:128 rwm' >> ${local.config_file_path}",
    "grep -qxF 'lxc.cgroup2.devices.allow: c 29:0 rwm' ${local.config_file_path} || echo 'lxc.cgroup2.devices.allow: c 29:0 rwm' >> ${local.config_file_path}",
    "grep -qxF 'lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file' ${local.config_file_path} || echo 'lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file' >> ${local.config_file_path}",
    "grep -qxF 'lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir' ${local.config_file_path} || echo 'lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir' >> ${local.config_file_path}",
    "grep -qxF 'lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file' ${local.config_file_path} || echo 'lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file' >> ${local.config_file_path}",
    "pct reboot ${local.vm_id}"
  ]
}

resource "proxmox_virtual_environment_container" "jellyfin" {
... 
}

resource "null_resource" "set_gpu_passthrough_and_mount_point_on_host" {
  depends_on = [proxmox_virtual_environment_container.jellyfin]
  # Hack to trigger a rerun every time the commands are edited.
  triggers = {
    always_run = jsonencode(local.commands)
  }
  connection {
    type        = "ssh"
    user        = var.proxmox_host_root_username
    private_key = file("~/.ssh/id_rsa")
    host        = var.proxmox_host_ip
  }
  provisioner "remote-exec" {
    inline = local.commands
  }
}
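
The TODO note about a "clever sed command" could be addressed by treating the extra lines as a managed block: delete the previous block between marker comments, then append the current one, so editing the values never leaves stale lines behind. A rough sketch, untested on an actual PVE host (marker text is arbitrary; GNU sed assumed):

```shell
# Replace the managed block in an LXC config file idempotently.
# Usage: update_managed_block <conf-file> <line>...
update_managed_block() {
  conf="$1"; shift
  # Remove any previous managed block, markers included (GNU sed -i).
  sed -i '/^# BEGIN terraform-managed$/,/^# END terraform-managed$/d' "$conf"
  # Append the current block with fresh markers.
  {
    printf '%s\n' '# BEGIN terraform-managed'
    printf '%s\n' "$@"
    printf '%s\n' '# END terraform-managed'
  } >> "$conf"
}
```

Each grep-append command in local.commands could then collapse into a single call such as update_managed_block /etc/pve/lxc/2000.conf 'mp0: /mnt/media/media,mp=/mnt/media' 'lxc.cgroup2.devices.allow: c 226:0 rwm' and so on, and editing a value would no longer require forcing a container replacement.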
github-actions[bot] commented 1 year ago

Marking this issue as stale due to inactivity in the past 180 days. This helps us focus on the active issues. If this issue is reproducible with the latest version of the provider, please comment. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!

shayne commented 1 year ago

Perhaps general support for adding arbitrary lines to the container (or VM!) config files would be better. It would also allow other entries, such as cgroup definitions.

romner-set commented 11 months ago

I isolate nearly all of my LXCs and VMs into one half of the CPU to run a VFIO setup on the other half, which requires this line in every LXC config: lxc.cgroup2.cpuset.cpus: 8-15,24-31. Being able to add arbitrary lines into LXC and VM config files would be incredibly useful.

samuel-emrys commented 9 months ago

I assume that the ability to issue PUT, POST or GET requests on the container config would be required in order to set arbitrary parameters - this provider is presumably limited by the interface provided by Proxmox, right?

I'm looking to set up some idmap configuration to manage my bind mounts a bit more granularly (lxc.idmap), but I can't see anything in the API that would allow this - unless I'm missing something? The candidate endpoints that I thought might satisfy this need were:

Edit: or, is this possible via mp[n] mountoptions=<opt[;opt...]> in POST /nodes/{node}/lxc?

Edit2: no, disregard. This refers to noatime, nodev, nosuid and noexec as per this patch

+my $mount_option = qr/(noatime|nodev|nosuid|noexec)/;
+
+sub get_mount_options {
+    return $mount_option;
+}
marlop352 commented 5 months ago

PUT https://pve.proxmox.com/pve-docs/api-viewer/index.html#/nodes/{node}/lxc/{vmid}/config

More specifically, this attribute would be enough for GPU passthrough to run Jellyfin for me:

dev[n] string [[path=]<Path>] [,gid=<integer>] [,mode=<Octal access mode>] [,uid=<integer>]
Device to pass through to the container

In the GUI I would go to the container's "Resources" section, then "Add" and select "Device passthrough", with the following settings (values for Ubuntu 24). The screenshots I attached showed the edit dialog for already-added devices, but the "Add" screen looks the same.

That would result in the following lines in the config file:

dev0: /dev/dri/card0,gid=44
dev1: /dev/dri/renderD128,gid=993
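
Until the provider exposes dev[n], the same entries can presumably be applied through the API, e.g. with pvesh on the host. An untested sketch (node name and vmid are placeholders; this maps to the PUT /nodes/{node}/lxc/{vmid}/config call above):

```shell
# Untested sketch: set device passthrough entries over the PVE API.
# "pve" and "2000" are illustrative node/vmid values.
pvesh set /nodes/pve/lxc/2000/config \
  -dev0 /dev/dri/card0,gid=44 \
  -dev1 /dev/dri/renderD128,gid=993
```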

Bind mounts seem to have been solved in https://github.com/bpg/terraform-provider-proxmox/pull/394
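
For anyone landing here for the bind-mount part: since that PR, a mount like the mp0 line above can be declared on the resource itself. A minimal sketch (attribute names reflect my reading of the provider's mount_point block; check the current provider docs before relying on them):

```hcl
resource "proxmox_virtual_environment_container" "jellyfin" {
  # node_name, vm_id, disk, network, etc. omitted

  # Host directory bind-mounted into the container; intended as the
  # declarative equivalent of "mp0: /mnt/media/media,mp=/mnt/media".
  mount_point {
    volume = "/mnt/media/media" # path on the Proxmox host
    path   = "/mnt/media"       # mount path inside the container
  }
}
```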

Cadair commented 3 months ago

My main use case for this is uid/gid mapping for my mount points, e.g. lxc.idmap: u 0 100000 65535 and so on.
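
For context, an idmap block in /etc/pve/lxc/<vmid>.conf uses the standard LXC syntax below. The values are illustrative (uid/gid 1000 passed through unshifted, everything else shifted by 100000); the host's /etc/subuid and /etc/subgid must also allow the mappings:

```text
# Map container uids/gids 0-999 to host 100000-100999
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
# Pass uid/gid 1000 straight through, unshifted
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
# Map the remaining container ids 1001-65535 to host 101001-165535
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
```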