Hardware Accelerated Decode (Nvidia) for Linux

Just in case anyone comes here later, I have a more complete version of this guide here: Plex HW acceleration in LXC container - anyone with success?

You can actually get it to work with Ubuntu in the container and Debian (Proxmox) on the host. It just requires a few tweaks to a lot of the guides.

You’ll need to make sure on your Proxmox side you have your kernel headers installed: apt-get install pve-headers-########-pve

Where ######## is whatever uname -r returns, e.g.: apt-get install pve-headers-4.4.35-2-pve
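
Since the pve-headers package names track the running kernel, you can usually shortcut that to:

apt-get install pve-headers-$(uname -r)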

The first step is to install whichever NVIDIA driver version you want on the Proxmox side. I usually look up the latest version here: https://github.com/keylase/nvidia-patch
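
For example, once you've downloaded the .run installer to the Proxmox host (the version number below is made up, use whichever build the nvidia-patch page points you at):

chmod +x NVIDIA-Linux-x86_64-535.146.02.run
./NVIDIA-Linux-x86_64-535.146.02.run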

If it asks you to blacklist the nouveau drivers, you will need to do that. Once that’s complete and you reboot, run nvidia-smi and make sure it shows you the NVIDIA info.
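
If you'd rather set up the nouveau blacklist by hand, it's roughly this (the filename is arbitrary):

echo "blacklist nouveau" > /etc/modprobe.d/blacklist-nouveau.conf
echo "options nouveau modeset=0" >> /etc/modprobe.d/blacklist-nouveau.conf
update-initramfs -u
reboot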

Next, you’ll want to run nvidia-persistenced (https://github.com/NVIDIA/nvidia-persistenced/blob/master/init/systemd/nvidia-persistenced.service.template) on your Proxmox side. This keeps the driver loaded, which makes sure all the /dev/nvidia* device nodes are created on boot and stick around. If you don’t run persistenced, those nodes will go away and the container won’t be able to use them.
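
If you want to skip fetching the template, a minimal unit along these lines should do the trick (a sketch adapted from that template; it assumes the installer put nvidia-persistenced in /usr/bin, adjust if yours landed elsewhere):

cat > /etc/systemd/system/nvidia-persistenced.service <<'EOF'
[Unit]
Description=NVIDIA Persistence Daemon

[Service]
Type=forking
ExecStart=/usr/bin/nvidia-persistenced
ExecStopPost=/bin/rm -rf /var/run/nvidia-persistenced

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now nvidia-persistenced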

Once that’s all working, run ls -l /dev/nvidia* to see what the device major numbers (used in the cgroup lines) are. This is in most of the guides. You should end up with 5 items; if you’re missing one, you’re not running persistenced:
crw-rw-rw- 1 root root 195, 0 Feb 19 11:09 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Feb 19 11:09 /dev/nvidiactl
crw-rw-rw- 1 root root 195, 254 Feb 19 11:09 /dev/nvidia-modeset
crw-rw-rw- 1 root root 511, 0 Feb 19 11:09 /dev/nvidia-uvm
crw-rw-rw- 1 root root 511, 1 Feb 19 11:09 /dev/nvidia-uvm-tools

You’ll add the two device majors (in the above case 195 and 511) as cgroup allow lines in the .conf file for that container, similar to this:
lxc.cgroup.devices.allow: c 195:* rwm
lxc.cgroup.devices.allow: c 511:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
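
For reference, that .conf file lives on the Proxmox host at /etc/pve/lxc/<CTID>.conf, e.g. for a container with ID 101:

nano /etc/pve/lxc/101.conf

(If you're on a newer Proxmox release that runs cgroup v2, I believe those allow lines need to be lxc.cgroup2.devices.allow instead.)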

At this point, boot up your container. I’ll typically do an ls -l /dev/nvidia* inside the container to check that I see the same 5 entries as on the Proxmox side. If I do, that’s a good sign that I’ve got the container set up right.

Then run the same NVIDIA installer you used on the Proxmox side, but add the flag that skips the kernel module build (--no-kernel-module; if that’s not it on your version, check --advanced-options). This will install all the NVIDIA tools in the container. It might ask you if you want to overwrite some libraries; I typically say no and haven’t had issues.
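
Inside the container that ends up looking roughly like this (same made-up version number as above; double-check the flag name against --advanced-options on your installer):

./NVIDIA-Linux-x86_64-535.146.02.run --no-kernel-module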

At this point, you should be able to run nvidia-smi here and get a similar screen to what’s on the Proxmox side.

Some quirks to be aware of:
If you install more things on the Proxmox side, the device major numbers (and therefore the cgroup lines) can change. If they do, you’ll need to update your .conf file. I typically check this after I’ve been spinning up lots of new containers/VMs, just in case.

If you’re checking nvidia-smi to see whether decodes/encodes are happening, you’ll need to do it on the Proxmox side, NOT the container side. I’ve noticed those won’t show up in the container-side nvidia-smi.
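
If you want to watch the encode/decode engines while a transcode is running, nvidia-smi's dmon mode gives you per-second utilization columns (again, run this on the Proxmox host):

nvidia-smi dmon -s u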

I may have missed something here, but that’s my process in a nutshell.
