Plex HW acceleration in LXC container - anyone with success?

Sorry to revive this old topic, but I'm having a lot of trouble enabling my NVIDIA card in my Ubuntu container. Maybe you guys can help.

I currently have an RTX 2060 installed in my server running Proxmox (kernel 4.15.18-11-pve). My container is running Ubuntu 16.04.

My output for ls -l /dev/dri is:

total 0
crw-rw---- 1 root video 226, 0 Mar 17 14:10 card0
crw-rw---- 1 root video 226, 1 Mar 18 07:27 card1
crw-rw---- 1 root video 226, 128 Mar 18 07:27 renderD128

My output for ls -l /dev/nvidia* is:

crw-rw-rw- 1 root root 195, 0 Mar 17 14:24 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Mar 17 14:24 /dev/nvidiactl

The lines I've added to my /etc/pve/lxc/101.conf (101 is my Plex container):

lxc.cgroup.devices.allow: c 226:* rwm
lxc.cgroup.devices.allow: c 195:* rwm
lxc.cgroup.devices.allow: c 29:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.autodev: 1
lxc.hook.autodev: /var/lib/lxc/101/mount_hook.sh
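
One gotcha worth mentioning: the autodev hook script has to be executable, otherwise LXC can't run it:

chmod +x /var/lib/lxc/101/mount_hook.sh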

Then the code in my /var/lib/lxc/101/mount_hook.sh (I'm not sure if I have to mknod the /dev/nvidia* nodes since they showed up anyway; also, one of the cards might be a Matrox chip that drives the VGA output on my motherboard, and I'm not sure which one is the NVIDIA card, so I put them both in here):

#!/bin/sh
# Create the DRI and framebuffer device nodes inside the container's /dev
mkdir -p ${LXC_ROOTFS_MOUNT}/dev/dri
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/card0 c 226 0
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/card1 c 226 1
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/renderD128 c 226 128
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/fb0 c 29 0
# The nvidia nodes are already bind-mounted via 101.conf, so these stay commented out
#mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/nvidia0 c 195 0
#mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/nvidiactl c 195 255
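
To check that the hook actually did its job after a container restart, the device nodes can be listed from the host (pct exec runs a command inside the container):

pct exec 101 -- ls -l /dev/dri /dev/nvidia0 /dev/nvidiactl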

I've installed the driver twice. First on the host, where I first had to install the pve kernel headers so the installer could build the kernel module. Then inside the container, this time with the --no-kernel-module flag, since the container shares the host's kernel and only needs the userspace libraries.
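
For reference, the rough sequence was something like this (adjust the .run filename to whatever driver version you downloaded; mine matches the nvidia-smi output below):

# on the Proxmox host
apt install pve-headers-$(uname -r)
./NVIDIA-Linux-x86_64-418.43.run

# inside the container (shares the host kernel, so no kernel module build)
./NVIDIA-Linux-x86_64-418.43.run --no-kernel-module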

If I run nvidia-smi in my container, I see the output I should see:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.43       Driver Version: 418.43       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce RTX 2060    Off  | 00000000:0A:00.0 Off |                  N/A |
| 23%   42C    P0     1W / 160W |      0MiB /  5904MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
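
(Side note for anyone testing the same thing: nvidia-smi showing the card isn't proof that transcoding works. Start a transcode in Plex and keep an eye on the process list, e.g. with:

watch -n 1 nvidia-smi

A Plex Transcoder process should appear in the Processes table when hardware transcoding is actually in use.)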

and the card, render device and nvidia device nodes are all present in the container, yet Plex still doesn't hardware-transcode. I'm not sure what I'm doing wrong.
The sources I used and combined to hopefully make this work in my use case:

https://bradford.la/2016/GPU-FFMPEG-in-LXC

It would be awesome if someone here could help me. The things I think could be going wrong:

  • Running an Ubuntu container on a Debian host
  • Not all nvidia device nodes are present on my host (e.g. nvidia-uvm is missing); see the quick check below
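
A quick way to check which nvidia kernel modules the host actually has loaded (the /dev/nvidia-uvm node only appears once the nvidia_uvm module is loaded):

lsmod | grep nvidia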

EDIT:
I have found a solution to the problem with some help on the forums:

  1. Have nvidia-persistenced running as a service from boot, using the install.sh script that goes with this unit template: https://github.com/NVIDIA/nvidia-persistenced/blob/master/init/systemd/nvidia-persistenced.service.template
  2. Have the following kernel modules loaded via /etc/modules-load.d/modules.conf:

nvidia
nvidia_uvm

After this, all devices should show up with ls -l /dev/nvidia* and you can pass them through to the container using the guides given above.
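
For completeness, once /dev/nvidia-uvm and /dev/nvidia-uvm-tools exist on the host, they can be passed through the same way as the other nodes. Note that the uvm devices get a dynamically assigned major number, so check it with ls -l /dev/nvidia-uvm first; the 243 below is just a placeholder, use whatever major number your system shows:

lxc.cgroup.devices.allow: c 243:* rwm
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file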