Plex HW acceleration in LXC container - anyone with success?

@4Me
@ChuckPA

In Post 12, the corrections are valid. Sorry for the typos…

I do not see threads and posts as you do.

Could you quote the text that is correct?

@ChuckPA

The one from 4Me

@4Me
@Johnnyh1975

Edits applied. Please verify.

@ChuckPA
lxc.autodev.hook additionally needs to be changed to lxc.hook.autodev

One last check?

@ChuckPA

From my side that's it. Thanks so much!

So, I've been digging all afternoon, trying to find a way to pass through the USB TV tuner that I have to my LXC Plex container on Proxmox. It took finding this guide and adapting it to my device, but I think I got it working; I'm doing a channel scan with it now. Will post a modified version of the guide covering the Hauppauge USB tuner if it all works properly.
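In the meantime, the rough shape of it is allowing the DVB character devices and bind-mounting them into the container. This is only a sketch, assuming a standard Linux DVB tuner under /dev/dvb (character major 212); check ls -l /dev/dvb on the host and adjust the container ID and majors to match.

# additions to /etc/pve/lxc/<CTID>.conf (sketch; verify device paths and majors on your host)
lxc.cgroup.devices.allow: c 212:* rwm
lxc.mount.entry: /dev/dvb dev/dvb none bind,optional,create=dir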

Just for clarification, do you need to have Plex pass for hw acceleration? Also if you are using a Xeon server will you still have an Intel card to do the pass through?

@xeroiv said:
Just for clarification, do you need to have Plex pass for hw acceleration? Also if you are using a Xeon server will you still have an Intel card to do the pass through?

Yes. HW acceleration is a Plex Pass feature. You also need an Intel QSV-capable CPU, or other working GPU acceleration with a graphics head attached (getting that working is up to you).
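If you are unsure whether your Xeon exposes an iGPU that QSV can use, a quick check on the host is to look for a render node and query it with vainfo. A sketch only; the package name varies by distro (vainfo or libva-utils):

ls -l /dev/dri                # expect card0 and renderD128 if an iGPU is present
apt install vainfo            # or libva-utils, depending on the distro
vainfo                        # should list the VAAPI profiles the iGPU supports

If /dev/dri is empty on the host, there is nothing to pass through and you would need a discrete GPU instead.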

For anyone who ends up here from a search, the forum link above is dead. The correct link: https://forums.plex.tv/t/pms-installation-guide-when-using-a-proxmox-5-1-lxc-container/219728

I use LXC on Ubuntu with LXD, and needed a slightly different procedure to get HW acceleration working. Not claiming this is the most elegant solution; I’m sure it probably isn’t, but it worked for me.

  1. Look in /dev/dri on the host (not inside the container) and get the names of your card and render devices. Mine were card0 and renderD128.

  2. Add these devices to your LXC. My container is named “plex”.

lxc config device add plex /dev/dri/card0 unix-char path=/dev/dri/card0
lxc config device add plex /dev/dri/renderD128 unix-char path=/dev/dri/renderD128

  3. Create a systemd rc.local service to chmod these devices on startup, then enable and run it (a sketch of the unit file follows the script below).

My /etc/rc.local script:

#!/bin/sh -e
/bin/chmod 666 /dev/dri/card*
/bin/chmod 666 /dev/dri/render*
exit 0
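
For reference, a minimal rc-local unit, in case your distribution does not already ship one. This is a sketch; the path and unit name are assumptions, so adapt as needed:

# /etc/systemd/system/rc-local.service (sketch)
[Unit]
Description=Run /etc/rc.local at boot
ConditionFileIsExecutable=/etc/rc.local
After=network.target

[Service]
Type=oneshot
ExecStart=/etc/rc.local
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

Enable and start it with systemctl enable --now rc-local, and make sure the script is executable (chmod +x /etc/rc.local).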
  4. Play a video in Plex Web and verify that it’s hardware-accelerated. Search for “How can I tell when hardware acceleration is being used?” in this link.

https://support.plex.tv/articles/115002178853-using-hardware-accelerated-streaming/

Note: You may need to allow the cgroup devices as well. I set lxc.cgroup.devices.allow=a on my containers, which basically turns off all device fencing. I also run this in a privileged container.
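For an LXD-managed container like the one above, roughly the same thing can be done through lxc config. A sketch, assuming the container is still named "plex":

lxc config set plex security.privileged true
lxc config set plex raw.lxc "lxc.cgroup.devices.allow = a"
lxc restart plex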

I avoid using any external scripts by using lxc.hook.autodev to call all the commands:

lxc.cgroup.devices.allow = c 226:0 rwm
lxc.cgroup.devices.allow = c 226:128 rwm
lxc.cgroup.devices.allow = c 29:0 rwm
lxc.hook.autodev: sh -c "mkdir -p ${LXC_ROOTFS_MOUNT}/dev/dri; mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/card0 c 226 0; mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/renderD128 c 226 128; mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/fb0 c 29 0"

Sorry to revive this old topic, but I'm having a lot of trouble enabling my nvidia card in my Ubuntu container. Maybe you guys can help.

I currently have an RTX 2060 installed in my server running Proxmox (4.15.18-11-pve). My container is running Ubuntu 16.04.

My output for ls -l /dev/dri is:

total 0
crw-rw---- 1 root video 226, 0 Mar 17 14:10 card0
crw-rw---- 1 root video 226, 1 Mar 18 07:27 card1
crw-rw---- 1 root video 226, 128 Mar 18 07:27 renderD128

My output for ls -l /dev/nvidia* is:

crw-rw-rw- 1 root root 195, 0 Mar 17 14:24 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Mar 17 14:24 /dev/nvidiactl

The lines I’ve added to my /etc/pve/lxc/101.conf (101 is my Plex container)

lxc.cgroup.devices.allow: c 226:* rwm
lxc.cgroup.devices.allow: c 195:* rwm
lxc.cgroup.devices.allow: c 29:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.autodev: 1
lxc.hook.autodev: /var/lib/lxc/101/mount_hook.sh

Then the code in my /var/lib/lxc/101/mount_hook.sh (not sure if I have to create the /dev/nvidia nodes since they showed up anyway; also, one of the cards might be a Matrox card that drives the VGA output on my motherboard, and since I'm not sure which one is the nvidia card, I put them both in here):

mkdir -p ${LXC_ROOTFS_MOUNT}/dev/dri
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/card0 c 226 0
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/card1 c 226 1
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/renderD128 c 226 128
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/fb0 c 29 0
#mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/nvidia0 c 195 0
#mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/nvidiactl c 195 255

I've installed both drivers, first on my host, where I first had to install some kernel headers for pve. After this I installed the nvidia drivers, once with and once without kernel headers (the --no-kernel-headers flag).

If I run nvidia-smi in my container, I see the output I should see:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.43       Driver Version: 418.43       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce RTX 2060    Off  | 00000000:0A:00.0 Off |                  N/A |
| 23%   42C    P0     1W / 160W |      0MiB /  5904MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

and the card, render device and nvidia device nodes are all present in the container. I'm not sure what I'm doing wrong.
The sources I used and kind of combined to hopefully make it work in my use case:

https://bradford.la/2016/GPU-FFMPEG-in-LXC

Would be awesome if someone here could help me. The things I think could be going wrong are:

  • Running an Ubuntu container on a Debian host
  • Not all nvidia device nodes being visible on my host (like nvidia-uvm)

EDIT:
I have found a solution to the problem with some help on the forums:

  1. Have nvidia-persistenced running as a service from boot using the install.sh script from: https://github.com/NVIDIA/nvidia-persistenced/blob/master/init/systemd/nvidia-persistenced.service.template
  2. Have the following modules loaded via /etc/modules-load.d/modules.conf:

nvidia
nvidia_uvm

After this, all devices should show up with ls -l /dev/nvidia* and you can add them using the guides given above.
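If you want to confirm it without rebooting, the modules can be loaded by hand first; the modules-load.d entry above is what makes it persistent. A quick sketch:

modprobe nvidia
modprobe nvidia_uvm
nvidia-modprobe -c0 -u        # creates /dev/nvidia-uvm* if udev has not already
ls -l /dev/nvidia*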

Note: I don't use this setup anymore; sharing my GPU just wasn't worth it with my latest hardware. In a later post tijmenvn mentions needing a different devices.allow line with newer versions of Proxmox, so please be aware this may no longer work without modifications.
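For anyone on a newer Proxmox release that runs containers on cgroup v2, the allow lines likely need the lxc.cgroup2 prefix instead. A sketch of what the equivalent entries might look like; verify against your Proxmox version:

lxc.cgroup2.devices.allow: c 226:* rwm
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 29:* rwm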

I recently got the “opportunity” to reinstall my Proxmox server, which houses my Plex container. I kept notes this time and wanted to post them here in case they help anyone else.

Note: All of the following commands should be run as root

Install a fresh Proxmox baseline and update the Proxmox sources
Per the guide here: https://www.servethehome.com/proxmox-ve-5-initial-installation-checklist/
or here: Proxmox VE 6 Initial Installation Checklist - ServeTheHome

nano /etc/apt/sources.list
- Add: deb http://download.proxmox.com/debian stretch pve-no-subscription

nano /etc/apt/sources.list.d/pve-enterprise.list
- Comment Out: # deb https://enterprise.proxmox.com/debian stretch pve-enterprise

Remove the Proxmox Nag Screen
Per guide here: Remove Proxmox Subscription Notice (Tested to 8.0) | John's Computer Services
sed -i.bak "s/data.status !== 'Active'/false/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js && systemctl restart pveproxy.service

Update Proxmox to the latest packages
apt update && apt dist-upgrade -y

Reboot after the upgrade
reboot

Install sudo, git, gcc, make and header files
apt-get install sudo git gcc make pve-headers-$(uname -r)

Install latest nvidia driver
Using list here as reference: GitHub - keylase/nvidia-patch: This patch removes restriction on maximum number of simultaneous NVENC video encoding sessions imposed by Nvidia to consumer-grade GPUs.
mkdir /opt/nvidia
cd /opt/nvidia
wget https://download.nvidia.com/XFree86/Linux-x86_64/418.56/NVIDIA-Linux-x86_64-418.56.run
chmod +x NVIDIA-Linux-x86_64-418.56.run
./NVIDIA-Linux-x86_64-418.56.run --no-questions --ui=none --disable-nouveau

The driver will create /etc/modprobe.d/nvidia-installer-disable-nouveau.conf and disable the nouveau driver. Verify this by checking the contents of the created .conf file:
more /etc/modprobe.d/nvidia-installer-disable-nouveau.conf

#generated by nvidia-installer
blacklist nouveau
options nouveau modeset=0

Reboot to disable nouveau drivers
reboot

Run the nvidia installer again, which will now complete the driver install.
/opt/nvidia/NVIDIA-Linux-x86_64-418.56.run --no-questions --ui=none

Check that nvidia-smi works now
nvidia-smi

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.56       Driver Version: 418.56       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 980 Ti  Off  | 00000000:43:00.0 Off |                  N/A |
| 23%   61C    P0    72W / 275W |      0MiB /  6075MiB |      2%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Create/update the modules.conf file for boot
nano /etc/modules-load.d/modules.conf

# /etc/modules-load.d/modules.conf
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
nvidia
nvidia_uvm

Generate the initramfs image with the new modules.conf
update-initramfs -u

Create rules to load the drivers on boot for both nvidia and nvidia_uvm
nano /etc/udev/rules.d/70-nvidia.rules

# /etc/udev/rules.d/70-nvidia.rules
# Create /dev/nvidia0, /dev/nvidia1 … and /dev/nvidiactl when the nvidia module is loaded
KERNEL=="nvidia", RUN+="/bin/bash -c '/usr/bin/nvidia-smi -L'"
#
# Create the CUDA node when nvidia_uvm CUDA module is loaded
KERNEL=="nvidia_uvm", RUN+="/bin/bash -c '/usr/bin/nvidia-modprobe -c0 -u'"

Install nvidia-persistenced (the NVIDIA driver persistence daemon, GitHub - NVIDIA/nvidia-persistenced)
git clone https://github.com/NVIDIA/nvidia-persistenced.git
cd nvidia-persistenced/init
./install.sh

Checking for common requirements...
  sed found in PATH?  Yes
  useradd found in PATH?  Yes
  userdel found in PATH?  Yes
  id found in PATH?  Yes
Common installation/uninstallation supported

Creating sample System V script... done.
Creating sample systemd service file... done.
Creating sample Upstart service file... done.

Checking for systemd requirements...
  /usr/lib/systemd/system directory exists?  No
  /etc/systemd/system directory exists?  Yes
  systemctl found in PATH?  Yes
systemd installation/uninstallation supported

Installation parameters:
  User  : nvidia-persistenced
  Group : nvidia-persistenced
  systemd service installation path : /etc/systemd/system

Adding user 'nvidia-persistenced' to group 'nvidia-persistenced'... done.
Installing sample systemd service nvidia-persistenced.service... done.
Enabling nvidia-persistenced.service... done.
Starting nvidia-persistenced.service... done.

systemd service successfully installed.

Double check that the service is running and enabled
systemctl status nvidia-persistenced

nvidia-persistenced.service - NVIDIA Persistence Daemon
   Loaded: loaded (/etc/systemd/system/nvidia-persistenced.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2019-04-04 13:45:44 CDT; 38s ago
  Process: 13356 ExecStart=/usr/bin/nvidia-persistenced --user nvidia-persistenced (code=exited, status=0/SUCCESS)
 Main PID: 13362 (nvidia-persiste)
    Tasks: 1 (limit: 19660)
   Memory: 996.0K
      CPU: 262ms
   CGroup: /system.slice/nvidia-persistenced.service
           └─13362 /usr/bin/nvidia-persistenced --user nvidia-persistenced

Apr 04 13:45:44 ripper systemd[1]: Starting NVIDIA Persistence Daemon...
Apr 04 13:45:44 ripper nvidia-persistenced[13362]: Started (13362)
Apr 04 13:45:44 ripper systemd[1]: Started NVIDIA Persistence Daemon.

Reboot and verify all nvidia devices come up
reboot
ls -l /dev/nv*

crw-rw-rw- 1 root root 195,   0 Apr  4 13:49 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Apr  4 13:49 /dev/nvidiactl
crw-rw-rw- 1 root root 195, 254 Apr  4 13:49 /dev/nvidia-modeset
crw-rw-rw- 1 root root 235,   0 Apr  4 13:49 /dev/nvidia-uvm
crw-rw-rw- 1 root root 235,   1 Apr  4 13:49 /dev/nvidia-uvm-tools

Patch the nvidia driver to remove the max encoding session limit
cd /opt/nvidia
git clone https://github.com/keylase/nvidia-patch.git
cd nvidia-patch
./patch.sh

Detected nvidia driver version: 418.56
Attention! Backup not found. Copy current libnvcuvid.so to backup.
751706615c652c4725d48c2e0aaf53be1d9553d5  /opt/nvidia/libnvidia-encode-backup/libnvcuvid.so.418.56
ee47ac207a3555adccad593dbcda47d8c93091c0  /usr/lib/x86_64-linux-gnu/libnvcuvid.so.418.56
Patched!

In Proxmox, create a new LXC container for Plex (I use the ubuntu 18.10 standard container template) but do not start the container yet. First edit the container conf file (in my case, container 100) and add the lxc cgroup and mount entries to the end of the .conf file:
nano /etc/pve/lxc/100.conf

lxc.cgroup.devices.allow: c 195:* rwm
lxc.cgroup.devices.allow: c 235:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file

Note 1: the two device major numbers above (195 and 235) come from the ls -l /dev/nv* output earlier

Note 2: In some cases the major number for -uvm and -uvm-tools will change on reboot. In my case it toggles between 235 and 511. If you find this happening, add all the numbers you see occurring to the allow list to prevent having to change the .conf file constantly. For example, mine currently says:
lxc.cgroup.devices.allow: c 195:* rwm
lxc.cgroup.devices.allow: c 235:* rwm
lxc.cgroup.devices.allow: c 511:* rwm
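
A quick way to see which major numbers the nvidia modules currently have on the host, so you know what to allow (just a convenience check):

ls -l /dev/nvidia-uvm         # the number before the comma is the major
grep nvidia /proc/devices     # majors registered by the nvidia modules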

Start the LXC container and download the same nvidia driver to the container.
Run the installer, but do not install the kernel module (the container shares the host's kernel, so only the userspace driver is needed). The installer asks me whether I want to install libglvnd because I have an incomplete install; I have always chosen “Don’t install libglvnd”. Everything else I answer yes to, and I ignore the warnings.
mkdir /opt/nvidia
cd /opt/nvidia
wget https://download.nvidia.com/XFree86/Linux-x86_64/418.56/NVIDIA-Linux-x86_64-418.56.run
chmod +x NVIDIA-Linux-x86_64-418.56.run
./NVIDIA-Linux-x86_64-418.56.run --no-kernel-module

Run nvidia-smi to check the install, also check that the nvidia devices are present.
nvidia-smi

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.56       Driver Version: 418.56       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 980 Ti  Off  | 00000000:43:00.0 Off |                  N/A |
|  0%   60C    P8    35W / 275W |      1MiB /  6075MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

ls -l /dev/nv*

crw-rw-rw- 1 nobody nogroup 195, 254 Apr  4 19:20 /dev/nvidia-modeset
crw-rw-rw- 1 nobody nogroup 235,   0 Apr  4 19:20 /dev/nvidia-uvm
crw-rw-rw- 1 nobody nogroup 235,   1 Apr  4 19:20 /dev/nvidia-uvm-tools
crw-rw-rw- 1 nobody nogroup 195,   0 Apr  4 19:20 /dev/nvidia0
crw-rw-rw- 1 nobody nogroup 195, 255 Apr  4 19:20 /dev/nvidiactl

Install the latest version of plex in the container
wget https://downloads.plex.tv/plex-media-server-new/1.15.3.858-fbfb913f7/debian/plexmediaserver_1.15.3.858-fbfb913f7_amd64.deb
dpkg -i plexmediaserver_1.15.3.858-fbfb913f7_amd64.deb

Selecting previously unselected package plexmediaserver.
(Reading database ... 16954 files and directories currently installed.)
Preparing to unpack plexmediaserver_1.15.3.858-fbfb913f7_amd64.deb ...
Unpacking plexmediaserver (1.15.3.858-fbfb913f7) ...
Setting up plexmediaserver (1.15.3.858-fbfb913f7) ...
Created symlink /etc/systemd/system/multi-user.target.wants/plexmediaserver.service -> /lib/systemd/system/plexmediaserver.service.
Processing triggers for libc-bin (2.28-0ubuntu1) ...
Processing triggers for mime-support (3.60ubuntu1) ...

Log into plex and enable hardware acceleration

Note: To verify that Plex is kicking off a hardware-accelerated transcode, you will need to call nvidia-smi on the host, not from within the LXC container. From there, you should see something like this:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.09       Driver Version: 430.09       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 980 Ti  On   | 00000000:42:00.0 Off |                  N/A |
| 28%   60C    P2    78W / 275W |    145MiB /  6075MiB |      3%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0    123699      C   /usr/lib/plexmediaserver/Plex Transcoder     132MiB |
+-----------------------------------------------------------------------------+

Wow man, this is freaking awesome work. If only this post had existed 2 weeks earlier. This is literally everything I'd also done, although I wouldn't be able to reproduce it without undoing everything I did and redoing it like you've done. Again, awesome work. Definitely gonna bookmark this post just in case I need it in the future!

This “opportunity”, did you have any choice in the matter? Or did you simply want to find out the right steps to implement this once and for all?

EDIT:
One parameter you could pass during installation of the Nvidia driver on the host side is --disable-nouveau. That way, people following this guide are less prone to checking the wrong box during installation of the Nvidia drivers :wink:

This “opportunity”, did you have any choice in the matter? Or did you simply want to find out the right steps to implement this once and for all?

My CPU crapped out and got an RMA, and while the system did ‘boot’ into the original install, it had some hiccups, so I wanted to start over. It seemed like a good idea at the time. I had some notes, but I certainly left a few steps out that I had to re-remember. It wasn’t a fun memory game. :rofl:

One parameter you could pass during installation of the Nvidia driver on the host side is --disable-nouveau. That way, people following this guide are less prone to checking the wrong box during installation of the Nvidia drivers :wink:

This is a good note, but one I've never tried. I'll add it to the guide as a note so people can use it while going through.

Damn, ah well, look on the bright side. I do know there will be people glad your CPU gave in and made you write up the guide above.

Maybe together with @ChuckPa we could make this an official guide?
More and more people have trouble forwarding their Nvidia GPU to an LXC container, and this is now an awesome guide that nobody knows about.

This is an amazing tutorial, thank you SO much for taking the time to write it and share it. It worked brilliantly! I feel this should be on the official Plex docs for people who want to run their own LXC Plex container.

One question - how can you make sure that GPU acceleration is used? I can’t be sure if I’m using Intel Quick-Sync or the Nvidia GPU - and I’d like to be absolutely sure that it’s the GPU (in fact, I prefer the GPU, since it would leave my CPU alone to do other things)

Sorry for the slow reply. If you kick off a GPU transcode, you should be able to see the process on the Proxmox host with the nvidia-smi command.
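
If you want to watch it live, running something like this on the Proxmox host while a transcode is playing works well enough (just one way to do it):

watch -n 2 nvidia-smi         # refresh every 2 seconds; look for the Plex Transcoder process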