Server Version#: 1.32.4.7195
Player Version#: 9.25.0.2374 (Android)
Hoping someone can provide knowledge on this. I’ve got a new Proxmox 8 server on which I created an LXC container (Debian 12) for Plex. I know that Plex in Docker supports hardware transcoding, so I figured it would in LXC as well. Anyway, after fiddling around with it for a couple of hours, I finally have my NVIDIA T400 visible in the container. I have also enabled hardware transcoding in the Plex web interface for the server. However, when I stream to the Plex Android app (the only app I’ve tried so far) and the video transcodes, I notice in the web interface that the “(hw)” isn’t next to the word Transcoding. Is there something I need to do akin to the Docker requirements for an LXC container that I haven’t done yet?
For reference, here is a screenshot of nvidia-smi running inside the container.
The plex user was not a member of the “sgx” group when I first installed it; however, reinstalling PMS put the user in the group.
Checking ls -ld /dev/dri shows that the owner of the directory is root. I don’t like the idea of putting the plex user into the root group, but just for testing I did, and it was still not using the HW transcoder.
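In case it helps anyone following along, this is the kind of check/fix I was testing with. The group names are an assumption based on stock Debian, so verify them against your own ls output first:

ls -l /dev/dri/                         # note which groups own card0/renderD128 (often video/render)
sudo usermod -aG video,render plex      # add the plex user to those groups instead of root
sudo systemctl restart plexmediaserver  # group changes only apply after a restart
id plex                                 # confirm the memberships took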
I decided to try installing the beta PMS that you mentioned (1.32.5.7210), and I’m not seeing an option under Transcoder to select a GPU. [EDIT: I’m blind; I see the option now and forced the GPU. Still no luck.]
I also ran nvidia-smi while transcoding just to ensure that it really wasn’t hardware encoding, and sure enough the card was still sitting idle.
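For anyone wanting to reproduce the check, this is roughly what I was running in a second shell while the stream played (both are standard nvidia-smi invocations):

watch -n 1 nvidia-smi    # a real HW transcode should show a Plex Transcoder process on the GPU
nvidia-smi dmon -s u     # per-second utilization samples, including the enc/dec columns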
I’m going to keep digging into this rabbit hole and will keep this post updated if I find any solution, but any other thoughts would be greatly appreciated.
I found something of use, though I haven’t narrowed it down exactly. While looking at the Console, I see the following error message come up repeatedly:
MDE: unable to find a working transcode profile for video stream
The first thing a quick Google search of that message brings up is a Reddit thread that basically says “it’s a permission issue.” There is a helpful link in that thread to a page that explains permissions and how Plex should be set up, but unfortunately everything already seems right on that end (I think). I admit I am not an expert in Linux and am still learning, but here is the screenshot of ls -la /mnt/plex/Media, the folder that stores my media.
I guess I forgot to mention earlier that my media is stored in a TrueNAS SCALE VM on the same Proxmox server, which is why it’s in a /mnt folder. I did try creating a plex user inside TrueNAS and giving that user/group ownership of the folder my media is stored in.
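For completeness, the usual two-hop arrangement for this (and roughly what I have) is an NFS mount on the Proxmox host, then a bind mount into the container. The export path and container ID below are placeholders, not my real ones:

mount -t nfs truenas.lan:/mnt/tank/media /mnt/plex   # on the Proxmox host
pct set 101 -mp0 /mnt/plex,mp=/mnt/plex              # bind-mount it into container 101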
This will probably be of interest and is something to check further:
root@pve:/etc/modprobe.d# ls
pve-blacklist.conf
root@pve:/etc/modprobe.d# cat pve-blacklist.conf
# This file contains a list of modules which are not supported by Proxmox VE
# nidiafb see bugreport https://bugzilla.proxmox.com/show_bug.cgi?id=701
blacklist nvidiafb
root@pve:/etc/modprobe.d#
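That blacklist only covers the old nvidiafb framebuffer module, not the main proprietary driver, so it shouldn’t be the culprit on its own. A quick way to confirm the real modules are loaded on the host:

lsmod | grep nvidia    # expect nvidia, nvidia_uvm, nvidia_drm, nvidia_modeset
ls -l /dev/nvidia*     # the device nodes the container ultimately needs to see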
Jul 04, 2023 06:44:24.868 [140385850141496] DEBUG - [Req#93/Transcode] Cleaning directory for session 5kgkrx9r78wnq104qccbaobx ()
Jul 04, 2023 06:44:24.868 [140385850141496] DEBUG - [Req#93/Transcode] Starting a transcode session 5kgkrx9r78wnq104qccbaobx at offset -1.0 (state=3)
Jul 04, 2023 06:44:24.868 [140385850141496] DEBUG - [Req#93/Transcode] TPU: hardware transcoding: enabled, but no hardware decode accelerator found
Jul 04, 2023 06:44:24.868 [140385850141496] DEBUG - [Req#93/Transcode] [Universal] Using local file path instead of URL: /mnt/plex/Media/Movies/Weird_ The Al Yankovic Story (2022).m4v
Jul 04, 2023 06:44:24.868 [140385850141496] DEBUG - [Req#93/Transcode] TPU: hardware transcoding: final decoder: , final encoder:
Jul 04, 2023 06:44:24.868 [140385850141496] DEBUG - [Req#93/Transcode/JobRunner] Job running: FFMPEG_EXTERNAL_LIBS='/var/lib/plexmediaserver/Library/Application\ Support/Plex\ Media\ Server/Codecs/8217c1c-4578
Did you install libnvidia-decode and libnvidia-encode?
If these were working, we’d see the device enumerated in the file system (/dev/dri).
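For reference, on a box where the passthrough is working, the enumeration looks something like this (major/minor numbers and group names vary by system):

$ ls -la /dev/dri
drwxr-xr-x  2 root root         80 Jul  4 06:00 .
crw-rw----  1 root video  226,   0 Jul  4 06:00 card0
crw-rw----  1 root render 226, 128 Jul  4 06:00 renderD128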
This is the complete list of NVIDIA driver packages installed on my Ubuntu 22 system
(I have some development packages you won’t need):
[chuck@glockner ~.1997]$ dpkg -l | grep nvidia
ii gpustat 0.6.0-1 all pretty nvidia device monitor
ii libnvidia-cfg1-525-server:amd64 525.125.06-0ubuntu0.22.04.1 amd64 NVIDIA binary OpenGL/GLX configuration library
ii libnvidia-common-525-server 525.125.06-0ubuntu0.22.04.1 all Shared files used by the NVIDIA libraries
ii libnvidia-compute-525-server:amd64 525.125.06-0ubuntu0.22.04.1 amd64 NVIDIA libcompute package
ii libnvidia-decode-525-server:amd64 525.125.06-0ubuntu0.22.04.1 amd64 NVIDIA Video Decoding runtime libraries
ii libnvidia-egl-wayland1:amd64 1:1.1.9-1.1 amd64 Wayland EGL External Platform library -- shared library
ii libnvidia-encode-525-server:amd64 525.125.06-0ubuntu0.22.04.1 amd64 NVENC Video Encoding runtime library
ii libnvidia-extra-525-server:amd64 525.125.06-0ubuntu0.22.04.1 amd64 Extra libraries for the NVIDIA Server Driver
ii libnvidia-fbc1-525-server:amd64 525.125.06-0ubuntu0.22.04.1 amd64 NVIDIA OpenGL-based Framebuffer Capture runtime library
ii libnvidia-gl-525-server:amd64 525.125.06-0ubuntu0.22.04.1 amd64 NVIDIA OpenGL/GLX/EGL/GLES GLVND libraries and Vulkan ICD
ii libnvidia-ml-dev:amd64 11.5.50~11.5.1-1ubuntu1 amd64 NVIDIA Management Library (NVML) development files
ii nvidia-compute-utils-525-server 525.125.06-0ubuntu0.22.04.1 amd64 NVIDIA compute utilities
ii nvidia-cuda-dev:amd64 11.5.1-1ubuntu1 amd64 NVIDIA CUDA development files
ii nvidia-cuda-gdb 11.5.114~11.5.1-1ubuntu1 amd64 NVIDIA CUDA Debugger (GDB)
ii nvidia-cuda-toolkit 11.5.1-1ubuntu1 amd64 NVIDIA CUDA development toolkit
ii nvidia-cuda-toolkit-doc 11.5.1-1ubuntu1 all NVIDIA CUDA and OpenCL documentation
ii nvidia-dkms-525-server 525.125.06-0ubuntu0.22.04.1 amd64 NVIDIA DKMS package
ii nvidia-driver-525-server 525.125.06-0ubuntu0.22.04.1 amd64 NVIDIA Server Driver metapackage
ii nvidia-kernel-common-525-server 525.125.06-0ubuntu0.22.04.1 amd64 Shared files used with the kernel module
ii nvidia-kernel-source-525-server 525.125.06-0ubuntu0.22.04.1 amd64 NVIDIA kernel source package
ii nvidia-opencl-dev:amd64 11.5.1-1ubuntu1 amd64 NVIDIA OpenCL development files
ii nvidia-prime 0.8.17.1 all Tools to enable NVIDIA's Prime
ii nvidia-profiler 11.5.114~11.5.1-1ubuntu1 amd64 NVIDIA Profiler for CUDA and OpenCL
ii nvidia-utils-525-server 525.125.06-0ubuntu0.22.04.1 amd64 NVIDIA Server Driver support binaries
ii nvidia-visual-profiler 11.5.114~11.5.1-1ubuntu1 amd64 NVIDIA Visual Profiler for CUDA and OpenCL
ii xserver-xorg-video-nvidia-525-server 525.125.06-0ubuntu0.22.04.1 amd64 NVIDIA binary Xorg driver
[chuck@glockner ~.1998]$
This seems to be the right path; I’m just still not getting there. One of the several guides I originally followed had me download and install the NVIDIA drivers from NVIDIA’s website, so I uninstalled those and installed the libnvidia-encode1 package (that’s the package as listed on the Debian wiki). This also pulled in the libnvcuvid1 package, which is Debian’s NVDEC support.
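In case it saves someone a search, on Debian 12 that step boiled down to the following (assuming the non-free repositories are already enabled):

sudo apt update
sudo apt install libnvidia-encode1   # pulls in libnvcuvid1, Debian's NVDEC support, as a dependency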
Unfortunately, I’m still not hardware transcoding, and it is getting late, so I am giving up for the night, though I will continue this search tomorrow.
Here is my dpkg -l | grep nvidia as it stands now.
So, after tossing and turning a bit last night, I got up to try a few more things.
First, I found the Debian page on Hardware Video Acceleration. Now that I’m fully awake and re-reading it, I realized that the NVIDIA proprietary drivers are just not supported in most cases (i.e., by Chromium-based browsers and Firefox).
I then spun up an Ubuntu 22.04 (and subsequently a 23.04) LXC container and found that the NVIDIA driver version doesn’t match the version on my Proxmox host. Since containers use the kernel driver from the host, the versions need to match in order to work. So that was out.
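The quickest way I found to compare the two versions (both are standard commands):

cat /proc/driver/nvidia/version                       # on the Proxmox host
nvidia-smi --query-gpu=driver_version --format=csv    # inside the container; must match exactly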
Game plan for now: attempt to uninstall the NVIDIA proprietary drivers and install the open-source Nouveau drivers so that I can use VA-API and see if that works.
Failing that, I will pass the GPU through to an Ubuntu VM instead. I was avoiding a VM in favor of a container because of the extra virtualization overhead, but it might be my only option.
My container is running Debian 12. I had already removed all the NVIDIA drivers before you got back to me saying not to use Nouveau, so I get to start from scratch again. So far I’ve installed nvidia-driver, which seems to have included nvidia-vdpau-driver, libnvidia-encode1, and libnvcuvid1.
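To double-check that the libraries Plex’s transcoder loads at runtime are actually visible, I looked in the linker cache (standard ldconfig usage):

ldconfig -p | grep -E 'libnvcuvid|libnvidia-encode'   # both should resolve to real .so files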
Perhaps I’m reading this wrong, but this is what I’m seeing (and why I was thinking about trying Nouveau):
The three main APIs that are in use are VA-API, VDPAU, and NVENC/NVDEC.
VA-API - Supported on Intel, AMD, and NVIDIA (only via the open-source Nouveau drivers). Widely supported by software, including Kodi, VLC, MPV, Chromium, and Firefox. Main limitation is lacking any support in the proprietary NVIDIA drivers.
VDPAU - Supported fully on AMD and NVIDIA (both proprietary and Nouveau). Supported by most desktop applications like Kodi, VLC, and MPV, but has no support at all in Chromium or Firefox. Main limitations are poor and incomplete Intel support and not working with browsers for web video acceleration.
NVENC/NVDEC - A proprietary API supported exclusively by NVIDIA. Only supported in a few major applications (FFmpeg and OBS Studio for encoding, FFmpeg and MPV for decoding). Main limitation is limited software and hardware support across the board because of its proprietary nature.
I was reading this to mean that software trying to use hardware acceleration would need VA-API, which isn’t supported by the NVIDIA proprietary drivers.
VA-API is for Intel CPUs (Intel Quick Sync Video) and now also supports AMD GPUs.
Please be EXTREMELY careful not to confuse browsers using GPU cards for HTML acceleration (game/web-page graphics) versus streaming video.
(There is nothing to accelerate when displaying an image stream)
NVIDIA discrete GPUs are still supported with NVDEC and NVENC.
nvidia-driver-525-server (which gave me the 525 drivers)
libnvidia-decode-525-server
libnvidia-encode-525-server
That’s it. That simple. (nvidia-smi comes with the main server package.)
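A useful out-of-band sanity check, independent of Plex, is to drive NVDEC/NVENC directly with a stock ffmpeg build; if this fails, Plex will fail too (the input file name is a placeholder):

ffmpeg -hwaccel cuda -i input.mkv -c:v h264_nvenc -f null -   # decode on the GPU, encode with NVENC, discard the output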
Instead of doing this in LXC, because of how /dev/dri is passed through, have you considered a Docker container (which really does a ‘device’-class passthrough)?
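For anyone searching later, the LXC-side passthrough under discussion usually amounts to entries like these in /etc/pve/lxc/<id>.conf. The major numbers here (195 for the nvidia nodes, 226 for DRM) are typical but should be verified with ls -l /dev/nvidia* /dev/dri/*, and nvidia-uvm often gets its own dynamically assigned major:

lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir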
I’m still learning all of this stuff, but it looks like Proxmox doesn’t support Docker directly, meaning I’d have to install an Ubuntu Server VM, pass the GPU through to that, then install Docker and do it that way. If I’m doing that, I’m not sure why I wouldn’t just pass the GPU through to my TrueNAS VM and run it there, or just run Plex in an Ubuntu Server VM without Docker.
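If I do end up going the VM-plus-Docker route, the official image takes the GPU along these lines. This is a sketch assuming the NVIDIA Container Toolkit is installed in the VM; the volume path is a placeholder:

docker run -d --name plex \
  --gpus all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility \
  -v /mnt/plex/Media:/media \
  -p 32400:32400 \
  plexinc/pms-docker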
I do truly appreciate all of your support. I am going to try TrueNAS SCALE, since there is a Plex app for it, and if that doesn’t work I’ll spin up an Ubuntu VM.