HEVC transcode option missing?

Server Version#: 1.41.4.9399
Player Version#: 8.44 (iOS)

Hey, is there any specific requirement for the new HEVC transcoding?

It seems I can’t find any related option in the server’s transcoding settings.

I’m running the official docker image (beta tag) on Debian.

HEVC (H.265) encoding is only supported with hardware encoding, and your GPU needs to support it. What hardware are you running Plex on?

It’s a Debian VM running on ESXi with a Tesla T4 card mapped in as vGPU (GRID drivers, no full PCIe passthrough). Regular HW transcoding works fine in this setup.

The “Preferred transcoding device” setting only shows “Auto” though, so I assume PMS itself somehow might not detect the card; the transcoding agent/ffmpeg, however, utilizes it perfectly fine.

“vGPU” implementations are problematic / do not work.

When you pass the GPU through to the VM, it must be a physical passthrough of the raw device.

If PMS cannot see it on the PCIe bus, then the GPU will not work.

In ESXi, pass the entire GPU through to the VM.
YES, this means that ESXi will limit GPU usage to only one VM at a time.

Hey, thanks for your comment.

When you pass the GPU through to the VM, it must be a physical passthrough of the raw device.

This is not true, because …

If PMS cannot see it on the PCIe bus, then the GPU will not work.

… for the VM and all software on this VM, the vGPU looks like a regular PCIe device.

As I said, H.264 HW transcoding has worked perfectly fine with this setup for quite a while. The problem with HEVC is rooted in a check that PMS seems to do before it shows the corresponding transcoder settings. I can only guess, but most likely PMS enumerates the GPU devices and checks their capabilities.

NVIDIA transcode, compute, etc. in containers works without device mappings (because that’s transparently handled by the specialized nvidia container runtime and the corresponding drivers). Because of that fact, ffmpeg was perfectly fine with transcoding my streams. However, without device mappings, PMS does not see the GPU and can’t perform the capability check.

Long story short, the issue was not related to vGPU at all. After mapping card0/1, renderD128/9 to the container, I was able to activate the new HEVC transcoding option and the transcoding itself works fine too.
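
A quick sanity check from the host shows the difference (the container name plex is just an example from my setup):

# The Nvidia container runtime makes the GPU usable for CUDA/NVENC even
# without device mappings, but the DRM nodes PMS enumerates only appear
# once they are explicitly mapped into the container.
docker exec plex nvidia-smi -L      # GPU visible via the Nvidia runtime
docker exec plex ls -l /dev/dri     # card*/renderD* nodes PMS looks for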

Problematic? Can you be more specific? There’s nothing wrong with them; CUDA/NVENC etc. work without issues. Or do you mean they don’t work with Plex because you’ve made a bad design decision?

Plex is hard-checking for renderD128, which is just incredibly stupid. I’m using nvidia-container-toolkit and my HW transcoding works perfectly - except that I was missing the new HEVC option.

lxc.hook.pre-start: sh -c '[ ! -f /dev/nvidia0 ] && /usr/bin/nvidia-modprobe -c0 -u'
lxc.environment: NVIDIA_VISIBLE_DEVICES=all
lxc.environment: NVIDIA_DRIVER_CAPABILITIES=compute,utility,video
lxc.hook.mount: /usr/share/lxc/hooks/nvidia

Had to add these two to make the setting available (again, HW transcoding works with the above config):

lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file

Same situation for me and same fix (except that it’s renderD129 for me and I’m using Docker Compose instead of LXC containers).

They should use the proper NVIDIA API to check the NVENC capabilities instead of manually enumerating the devices.

We do use the Nvidia API.
There is NO EQUIVALENT API for Intel.

The link from /dev/dri tells us where to go.
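
For example, the by-path entries under /dev/dri link each render node back to the PCI address of its GPU (the bus address below is illustrative; yours will differ):

# On the host: which PCI device does each DRM node belong to?
ls -l /dev/dri/by-path/
# e.g.  pci-0000:01:00.0-render -> ../renderD128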

Here’s my script for adding a GPU to a LXC

[chuck@lizum bin.2006]$ cat add-gpu 
#!/bin/bash
#
# Add both Intel and Nvidia GPU passthrough to LXD/LXC container

# Argument #1 is LXC container to add
if [ "$1" = "" ]; then
 echo "Error: missing container name" 
 exit 1
fi

# Container exist ?
if [ "$(lxc list | grep "$1")" == "" ]; then
  echo "Error: Unknown container name '$1'"
  exit 2
fi

# Make certain /dev/dri/renderD128 exists
if [ ! -e /dev/dri/renderD128 ]; then
  echo "Error:  This host does not have hardware transcoding ability (/dev/dri/renderD128 missing)"
  echo "        Drivers installed?"
  exit 3
fi

Gid="$(stat -c %g /dev/dri/renderD128)"

# Add it (Both Intel and Nvidia)
lxc config device add "$1" GPUs gpu gid=$Gid

# Add Nvidia runtime 
lxc config set "$1" nvidia.driver.capabilities all
lxc config set "$1" nvidia.require.cuda true
lxc config set "$1" nvidia.runtime true

echo "GPU configuration added to '$1'"

# Restart
if [ "$(lxc list | grep -w "$1" | grep "RUNNING")" != "" ]; then
  echo "Restarting $1"
  lxc restart "$1"
fi

[chuck@lizum bin.2007]$ 

Notice:

  1. GPU config added for Nvidia API in case it’s an Nvidia GPU
  2. GID matched so the container is a member of the correct host group
  3. This is physical GPU passthrough. While intended not to be shared, the Nvidia host driver is more forgiving and does allow sharing.
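
Usage is just the container name (plex below is a placeholder), and you can verify the result from inside the container afterwards:

# Add the GPU devices and Nvidia runtime config to an existing LXD container
./add-gpu plex

# Check what the container now sees
lxc exec plex -- ls -l /dev/dri
lxc exec plex -- nvidia-smi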

I have a similar issue with Docker for Windows/WSL2; see HEVC Encoding options not showing up via Docker for Windows - Plex Media Server - Plex Forum

The problem is the same. The way GPUs are passed through to these containers works perfectly fine for HW transcoding, CUDA, and other GPU-related tasks. However, the HEVC options are simply not visible; it’s not that HEVC transcoding itself is failing. I suspect, as others have mentioned, that this is due to a design limitation in Plex that only shows these options if the GPU can be listed under Hardware Transcoding Device, even though that is not the central requirement for HW transcoding.

Regardless, if this is required, can you share a similar solution for Docker for Windows/WSL2 to pass the GPU through in such a way that it can be seen by whatever check Plex is doing? Nothing I’ve found online helps, because the standard way of passing it through (as shared in my link) simply works for other use cases.

I just checked in WSL2, and it seems like Windows exposes “a” GPU device to the WSL subsystem. Not sure if this is the correct GPU, but it’s worth a try.

My docker compose config:

  plex:
    container_name: plex
    image: plexinc/pms-docker:beta
    restart: unless-stopped
    network_mode: host
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - /docker/plex/library:/config
      - /docker/plex/media:/plex:rshared,ro
      - /tmp:/transcode
    devices:
      - /dev/dri/card1:/dev/dri/card1
      - /dev/dri/renderD129:/dev/dri/renderD129
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities:
                - gpu
                - compute
                - video

The relevant part in your case is:

devices:
  - /dev/dri/card1:/dev/dri/card1
  - /dev/dri/renderD129:/dev/dri/renderD129

In my WSL, the names are card0 and renderD128.

Edit: If you start the container from the Windows side (rather than from within WSL), you probably have to specify the CLSID of the GPU.

ALL Windows users:

Chris and I have been discussing this all morning.

We’d like to share info and then ask for your help with this Windows - WSL2 - Docker issue.

What we know:

  1. Plex HW transcoding / GPU discovery requires the GPU be visible in the kernel DRM layer (How rendering is done on Linux).

  2. PMS Docker is Ubuntu based

  3. WSL2 is a low-level VM (Linux based) from Microsoft

  4. In order to make Docker-based HW transcoding work, we need to get the HW visible: Windows host → WSL2 VM → Docker container → PMS app.

To date, we have not been able to get the GPU into the WSL2 low-level VM.
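
A rough way to see where the chain breaks is to check each hop in turn (illustrative commands; /dev/dxg is the paravirtualised GPU node WSL2 normally exposes, and plex is a placeholder container name):

# 1) Windows host (PowerShell): driver sees the GPU
nvidia-smi.exe

# 2) Inside the WSL2 distro: GPU-PV node, DRM nodes, Nvidia userspace
ls -l /dev/dxg
ls -l /dev/dri 2>/dev/null     # this is what PMS enumerates
nvidia-smi

# 3) Inside the Docker container
docker exec plex nvidia-smi
docker exec plex ls -l /dev/dri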

Our request is simple:

How do we get the GPU from the Windows host → the Docker container as a PHYSICAL device, so the Nvidia drivers can identify and use it correctly (render and CUDA tone mapping)?

Secondarily, this will have an impact on other platforms. Since Windows is popular, it seems like a good place to start.

Hi @ChuckPa, I don’t have a concrete answer to this question, but want to share my observations:

  1. Docker HW transcoding already works perfectly fine on Debian and Windows WSL2 without physical device mapping!
  2. GPU discovery (which is used to conditionally show the HEVC option) does not work on Docker without mapping the physical cardX/renderDxxx devices (only tested on Debian) - Device dropdown always displays “Auto”, but HW transcoding still works and uses the correct GPU.

Edit: A prerequisite for all this is the NVidia Container Runtime.
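
(In case anyone is setting that up from scratch: on my Debian host, registering the runtime boiled down to roughly the following. The commands come from Nvidia’s container-toolkit documentation, so treat this as a sketch rather than a full install guide.)

# After installing the nvidia-container-toolkit package:
sudo nvidia-ctk runtime configure --runtime=docker   # adds the "nvidia" runtime to /etc/docker/daemon.json
sudo systemctl restart docker

# Confirm Docker knows about the runtime
docker info | grep -i runtimes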

This tells me that the Nvidia Drivers (and runtime) do not require a physical device to be mapped into the container.

You could probably get away with just refactoring the GPU discovery / capability check. A very easy way would be to start a dummy transcode and check the result/status code from the Nvidia API to determine whether the GPU has the required capabilities.

Checking the ffmpeg source code might be another way to find out how this tool is able to do HW transcoding without a device being mapped into the container.
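
To illustrate the dummy-transcode idea: something along these lines (plain ffmpeg with NVENC, synthetic input, output discarded) already tells you whether HEVC encoding works inside the container - this is only a sketch of the concept, not what PMS actually runs:

# Encode one second of a test pattern with the Nvidia HEVC encoder and
# discard the output; a non-zero exit code means NVENC/HEVC is unusable here.
ffmpeg -hide_banner -f lavfi -i testsrc2=size=1920x1080:rate=30 \
       -t 1 -c:v hevc_nvenc -f null - \
  && echo "HEVC NVENC OK" || echo "HEVC NVENC unavailable"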

I tried doing this, but it still does not work. As you mentioned, though, it should not be relevant: nvidia-smi, HW transcoding, and CUDA all work with just the below added to the compose. I suspect this is purely because of the check being run to make the options even appear, not the actual capability of the environment to run HEVC transcoding.

Since the OP is actually a completely different environment with the same issue, I also agree that the better course of action is to reevaluate the logic for showing the options.

@Zacherl

Thank you for detailing.

It’s starting to sound like the probe code (in PMS) and the actual transcoding code in the Transcoder are using two different tests / sets of rules.

With this as the foundation, I’m going to start testing here:

  1. With physical /dev/dri nodes
  2. Without physical /dev/dri nodes

I’ll then circle back around and chat more with Chris.

Thanks!

PS: don’t you love convoluted, zero-comment, code? :scream_cat:

ALL:

Chris has been working on improving Nvidia GPU detection.

He’s focused on Container environments.

For those who have some time to spare, we’d appreciate your checking this out to see if GPU discovery now works

-AND-

you now have access to HEVC encoding.

Here is the DEB package file.

PMS HEVC discovery experimental (DEB)

We look forward to hearing your results

I use Docker for Windows/WSL2. GPU discovery is still failing with this version. I tried the two compose options below; neither worked. The difference between them is mapping the video card as a device, though HW transcoding doesn’t even require that - the first compose works fine.

  plex:
    image: plexinc/pms-docker:plexpass
    container_name: plex
    environment:
      - PLEX_UID=${PUID}
      - PLEX_GID=${PGID}
      - UMASK=${UMASK}
      - TZ=${TZ}
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
    volumes:
      - ${CONFDIR}/plex:/config
      - drivepool:/mnt/media:ro
    ports:
      - 32400:32400
      - 33400:33400
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    restart: unless-stopped
  plex:
    image: plexinc/pms-docker:plexpass
    container_name: plex
    environment:
      - PLEX_UID=${PUID}
      - PLEX_GID=${PGID}
      - UMASK=${UMASK}
      - TZ=${TZ}
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
    volumes:
      - ${CONFDIR}/plex:/config
      - drivepool:/mnt/media:ro
    ports:
      - 32400:32400
      - 33400:33400
    devices:
      - /dev/dri/card0:/dev/dri/card0
      - /dev/dri/renderD128:/dev/dri/renderD128
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    restart: unless-stopped

@rg9400

What happens when you create a native installation with an override.conf so it looks for all the data files where your /config is?

(Looking to see if there is a difference between the presence or absence of the additional Docker layer.)

Apologies, not entirely sure how to set this up. You mean inside the container? Or installing Plex natively in WSL2 (or natively in Windows?)

Installing natively in the WSL2 VM itself
(WSL2 is a low-level VM)

I tried to do some research on this but, to be honest, I’m not super comfortable doing this in my environment. I’m not sure how a fresh native install would impact my existing database running through Docker, and my instinct is not to risk it. I’m usually happy to help test, but this one feels like it could be a bit dicey.