Lost hardware acceleration

Hi,

I’ve been scratching my head for the best part of the day trying to understand why I can’t seem to enable GPU HW transcode anymore.

It was working fine until a month or so ago (the last time I checked), and it seems to have been disabled since then (by an update?).

I’m not seeing the setting in Settings > Transcoder anymore. It’s like Plex is no longer aware of, or able to access, the GPU. I’ve reinstalled the Nvidia driver, rebooted a couple of times, and logged out and back in.

Any help would be appreciated!
Thanks :pray:

Specs:

  • Lifetime Plex Pass
  • Ubuntu 18.04.2 with HWE, up-to-date (5.3.0-51-generic #44~18.04.2-Ubuntu)
  • Plex Version 1.19.3.2793 (latest plex pass downloads)
  • CPU Intel E3-1270 v5 (QuickSync not supported)
  • GPU Nvidia M2000, latest driver: 440.59 (nvidia-smi outputs correctly)
  • Hauppauge WinTV-quadHD (latest linux-hwe-mediatree & linux-firmware-hauppauge installed from the ppa)
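In case it helps, here’s roughly how I’ve been sanity-checking the GPU from the host (a sketch; the device-node paths are the standard NVIDIA ones, and the nvidia-modprobe invocation is an assumption on my part, not something from my actual setup):

```shell
# Driver loaded and reporting? (nvidia-smi talks to the kernel module)
nvidia-smi --query-gpu=name,driver_version --format=csv,noheader

# NVENC/NVDEC consumers like the Plex transcoder need these device nodes.
ls -l /dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm

# The uvm node is created lazily; if it's missing, this can create it
# (assumption: nvidia-modprobe is installed alongside the driver).
sudo nvidia-modprobe -u -c0
```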

Do you have DEBUG logs (not VERBOSE) which capture the start of a playback session which should utilize the Nvidia GPU?

Thanks for the reply! Here is the log excerpt for a software (CPU) transcode:
plex-log-transcode-session.log (214.0 KB)

2 days of tinkering later…
I still could not figure out why HW accel was not showing up in my settings with a native install, so I went the docker route.

I used linuxserver/plex & nvidia-docker with --gpus=all and --device=/dev/dvb (passing the host’s DVB tuner to the container) at creation, and that did the trick: HW transcoding is back up and running!
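For reference, the working container came together roughly like this (a sketch: only --gpus=all, --device=/dev/dvb and the linuxserver/plex image are from my actual setup; the volume paths, the NVIDIA_DRIVER_CAPABILITIES variable and host networking are illustrative assumptions):

```shell
# Assumes the NVIDIA Container Toolkit is installed on the host.
docker run -d --name=plex \
  --gpus=all \
  --device=/dev/dvb \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  --network=host \
  -v /opt/plex/config:/config \
  -v /gordon/media:/media \
  linuxserver/plex
```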

Please submit the full server DEBUG (not VERBOSE) log ZIP file which captures the start of playback

The transcoder session log is not what’s needed.

Sorry for the delay in responding; I’ve been swamped.

No worries! Here they are.
Plex Media Server Logs_2020-05-17_16-49-16.zip (2.9 MB)

Thanks, I can see it now.

May 17, 2020 16:48:44.704 [0xe4af8b40] DEBUG - Found session GUID of 4x8qeemzl4xow3fjb3rkepe2 in session start.
May 17, 2020 16:48:44.704 [0xe4af8b40] DEBUG - Cleaning directory for session 4x8qeemzl4xow3fjb3rkepe2 ()
May 17, 2020 16:48:44.704 [0xe4af8b40] DEBUG - Starting a transcode session 4x8qeemzl4xow3fjb3rkepe2 at offset -1.0 (state=3)
May 17, 2020 16:48:44.705 [0xe4af8b40] DEBUG - TPU: hardware transcoding: enabled, but no hardware decode accelerator found
May 17, 2020 16:48:44.705 [0xe4af8b40] INFO - CodecManager: starting EAE at "/tmp/pms-ba85e9e6-4144-4deb-bf27-2cbaf66320e1/EasyAudioEncoder"
May 17, 2020 16:48:44.705 [0xe4af8b40] DEBUG - Job running: '/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Codecs/EasyAudioEncoder-652-linux-x86/EasyAudioEncoder/EasyAudioEncoder'
May 17, 2020 16:48:44.705 [0xe4af8b40] DEBUG - Jobs: Starting child process with pid 4035
May 17, 2020 16:48:44.706 [0xe4af8b40] DEBUG - [Universal] Using local file path instead of URL: /gordon/media/movies/Star.Wars.Episode.IV.A.New.Hope.1977.2160p.BluRay.REMUX.HEVC.TrueHD.7.1.Atmos-FGT/Star.Wars.Episode.IV.A.New.Hope.1977.2160p.BluRay.REMUX.HEVC.TrueHD.7.1.Atmos-FGT.mkv
May 17, 2020 16:48:44.706 [0xe4af8b40] DEBUG - TPU: hardware transcoding: zero-copy support not present
May 17, 2020 16:48:44.706 [0xe4af8b40] DEBUG - TPU: hardware transcoding: final decoder: , final encoder: 

I see you have a ton of Ethernet adapters.

Is this a VM or a container? (the docker bridge is clearly identified)

May 17, 2020 16:47:40.576 [0xf5491700] DEBUG - Detected primary interface: 192.168.0.155
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG - Network interfaces:
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 1 lo (127.0.0.1) (loopback: 1)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 3 enp3s0 (192.168.0.155) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 4 enp3s0d1 (10.0.0.2) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 5 br-60a6664c3013 (172.19.0.1) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 6 docker0 (172.17.0.1) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 158 br-4a636ca5a94b (172.20.0.1) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 159 br-110d10d6457c (172.30.0.1) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 160 br-5d09f6ea2474 (172.40.0.1) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 1 lo (::1) (loopback: 1)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 3 enp3s0 (2a01:e0a:20b:94a0:202:c9ff:fed2:325c) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 3 enp3s0 (fe80::202:c9ff:fed2:325c%enp3s0) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 4 enp3s0d1 (fe80::202:c9ff:fed2:325d%enp3s0d1) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 6 docker0 (fe80::42:d1ff:fef9:5faa%docker0) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 158 br-4a636ca5a94b (fe80::42:dcff:fe6c:654c%br-4a636ca5a94b) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 159 br-110d10d6457c (fe80::42:12ff:fe91:9f31%br-110d10d6457c) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 160 br-5d09f6ea2474 (fe80::42:2cff:fe86:cdfe%br-5d09f6ea2474) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 162 veth8a31774 (fe80::9824:31ff:fefe:a050%veth8a31774) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 164 vethbd93261 (fe80::ecf8:61ff:fed2:be59%vethbd93261) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 166 veth386033b (fe80::74ee:50ff:fead:b417%veth386033b) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 168 veth49d2b8a (fe80::c000:deff:fefb:4869%veth49d2b8a) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 170 veth38db229 (fe80::a024:8bff:fec0:59ac%veth38db229) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 172 veth3f1b9ed (fe80::5499:96ff:fe52:82e7%veth3f1b9ed) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 174 veth9393f65 (fe80::285a:58ff:feb1:cf9c%veth9393f65) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 176 veth547fa2a (fe80::7ca8:e4ff:fee7:da4%veth547fa2a) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 178 veth8cee418 (fe80::f49d:b8ff:fe12:32e0%veth8cee418) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 180 vethe10234c (fe80::7442:beff:feb2:5d0f%vethe10234c) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 182 vethed4a602 (fe80::301f:6dff:fe47:3737%vethed4a602) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 184 veth6ac5e69 (fe80::8096:c4ff:fe52:813%veth6ac5e69) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 186 vethbc772bc (fe80::80ee:62ff:fe24:a59e%vethbc772bc) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG -  * 188 vethe7d5f5f (fe80::46b:6eff:fe33:3a4e%vethe7d5f5f) (loopback: 0)
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG - Creating NetworkServices singleton.
May 17, 2020 16:47:40.576 [0xf5491700] DEBUG - NetworkServices: Initializing...

Strange that it says

TPU: hardware transcoding: enabled

when I don’t see any of the related checkboxes in the settings.

There are two physical NICs in the machine (one single-port onboard and one dual-port 10GbE PCIe card), and yes, I do run docker and a number of containers on it. All the br-xxx and veth-xxx entries are virtual interfaces and bridges created by the docker daemon for non-host-networked containers.

As I said, running PMS in a container with the NVIDIA Container Toolkit did solve the issue, so I’m happy as it is. That doesn’t explain why it doesn’t work with the natively installed deb package, though, so I’m happy to help investigate further if you consider it somewhat critical.

Settings - Server - Transcoder - Show Advanced

Yup, that was enabled. I checked everything against the documentation, but to no avail. No matter what I tried, the setting was nowhere to be found.

As I said, it worked in the past with the same config (PMS installed from the deb in the repo) and stopped working at some point, so it’s probably some kind of edge case or a PEBCAK I’m missing. Anyhow, don’t worry: since I figured out how to make it work another way, I don’t want to take any more of your time. Thanks for the help!

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.