Hi everyone, I'm running Plex in Docker on a Debian Trixie (testing) host with an NVIDIA Quadro P2000 GPU (driver 550.163.01, CUDA 12.4). Jellyfin in Docker uses the GPU for hardware acceleration just fine, and the official NVIDIA test container (nvidia/cuda) also sees the P2000 via nvidia-smi. But Plex refuses to use NVENC and always falls back to software decoding/transcoding.

Setup details:
- Host: Debian Trixie (testing, the release after Bookworm)
- NVIDIA driver: 550.163.01 (installed via the Debian nvidia-driver packages)
- NVIDIA Container Toolkit: 1.18.2 (configured with `nvidia-ctk runtime configure --runtime=docker`)
- Docker: recent version; `/etc/docker/daemon.json` has:

```json
{
  "data-root": "/opt/docker",
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "args": [],
      "path": "nvidia-container-runtime"
    }
  }
}
```

- Plex image: tried both plexinc/pms-docker:latest (original) and lscr.io/linuxserver/plex:latest (current); same issue on both.
- Plex Pass: active (HW transcoding enabled in settings: "Use hardware acceleration when available" plus the advanced encoding option).
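For anyone reproducing this, here are the host-side sanity checks I'm using (paths assume the toolkit's default config location; adjust if your distro puts it elsewhere):

```shell
# Which mode is the NVIDIA container runtime using? Toolkit 1.18+
# defaults to "auto", which resolves to CDI on a setup like this.
grep -E '^\s*mode' /etc/nvidia-container-runtime/config.toml 2>/dev/null \
  || echo 'mode not set explicitly (defaults to "auto")'

# Is Docker really defaulting to the nvidia runtime?
command -v docker >/dev/null 2>&1 \
  && docker info --format 'default runtime: {{.DefaultRuntime}}' 2>/dev/null \
  || echo 'could not query docker'
```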
docker-compose.yml (current, with linuxserver/plex):

```yaml
services:
  plex:
    image: lscr.io/linuxserver/plex:latest
    container_name: plex
    runtime: nvidia
    environment:
      - PUID=XXXX
      - PGID=XXXX
      - TZ=Etc/UTC
      - VERSION=docker
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
    network_mode: host
    volumes:
      - plex_config:/config
      - /path/to/media:/data
      - /path/to/transcode:/transcode
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    restart: unless-stopped

volumes:
  plex_config:
```
What works inside the Plex container:

- `docker exec -it plex nvidia-smi` → shows the P2000 correctly (fan/temp/power/etc.).
- GPU is visible and the toolkit is hooking in: `docker inspect plex | grep Runtime` → "Runtime": "nvidia".
What fails (from the Plex console/logs during a forced transcode):

Transcoder: session ... indicated fallback to software decoding
Cannot load libcuda.so.1
Could not dynamically load CUDA

- No "(hw)" indicator in the Dashboard; always a CPU software transcode.
Key diagnostics:

- Inside the container: `find / -name 'libcuda.so*'` → the libraries are mounted at /usr/lib/x86_64-linux-gnu/nvidia/current/libcuda.so.1 (and .so.550.163.01, etc.), i.e. the CDI-mode subdirectory.
- `ldd "/usr/lib/plexmediaserver/Plex Transcoder"` → no NVIDIA libs linked/resolved at all, only Plex's own bundled ones (libavcodec, libva, etc.). That part is expected: the transcoder loads libcuda at runtime via dlopen rather than linking it, which matches the "Could not dynamically load CUDA" error.
- On the host: `find /usr -name libcuda.so.1` → confirms /usr/lib/x86_64-linux-gnu/nvidia/current/libcuda.so.1.
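A quick way to confirm the suspicion that the dynamic loader simply can't see that subdirectory (generic check, safe to run anywhere; inside the container, prefix with `docker exec plex sh -c '...'`):

```shell
# If libcuda.so.1 only lives under /nvidia/current/, it won't be in the
# loader's cache, so a plain dlopen("libcuda.so.1") with no path (which
# is what the Plex transcoder appears to do) has nowhere to find it.
ldconfig -p 2>/dev/null | grep libcuda.so.1 \
  || echo 'libcuda.so.1 not in the default linker search path'
```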
Attempts so far:

- Switched from plexinc to linuxserver/plex (community-recommended for better NVIDIA handling).
- Added an explicit bind mount for the subdirectory (`/usr/lib/x86_64-linux-gnu/nvidia/current:/usr/lib/x86_64-linux-gnu/nvidia/current:ro`). That caused a container startup failure:

failed to create link ... /gbm/nvidia-drm_gbm.so: failed to create directory: mkdir ... /gbm: read-only file system

(The CDI prestart hook tries to symlink into the container's overlayfs, which is read-only during init.)

- Other tweaks: more precise NVIDIA_DRIVER_CAPABILITIES values, the various LD_LIBRARY_PATH suggestions, etc.; no change.
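One variation I haven't tried yet: bind-mounting only the individual libraries to a path the loader already searches, instead of the whole directory (which is what tripped the /gbm symlink failure above). Untested sketch; libcuda.so.1 comes from the find output above, while libnvidia-encode.so.1 is my assumption, since NVENC is typically loaded from that library:

```yaml
    # additions to the plex service's volumes (untested; file-level
    # mounts should sidestep the CDI hook's mkdir-in-read-only-dir error)
    volumes:
      - /usr/lib/x86_64-linux-gnu/nvidia/current/libcuda.so.1:/usr/lib/x86_64-linux-gnu/libcuda.so.1:ro
      - /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-encode.so.1:/usr/lib/x86_64-linux-gnu/libnvidia-encode.so.1:ro
```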
From what I've read (nvidia-container-toolkit GitHub issue #1456 and various Plex/Reddit threads), this seems tied to CDI mode (the default in toolkit 1.18+) mounting the libraries into the /nvidia/current/ subdirectory, which Plex's custom FFmpeg/dlopen doesn't search. Legacy mode might fix it by mounting them at the standard paths.

Questions:
- Has anyone gotten Plex HW transcoding working with toolkit 1.18+ in CDI mode on a recent Debian/NVIDIA setup, or is legacy mode the only reliable way?
- Any other env/mount hacks to make Plex find libcuda.so.1 under the /nvidia/current/ path without breaking container startup?
- Would switching to a different image (e.g., binhex/arch-plex) help, or is this a Plex binary/toolkit mismatch?
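In case it helps with the first question: the change I'd be testing is forcing legacy mode in the toolkit config (file path and key taken from the toolkit's documented defaults; I haven't verified this actually fixes Plex):

```toml
# /etc/nvidia-container-runtime/config.toml (assumed default location)
[nvidia-container-runtime]
# "auto" (the 1.18+ default) resolves to CDI here; "legacy" uses the
# old libnvidia-container hook, which mounts the driver libraries at
# standard paths that a plain dlopen("libcuda.so.1") can resolve.
mode = "legacy"
```

followed by `sudo systemctl restart docker` and recreating the container.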
Happy to provide more logs/output if needed. Thanks for any pointers!