How to install Nvidia HW transcoding support in a Proxmox 6.2-1 LXC

Server Version#: 4.35.1
Player Version#:

The steps that need to be completed

There is a list of things that need to be done, because of Nvidia's lack of support for automation.

The Nvidia driver does not create the required device nodes until an application uses the GPU first, which causes problems when setting things up for automation and LXCs, but Nvidia and Proxmox provide hook scripts to help automate this process.

Update Proxmox to the latest version from a shell prompt:

apt-get update && apt-get upgrade -qqy

Install the Nvidia prerequisites:

apt-get install -qqy pve-headers-$(uname -r) gcc make
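The headers package has to match the running kernel, which is why the package name uses `$(uname -r)` command substitution. A small sketch of what that expands to (the exact version string will differ on your system):

```shell
# The pve-headers package must match the running kernel release.
# On Proxmox 6.2, $(uname -r) expands to something like "5.4.44-2-pve".
pkg="pve-headers-$(uname -r)"
echo "would install: $pkg gcc make"
# On the host you would then run:
#   apt-get install -qqy "pve-headers-$(uname -r)" gcc make
```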

Install the latest Nvidia driver from nvidia.com. (Note: nouveau is the open-source driver for Nvidia cards and does not support HW transcoding.)

Run the install script with the following options: --disable-nouveau --ui=none

Example:
bash NVIDIA-Linux-x86_64-440.82.run --disable-nouveau --ui=none

Install nvidia-persistenced

This is not installed automatically. After the Nvidia driver is installed, you must extract /usr/share/doc/NVIDIA_GLX-1.0/sample/nvidia-persistenced-init.tar.bz2

and run the install.sh it contains.
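The extract-and-install step can be sketched like this. The tarball path is the one from the guide, and the extracted directory name (`nvidia-persistenced-init`) is an assumption based on the 440-series installer layout, so adjust if your driver version differs:

```shell
# Guarded sketch: extract the persistenced init scripts shipped with the
# driver and run their installer. Prints a message and does nothing if
# the driver (and thus the tarball) is not installed yet.
tarball=/usr/share/doc/NVIDIA_GLX-1.0/sample/nvidia-persistenced-init.tar.bz2
if [ ! -f "$tarball" ]; then
    echo "not found: $tarball (install the NVIDIA driver first)"
else
    tmp=$(mktemp -d)
    tar xjf "$tarball" -C "$tmp"
    (cd "$tmp/nvidia-persistenced-init" && sh install.sh)
fi
```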

Now you must add the Nvidia container runtime repo and key.

Install the Nvidia Container Runtime

The Nvidia container runtime is needed for the Proxmox hooks to load without crashing your container:

curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | tee /etc/apt/sources.list.d/nvidia-container-runtime.list
apt-get update
apt-get install -qqy nvidia-container-runtime
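For reference, here is how the `$distribution` string used in the repo URL gets built; on Proxmox 6.x (Debian Buster based) it should come out as `debian10`:

```shell
# /etc/os-release defines ID (e.g. "debian") and VERSION_ID (e.g. "10"),
# so the correct repo list URL is selected per distribution automatically.
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
echo "distribution: $distribution"
echo "list URL: https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list"
```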

Now you must check that the required device nodes exist.

Check that nvidia-smi works, then check /dev/nvidia* for nvidia0, nvidia-modeset, nvidia-uvm, and nvidia-uvm-tools.

You must have nvidia-uvm and nvidia-uvm-tools for your LXC to do hardware transcoding. You will most likely be missing some of these nodes, usually nvidia-modeset and nvidia-uvm; this is expected. I have put the required commands into the LXC config to automate creating them.
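A quick way to see which device nodes you have. This is a hypothetical helper (not part of the original guide), parameterized on the directory so it can be dry-run anywhere; the node names are the usual ones the driver creates:

```shell
# Report which of the NVIDIA device nodes exist. Missing nvidia-modeset
# and nvidia-uvm right after a reboot is expected; the LXC hook lines
# recreate them via nvidia-modprobe.
check_nvidia_devs() {
    dir="${1:-/dev}"
    for dev in nvidia0 nvidiactl nvidia-modeset nvidia-uvm nvidia-uvm-tools; do
        if [ -e "$dir/$dev" ]; then
            echo "present: $dev"
        else
            echo "missing: $dev"
        fi
    done
}
check_nvidia_devs /dev
```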

You must modify the LXC config and add these lines:

lxc.hook.pre-start: sh -c '[ ! -f /dev/nvidia-uvm ] && /usr/bin/nvidia-modprobe -c0 -u'
lxc.environment: NVIDIA_VISIBLE_DEVICES=all
lxc.environment: NVIDIA_DRIVER_CAPABILITIES=all
lxc.hook.mount: /usr/share/lxc/hooks/nvidia

If you are still having permission issues, also add this line to your config:

lxc.hook.pre-start: sh -c 'chown :100000 /dev/nvidia*'

lxc.hook.pre-start checks for the required device nodes and, if they are missing, creates them with the correct permissions.

lxc.environment loads all driver capabilities and supported devices.

Every time the server is restarted, nvidia-smi and nvidia-modprobe -c0 -u must be run to recreate the required device nodes. This is normally done by the applications on the host machine, but since an LXC is isolated, the nodes need to be created beforehand, which is why I put these commands in the LXC config for automation: I can do a system restart and not have to remember to run any commands to get my container working again.
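The pre-start hook's logic boils down to "create the node if it is missing". Here is a small sketch with the check parameterized (my own illustration, so the logic can be exercised without NVIDIA hardware); on the host the real form is the one-liner in the config:

```shell
# What lxc.hook.pre-start does on each boot: if /dev/nvidia-uvm is
# missing (normal after a host restart), run nvidia-modprobe to create
# it. The real call on the host is:
#   [ ! -f /dev/nvidia-uvm ] && /usr/bin/nvidia-modprobe -c0 -u
ensure_node() {
    node="$1"; shift          # device node to check
    if [ ! -e "$node" ]; then
        "$@"                  # command that creates the node
        echo "created: $node"
    else
        echo "already present: $node"
    fi
}
```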

This has all been tested with an Nvidia P2000 running driver 440.82. I have been working on a script on my GitHub (Saberwolf64); a link will be provided. It is still in progress, but the steps above should get you close, if not to a working state. P.S. You also need to install the Nvidia driver inside the LXC with the option --no-kernel-module, or the install will scream at you and fail.

Update: as of either Nvidia driver 455.28 or a Proxmox update, you no longer need to install the driver inside the LXC; it is now passed through to the container by the hook scripts and the new driver set.


Awesome guide! This seems easier than previous guides.

I’ve been using my Nvidia GPU with the following guide from @constiens

I'm curious what the differences are and which implementation is best to use. And does this container-runtime package also work on GeForce cards? This seems like a feature they'd only turn on for cards in the Quadro/Tesla line.

To my knowledge it works for all official drivers and is supported by both Proxmox and Nvidia via official repos.

Any issues using Driver Version 450.57?

I upgraded from 440.xx and the host Proxmox machine is able to install the drivers but when trying to install on Ubuntu 20.04 LXC it gets a seg fault and installation fails. There are barely any log details to go on.

Could you try it and see if you get the same issue, please?

EDIT: Seems to have been an issue with another library I was using for FFMPEG compilation. I am curious if anyone is having issues with PMS recognizing/using Driver Version 450.57. I still can't get Plex to properly use my GPU. nvidia-smi works fine and the card is found.

So I am probably going to rebuild a PC I have with Proxmox and FreeNAS. I was also hoping to get Plex on it as well. Your tutorial looks good and straightforward; I just had some newbie questions.

  1. Does it matter if I have AMD or intel CPU?
  2. Would you be able to provide 2-3 GPU’s (Cheap / Expensive) that will do hardware transcoding in plex with AMD CPU (Ryzen 5 3400G) ?

Anything else I should read up on prior to starting this?

Not to my current understanding, but I have not updated my system in a long time; once I get it running, I don't upgrade unless I need to.

CPU does not matter.

GPUs don't matter either, unless you try to mix AMD and Nvidia, which will cause problems. I suggest you just run a P2000 card; it can handle 7 concurrent 4K transcodes at a time. You will run into problems if you use subtitles, as transcoding will then fall back to your CPU instead of the GPU.

I currently run an Intel i7 2700 with a P2000 and it can support all my needs, including transcoding 4K on the fly. Make sure you are running a PCIe gen 3 motherboard to get everything out of the Quadro P2000; they cost around $300 used. Take my Discord info and if I am online I will help you get it set up and running: saberwolf#2114

@Saberwolf
thanks for the quick reply. I will give more details as I do not want to spend too much and use what I currently have. So I currently have:
CPU: AMD Ryzen 3400G
MB: ASUS Prime B450m-A/CSM
Memory: G.Skill Aegis 16gb (2x8) DDR4 3000 Kit
Case: A really old NZXT case, but I am too cheap to change it and it works.

Memory was fine when I was doing a bare-metal FreeNAS install, but I am OK scrapping this memory for more, maybe 64GB, as the MB supports 128GB. Since this will be a Proxmox server, I plan on running a few other things, so I am thinking I need to spend more $$$ on RAM.

I finally found the Novaspirit YouTube channel, which showed a ThinkStation with an Nvidia P400. That card can do 1 or 2 4K transcodes and I think 4 or so 1080p, and I would be OK with this, as it is only me who would be using this thing. The only negative is the video was a bare-metal install, not what I want to do. Although I may just buy the card now and figure out the rest later.

Look at this website for the support matrix and compare the P400 and P2000/P2200.
I would stick with at least 32 to 64 GB of memory. I am running Proxmox on 32 GB of 3000 MHz Corsair Vengeance LPX memory with a ZFS array and 5 other CTs running all at the same time, no issues.


I just used this guide step by step and it worked perfectly on the first try. Thank you @Saberwolf!!

I also ran your script on your Github page, and although it didn’t complete fully it did almost all the steps needed on its own. Again, thank you! Saved me hours of work I’m sure.

Hi!

I can't figure out the "user must modify lxc config and add lines" part of your excellent guide. Where do I place those lxc.hook.pre-start, lxc.environment, etc. lines? Do I copy-paste them into my 104.conf (my Plex LXC), or how does it work?

I'm a bit of a Proxmox beginner, so a bit of step-by-step help would be appreciated :slight_smile:

Thanks!

Let me know where it is broken so I can try to fix it again.

So there is a .conf file under /etc/pve/lxc/*.conf; these hold the config info for your LXC CTs.

They start at 100 and go up, usually. There have been some changes: you only need to install the Nvidia driver on Proxmox and add the runtime and toolkit, then set up the LXC CT as an unprivileged container with the hook.pre-start lines. You no longer need to do a second install of the Nvidia driver inside the LXC CT.

You must place the lines at the end of your config, as shown.
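As an illustration, appending the hook lines to a container's file under /etc/pve/lxc might look like this. CTID 104 is a placeholder (use your own), and the guard is there so the sketch fails gracefully when run off the Proxmox host:

```shell
# Append the NVIDIA hook lines to the end of an example container
# config. CTID 104 is a placeholder; substitute your container's number.
CONF="/etc/pve/lxc/104.conf"
if [ -d "$(dirname "$CONF")" ]; then
    cat >> "$CONF" <<'EOF'
lxc.hook.pre-start: sh -c '[ ! -f /dev/nvidia-uvm ] && /usr/bin/nvidia-modprobe -c0 -u'
lxc.environment: NVIDIA_VISIBLE_DEVICES=all
lxc.environment: NVIDIA_DRIVER_CAPABILITIES=all
lxc.hook.mount: /usr/share/lxc/hooks/nvidia
EOF
    echo "appended hook lines to $CONF"
else
    echo "no /etc/pve/lxc here; run this on the Proxmox host"
fi
```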

It gets to the point where it's trying to edit the .conf file. On my machine it did create the container OK, but my container was 107.conf and your script was looking for 100.conf, which caused a loop where it just kept showing 100-something on the screen over and over. I Ctrl-C'd to kill it, then made the final edits by hand and everything worked perfectly.

Oh, I know what I missed.

The script variable for the CTID was missing some code. Test it out from the start of #create LXC to the end of the file and see if it is still doing the same thing; that would be a big help.

The error was when I was starting the LXC: I run a bit of code to test for an IP state in the LXC so it can update, and it was using my testing code instead of the variable code.

It looks like this guide really only works for unprivileged containers. Unfortunately, I use an SMB share to store my data, and SMB shares are easier to work with in privileged containers. Any hints on how to get this working in a privileged container?

Also, thanks for what you’ve done so far. This has been working flawlessly for I don’t know how long. It was the pesky 6.2 update that screwed it up for me.

Yes, mount the share in Proxmox and pass it into the LXC as a mount point.

You could also create an NFS share. Your Windows installs would have to install a few more apps and setting up the share is more work, but the Linux side is a lot easier. It sounds like you have a NAS; check whether you can set up an NFS share and host it that way, which would make things easy.

For me, I pass my ZFS pool to my LXC and it has direct access to the media, because my ZFS is on my Proxmox install; I just have to pass the mount in my config.

here is a copy of what my config looks like

arch: amd64
hostname: Plex
memory: 4096
mp0: /media/store,mp=/media,mountoptions=nosuid,size=28000G
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=56:60:5D:6E:BF:C0,ip=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-lvm:vm-101-disk-1,size=88G
swap: 1024
unprivileged: 1
lxc.hook.pre-start: sh -c '[ ! -f /dev/nvidia-uvm ] && /usr/bin/nvidia-modprobe -c0 -u'
lxc.environment: NVIDIA_VISIBLE_DEVICES=all
lxc.environment: NVIDIA_DRIVER_CAPABILITIES=all
lxc.hook.mount: /usr/share/lxc/hooks/nvidia
lxc.hook.pre-start: sh -c 'chown :100000 /dev/nvidia*'
lxc.hook.mount: sh -c 'ln -fs $(readlink /etc/localtime) ${LXC_ROOTFS_MOUNT}/etc/localtime'

Thanks again for your help so far. I've gone through the process of converting my privileged container to an unprivileged container. I'm still unable to get the container to start with the lxc.hook.pre-start line in the config. I'll also add that I'm using driver 455.28. When my config looks like this:

lxc.environment: NVIDIA_VISIBLE_DEVICES=all
lxc.environment: NVIDIA_DRIVER_CAPABILITIES=all
lxc.hook.mount: /usr/share/lxc/hooks/nvidia
lxc.hook.pre-start: sh -c 'chown :100000 /dev/nvidia*'
lxc.hook.mount: sh -c 'ln -fs $(readlink /etc/localtime) ${LXC_ROOTFS_MOUNT}/etc/localtime'

the system boots. But when I add the following line:

lxc.hook.pre-start: sh -c '[ ! -f /dev/nvidia-uvm ] && /usr/bin/nvidia-modprobe -c0 -u'

the boot fails. Here's my full config:

arch: amd64
cores: 8
hostname: plex
memory: 16000
mp0: /mnt/pve/FreeNAS,mp=/media/freeNAS
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=B2:C2:41:AE:65:B0,ip=dhcp,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-100-disk-0,size=200G
swap: 512
unprivileged: 1
lxc.hook.pre-start: sh -c '[ ! -f /dev/nvidia-uvm ] && /usr/bin/nvidia-modprobe -c0 -u'
lxc.environment: NVIDIA_VISIBLE_DEVICES=all
lxc.environment: NVIDIA_DRIVER_CAPABILITIES=all
lxc.hook.mount: /usr/share/lxc/hooks/nvidia
lxc.hook.pre-start: sh -c 'chown :100000 /dev/nvidia*'