Server Version#: 4.35.1
Player Version#:
The steps that need to be completed
There is a list of things that need to be done because of NVIDIA's lack of support for automation.
NVIDIA does not create the correct device nodes without running an application first, which causes problems when setting up automation and LXCs, but NVIDIA and Proxmox have provided hook scripts to help automate this process.
Update Proxmox to the latest version in a shell prompt:
apt-get update && apt-get upgrade -qqy
Install the NVIDIA prerequisites:
apt-get install -qqy pve-headers-$(uname -r) gcc make
Install the latest NVIDIA driver from NVIDIA. (Just so you know, nouveau is the open-source driver for NVIDIA cards, and it does not support HW transcoding.)
Run the install script with the following options: --disable-nouveau --ui=none
Example:
bash NVIDIA-Linux-x86_64-440.82.run --disable-nouveau --ui=none
Install nvidia-persistenced
This is not installed automatically; after the NVIDIA driver is installed you must extract /usr/share/doc/NVIDIA_GLX-1.0/sample/nvidia-persistenced-init.tar.bz2
and run the install.sh inside it.
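A minimal sketch of that extraction step (the tarball path is the driver's default location from above; adjust it if your driver version differs):

```shell
#!/bin/sh
# Extract the persistenced sample tarball shipped with the NVIDIA driver
# and run its installer. Prints a hint if the driver is not installed yet.
TARBALL=/usr/share/doc/NVIDIA_GLX-1.0/sample/nvidia-persistenced-init.tar.bz2
if [ -f "$TARBALL" ]; then
    WORK=$(mktemp -d)
    tar -xjf "$TARBALL" -C "$WORK"
    (cd "$WORK/nvidia-persistenced-init" && ./install.sh)
else
    echo "tarball not found: $TARBALL (is the NVIDIA driver installed?)"
fi
```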
Install the NVIDIA Container Runtime
Now you must add the nvidia-container-runtime repo and key. The NVIDIA container runtime is needed for the Proxmox hooks to load without crashing your container.
curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | tee /etc/apt/sources.list.d/nvidia-container-runtime.list
apt-get update
apt-get install -qqy nvidia-container-runtime
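A quick sanity check that the package landed (a sketch using standard dpkg; not part of the original steps):

```shell
#!/bin/sh
# Check whether the nvidia-container-runtime package is installed.
if dpkg -s nvidia-container-runtime >/dev/null 2>&1; then
    STATUS="installed"
else
    STATUS="NOT installed"
fi
echo "nvidia-container-runtime: $STATUS"
```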
Now you must check that you have the following device nodes.
Check that nvidia-smi works, then check /dev/nvidia* for nvidia0, nvidia-modeset, nvidia-uvm, and nvidia-uvm-tools.
You must have nvidia-uvm and nvidia-uvm-tools for your LXC to do hardware transcoding. You are going to be missing some of those nodes, most likely nvidia-modeset and nvidia-uvm; this is expected. I have put the required commands into my LXC config to start the automation.
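That check can be scripted; a small sketch that reports which of the expected nodes exist (node names per the list above):

```shell
#!/bin/sh
# Report which NVIDIA device nodes exist under /dev.
# Missing nvidia-modeset / nvidia-uvm entries on a fresh boot are expected;
# the pre-start hook's nvidia-modprobe call creates them.
check_nvidia_devs() {
    for dev in nvidia0 nvidiactl nvidia-modeset nvidia-uvm nvidia-uvm-tools; do
        if [ -e "/dev/$dev" ]; then
            echo "present: /dev/$dev"
        else
            echo "missing: /dev/$dev"
        fi
    done
}
check_nvidia_devs
```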
The user must modify the LXC config and add the following lines:
lxc.hook.pre-start: sh -c '[ -e /dev/nvidia-uvm ] || /usr/bin/nvidia-modprobe -c0 -u'
lxc.environment: NVIDIA_VISIBLE_DEVICES=all
lxc.environment: NVIDIA_DRIVER_CAPABILITIES=all
lxc.hook.mount: /usr/share/lxc/hooks/nvidia
lxc.hook.pre-start: sh -c 'chown :100000 /dev/nvidia*' (only add this line if you are still having permission issues)
The lxc.hook.pre-start lines search for the required device nodes and, if they are missing, create them with the correct permissions.
The lxc.environment lines load all driver capabilities and supported devices.
Every time the server is restarted you must run the nvidia-smi tool and nvidia-modprobe -c0 -u to create the required device nodes. This is normally done by the applications on the host machine, but because an LXC is isolated, the nodes need to exist before the container starts, which is why I put those commands in the LXC config for automation: I can do a system restart and not have to remember to run commands to get my container working again.
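What the hook automates can also be run by hand after a host reboot; a sketch, guarded so it just prints a hint on a machine without the driver:

```shell
#!/bin/sh
# Manually recreate the NVIDIA device nodes after a host reboot.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi              # loads the module, creates /dev/nvidia0 + /dev/nvidiactl
    nvidia-modprobe -c0 -u  # creates the /dev/nvidia-uvm* nodes
    RESULT="device nodes created"
else
    RESULT="nvidia-smi not found: install the NVIDIA host driver first"
fi
echo "$RESULT"
```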
This has all been tested with an NVIDIA P2000 running NVIDIA driver 440.82. I have been working on a script on my GitHub (Saberwolf64); it is still in progress, but the above should get you close, if not to a working state. P.S. You also need to install the NVIDIA driver inside the LXC with the option --no-kernel-module, or the install will scream at ya and fail.
Update: as of either NVIDIA driver 455.28 or a newer Proxmox release, you no longer need to install drivers in the LXC; they are now passed through to the container by the hook scripts and the new NVIDIA driver set.