Plex VM via ESXi having high CPU with transcodes

Server Version#: Ubuntu Server 18.04.3
Player Version#: 1.18.1.1973

Hello all,

I’m trying to figure out the best way to optimize transcoding on my Plex server. I’ve searched both Google and these forums, and there seems to be mixed information. Setup below:

ESXi 6.7.0 Update 2 (hyperthreading re-enabled despite the Spectre/Meltdown updates to get better performance [and the management interface is locked down outbound, with nothing allowed inbound]), 2x E5-2630 Xeons (6 cores / 12 threads each, 24 threads total), 64GB memory, LSI RAID controller running 3x 3TB in RAID5.

I am seeing that transcoding a single stream pushes my CPU usage to 80% (in both ‘top’ and the Plex dashboard). Changing the transcoding quality setting to any of the options (e.g. automatic, make my CPU hurt, etc.) and moving the background transcoding x264 preset from fastest to slowest seems to make little difference.
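For anyone wanting to reproduce this, a quick sanity check I ran inside the guest to confirm the load really is the transcoder and not something else in the VM (this assumes the default Linux process name, “Plex Transcoder”):

```shell
# Show the top 5 processes by CPU usage; the Plex Transcoder
# should be at or near the top during an active transcode.
ps -eo pcpu,pmem,comm --sort=-pcpu | head -n 5
```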

When I originally built out the VM, I assigned it 8 vCPUs spread over 4 sockets. I have since changed it to 8 vCPUs over 2 sockets, as I really don’t know why I picked 4 sockets in the first place (sounds stupid now). After the change, it still jumps to 80-90% for the first 30 seconds, and then drops to about 20%.
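For what it’s worth, you can verify the topology the guest actually sees after making the ESXi change, without rebooting anything twice. A sketch using /proc/cpuinfo (present on any Linux guest):

```shell
# Logical CPUs the guest sees (should print 8 in my setup)
grep -c '^processor' /proc/cpuinfo

# Distinct sockets the guest sees (should print 2 after the change)
grep '^physical id' /proc/cpuinfo | sort -u | wc -l
```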

Now for the question: I’m going to stress test the number of simultaneous streams I can push my server to, but does it make more sense to upgrade my CPUs to newer Xeons (such as the E5-XXXX v2/v3, which may support QSV), or instead to buy a GPU to help transcode (maybe something like a Quadro)? Also, has anyone else experienced issues with transcoding in a VM vs. a desktop/NAS solution? One of my friends has a quad-core Atom NAS that handles transcoding better, and I’m wondering if it’s a VMware limitation rather than a hardware problem.

Any insight would be greatly appreciated.

Thanks!

In any VM implementation, there will always be some performance overhead inherent to virtualization itself.

Linux systems, with their flexibility in creating mount points for media, plus the universal tarball (.tar) for moving an install around, essentially negate (imho) the need for virtualization UNLESS the requirement is to stand up a fully distinct server.

The benefit of running natively is that QSV, if the CPU supports it, is automatically available to the OS, whereas in a VM the device must be manually passed through. Some hypervisors do not make that easy, or even possible.
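To make the passthrough point concrete: a sketch of checking, from inside the guest, whether a QSV-capable device actually made it through. Plex’s hardware transcoder on Linux needs a DRI render node visible to the VM; if it isn’t there, transcoding silently falls back to software:

```shell
# If passthrough worked you'll see renderD128 (or similar) listed;
# if /dev/dri is missing, no GPU is visible to this VM at all.
ls -l /dev/dri/ 2>/dev/null || echo "no /dev/dri - no GPU visible to this VM"
```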

The CPU you referenced (https://ark.intel.com/content/www/us/en/ark/products/64593/intel-xeon-processor-e5-2630-15m-cache-2-30-ghz-7-20-gt-s-intel-qpi.html) has no QSV, but I assume you know this. It will always be a full-CPU slugger.

Thanks for the reply, and sorry for the delayed response. Changing the VM from 4 sockets with 2 cores each to 2 sockets with 4 cores each has alleviated the majority of the high CPU usage on a single transcode. I stress tested and was able to run about 6-7 1080p streams simultaneously without any issues. I have now compressed my entire /var/lib/plexmediaserver/ directory and migrated it onto a Mac mini that’s been collecting dust, and I’m going to test locally to see whether performance is any better there than in the VM.
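For anyone following along, the migration step looks roughly like this. This is a sketch assuming the default Linux install path; stop Plex first so the SQLite databases aren’t copied mid-write:

```shell
# Stop Plex so the library databases are quiescent
sudo systemctl stop plexmediaserver

# Compress the whole data directory into one tarball
sudo tar -czf plex-backup.tar.gz -C /var/lib plexmediaserver

# Copy plex-backup.tar.gz to the new machine, then unpack with:
#   sudo tar -xzf plex-backup.tar.gz -C /var/lib

sudo systemctl start plexmediaserver
```

On the Mac mini side the destination path differs (Plex on macOS keeps its data under ~/Library/Application Support), so treat the unpack path above as Linux-to-Linux only.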

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.