Server Version#: 1.18.4.2171
I have two systems with broadly similar hardware specs (mainly CPU), but I am seeing very different transcode times when syncing to my iOS device, with sync quality set to Original.
When System 2 does a sync, the Intel GPU shows 70-80% utilization during the transcode portion and completes far faster than System 1. System 1 shows the Intel GPU at only 20-25% utilization.
System 1 - Name in logs is Plex-Server
Windows 10
Intel Core i7-8700K CPU with Intel UHD 630 graphics
Radeon RX Vega 64 8GB Dedicated GPU
32GB RAM
256GB NVMe for the OS
Various Seagate IronWolf 2-4TB spinners, pooled with DrivePool, for storing media.
I am forcing Plex to use the Intel GPU via the Windows graphics settings, assigning plextranscoder.exe to the Intel GPU (see the registry sketch below the system specs).
System 2 - named Plex-VM in logs; a VM running on ESXi 6.7 with GPU passthrough from a Hades Canyon NUC; the AMD GPU is disabled, so only the Intel HD 630 is in use
Windows 10
Intel Core i7-8809G CPU
4GB RAM
VM storage is NFS on a Synology DS918; the media files are on the same NAS. The network is 1 Gb throughout.
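For reference, the per-app assignment I mentioned for System 1 can also be written directly to the UserGpuPreferences registry key that the Windows graphics settings page stores its choices in. A minimal Python sketch, assuming the default Plex install path (the transcoder path on your machine may differ):

```python
# Sketch only: writes the same per-app GPU preference that the Windows
# graphics settings page stores under UserGpuPreferences.
import winreg

# Assumed default install path -- adjust to wherever your transcoder lives.
exe_path = r"C:\Program Files (x86)\Plex\Plex Media Server\Plex Transcoder.exe"

key_path = r"Software\Microsoft\DirectX\UserGpuPreferences"
with winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path) as key:
    # GpuPreference=1 -> "Power saving" (the integrated Intel GPU here);
    # GpuPreference=2 -> "High performance" (the discrete Vega 64).
    winreg.SetValueEx(key, exe_path, 0, winreg.REG_SZ, "GpuPreference=1;")
```

I set it through the UI, but the stored value is the same either way; new transcoder processes should pick the preference up on their next launch.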
My expectation was that System 1 would be slightly faster than System 2, not the other way around: it has higher PassMark scores and a more powerful CPU. I'm wondering why I see only around 25% GPU utilization on System 1 compared to System 2, when neither system is doing anything else except Plex.
Some examples, using the same source file within each test but a different source for each test:
Test 1 - TV Show X
Plex-Server: 5:03 start, 5:27 end = 24m
Plex-VM (GPU passthrough): 5:11 start, 5:30 end = 19m

Test 2 - TV Show Z
Plex-Server: 5:39 start, 6:00 end = 21m
Plex-VM: 5:39 start, 5:45 end = 5m
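To put numbers on the gap (a throwaway sketch, nothing more than the clock times above):

```python
# Just arithmetic on the wall-clock times listed above.
from datetime import datetime

def elapsed_minutes(start: str, end: str) -> int:
    fmt = "%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).seconds // 60

print(elapsed_minutes("5:03", "5:27"))  # Test 1, Plex-Server: 24
print(elapsed_minutes("5:11", "5:30"))  # Test 1, Plex-VM:     19
print(elapsed_minutes("5:39", "6:00"))  # Test 2, Plex-Server: 21
print(elapsed_minutes("5:39", "5:45"))  # Test 2, Plex-VM:      6 (I logged ~5m)

# Test 1 is only a ~1.25x gap, but Test 2 is roughly 3.5-4x in the VM's favor.
```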
Plex-Server Media Server Logs_2019-12-31_19-28-52.zip (3.9 MB)
Plex-VM Media Server Logs_2019-12-31_19-28-28.zip (3.2 MB)