My curiosity lies in how Plex Server for Windows handles core/thread utilization…
I’m running my Plex server off an IBM/Lenovo x3650 M5 with dual-socket E5-2630 v3 CPUs & wondered: how good a job does PMS for Windows do at multi-core/multi-thread utilization? In Performance Monitor on Server 2012 R2, I can see that each currently running stream appears to get a dedicated process assigned to it, but does that process get to utilize as many cores/threads as are available to it?
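(If anyone wants to sanity-check this on their own server, here's the kind of thing I'd run. It's just a rough sketch using Python's psutil package, nothing Plex ships; the transcoder process name is a guess on my part, so match it to whatever shows up in Performance Monitor or Task Manager.)

```python
# Quick check: is the transcoder process multi-threaded, and how much CPU
# is it actually using? Requires: pip install psutil
import psutil

# Hypothetical process name; adjust to what your system actually shows.
TRANSCODER_NAME = "Plex Transcoder"

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] and TRANSCODER_NAME.lower() in proc.info["name"].lower():
        # cpu_percent() can exceed 100 on multi-core boxes:
        # 100% means one logical core fully busy.
        busy = proc.cpu_percent(interval=1.0)
        print(f"pid={proc.pid} threads={proc.num_threads()} cpu={busy:.0f}%")
```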
This question was spurred by me re-downloading the seasons of Game of Thrones (re-watching while building up to S06) & trying to find the absolute highest-quality versions possible. I was able to find BluRay 1080p w/ Dolby Atmos TrueHD 7.1 (I do have a 5.1 Klipsch home theater setup) & each one-hour episode is about 4GB apiece.
Went to watch the first episode & got some odd artifacting, which did go away after a handful of seconds, but it got me thinking, so I checked my server's utilization & holy hell, that single stream, being transcoded, was hitting 20% system utilization on a dual-socket server with 8 cores (16 threads) per CPU! I haven't seen a single stream, even in 4K, ever pound my server that way.
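(Back-of-the-envelope math on that 20%, assuming all 32 logical processors are visible to Windows: a single maxed-out thread could only ever show up as about 3% system utilization, so 20% works out to roughly six and a half logical cores' worth of work.)

```python
logical_cpus = 32          # 2 sockets x 8 cores x 2 threads (Hyper-Threading)
system_util = 0.20         # observed in Performance Monitor

one_thread_ceiling = 1 / logical_cpus               # ~0.031, i.e. ~3%
busy_cores = system_util * logical_cpus             # 0.20 * 32 = 6.4

print(f"single-thread ceiling: {one_thread_ceiling:.1%}")   # 3.1%
print(f"equivalent busy logical cores: {busy_cores:.1f}")   # 6.4
```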
So after all that, my question boils down to: when transcoding, does PMS for Windows actually utilize as many cores as are available for that stream, or is each stream forced to use a single core/thread at a time? That could be part of the problem I experienced, since while I have 16 physical cores (32 threads), they're clocked at 2.4GHz with a Turbo Boost of 3.2GHz, so not THE most beastly clocked cores possible.
Hope that makes sense & thanks all for any input!
@cwickstylz said:
…when transcoding, does PMS for Windows actually utilize as many cores as are available for that stream, or is each stream forced to use a single core/thread at a time?
It's a 32-bit app, so no, it doesn't utilize everything that's possible on Windows.
So while it is indeed a 32-bit application, doesn't that mostly relate to the amount of memory the process can utilize? Knowing it's 32-bit, though, probably does mean each transcode will only use a single thread, which might be why my system choked a bit on those 4GB HEVC 1080p TrueHD streams… Thanks!
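That said, my understanding is that bitness mainly caps the address space (a 32-bit process tops out at 4GB addressable, often less on Windows) rather than the thread count, so a 32-bit process can still be multi-threaded. A minimal sketch (plain Python, nothing Plex-specific) illustrating both points:

```python
import struct
import threading

# Pointer width tells you the bitness of *this* interpreter process:
# 4 bytes -> 32-bit, 8 bytes -> 64-bit.
bits = struct.calcsize("P") * 8
print(f"this process is {bits}-bit")

# A 32-bit process is limited to a 4GB address space (2GB user space by
# default on Windows, 4GB if LARGEADDRESSAWARE under WOW64), but bitness
# places no special limit on how many threads it can run:
workers = [threading.Thread(target=lambda: None) for _ in range(8)]
for t in workers:
    t.start()
for t in workers:
    t.join()
print(f"ran {len(workers)} threads just fine")
```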