Server Version#: 1.40.2.8395
Player Version#:
Just wanted to see if this is normal behavior. I just upgraded to the latest server version to try out the new Intel iGPU HDR tone mapping support on Windows; until now, any 4K HDR material has been transcoded via my RTX 4070.
After selecting the iGPU in the transcoder settings, interestingly enough it still slightly utilizes the 4070 and loads the CPU (a 12700K) to about 35% according to Task Manager. I assumed the load would come off the CPU by switching back to the 4070 exclusively, but it still reports 30-40% CPU utilization during a tone-mapped transcode. Standard SDR transcodes are also now utilizing the NVIDIA GPU, when I would expect those to be handled exclusively by Quick Sync.
Interestingly, the Plex dashboard's CPU graph only shows around 5% usage when the CPU is actually loaded much higher.
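For reference, these are the Performance Monitor counter paths I've been watching to cross-check what Task Manager shows (the GPU Engine instance names vary by driver, so the wildcards below are just how I've been filtering — adjust as needed):

```
\Processor(_Total)\% Processor Time
\GPU Engine(*engtype_VideoDecode)\Utilization Percentage
\GPU Engine(*engtype_VideoEncode)\Utilization Percentage
```

Watching the decode/encode engines separately is what made it obvious that both the iGPU and the 4070 were being touched at the same time.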
Am I experiencing some kind of bugged performance right now, or is this normal? I'm thinking it's not, given that people often talk about being able to do multiple tone-mapped transcodes with ease via hardware acceleration. Any insight or advice would be greatly appreciated.