This is a carryover of a feature request / discussion in this thread:
Currently, when viewing HDR content on an SDR display via Plex, the picture is washed out. This is no fault of Plex; SDR TVs simply can’t reproduce the requested dynamic range and color depth, and as a result the entire picture loses vibrancy.
The best current option I know of to “properly” view HDR content on an SDR display in real time is the MadVR plugin along with a media player such as MPC-HC. MadVR seems to do a great job of tone-mapping HDR content (10-bit color depth) down to SDR (8-bit color depth).
It would be great if a similar feature could be implemented within Plex, so that I could avoid switching to a different media player whenever I view my UHD movies.
Just read through the last thread. I’m very much looking forward to some sort of tone mapping conversion when going from 10-bit to 8-bit. I just started viewing 4K content and had never used the optimize feature before. Works great for non-HDR screens except for the tone mapping issue.
One question I’m curious about: would tone mapping during transcoding require additional processing power? My current setup already takes longer than the movie’s duration to transcode.
You do realise that the standard for HDR-to-SDR colour down-conversion hasn’t even been written yet. It might take a while; there are a lot of arguments happening.
@Stephen3001 said:
You do realise that the standard for HDR-to-SDR colour down-conversion hasn’t even been written yet. It might take a while; there are a lot of arguments happening.
It doesn’t need to follow a standard, at least for a quick and dirty temporary solution, right? The start (HDR) and end (SDR) each follow their own standards; it’s only the conversion between them that isn’t agreed upon. So they can convert however they want, simply to reach an SDR version that is vastly more acceptable than what we get now. Then, if they want to move to the standard conversion algorithm later, they can. I understand it may be more work altogether (doing it twice), but as it stands the current handling is wrecking the media… it’s no solution at all.
@EricGRIT09 said:
It doesn’t need to follow a standard, at least for a quick and dirty temporary solution, right? The start (HDR) and end (SDR) each follow their own standards; it’s only the conversion between them that isn’t agreed upon. So they can convert however they want, simply to reach an SDR version that is vastly more acceptable than what we get now. Then, if they want to move to the standard conversion algorithm later, they can. I understand it may be more work altogether (doing it twice), but as it stands the current handling is wrecking the media… it’s no solution at all.
The problem with that is you will run into the same problems the standards people are having: remapping the colours correctly so that an accurate representation is possible. The complaints will start with ‘Why isn’t my red red?’ and ‘Why is the sky not blue?’.
There is some complicated maths involved in doing this; just churning out a quick and dirty solution would probably give you a picture just as bad as the one you have now.
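For a sense of what that maths looks like: the sketch below decodes a 10-bit SMPTE ST 2084 (PQ) code value to linear light and then compresses it toward SDR range with a simple Reinhard curve. The PQ constants come from the ST 2084 spec; the choice of Reinhard as the tone-mapping operator is purely illustrative, not something any standard mandates.

```python
# Sketch: decode a 10-bit PQ (SMPTE ST 2084) code value to linear light,
# then compress it to SDR range with a naive Reinhard curve.
# PQ constants are from ST 2084; the tone-mapping operator is illustrative only.

M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_eotf(signal: float) -> float:
    """Map a normalized PQ signal (0..1) to luminance in cd/m^2 (0..10000)."""
    p = signal ** (1 / M2)
    return 10000 * (max(p - C1, 0) / (C2 - C3 * p)) ** (1 / M1)

def reinhard(luminance: float, peak_sdr: float = 100.0) -> float:
    """Naive Reinhard compression toward an SDR peak of ~100 cd/m^2."""
    scaled = luminance / peak_sdr
    return peak_sdr * scaled / (1 + scaled)

# Example: 10-bit code word 520 decodes to roughly 100 nits under PQ.
code = 520 / 1023
nits = pq_eotf(code)
sdr_nits = reinhard(nits)
print(f"{nits:.1f} nits HDR -> {sdr_nits:.1f} nits after tone mapping")
```

Even this toy version shows where the arguments come from: every choice (the target peak, the shape of the compression curve, how saturation is handled) changes which colours survive intact.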
@Stephen3001 said:
The problem with that is you will run into the same problems the standards people are having: remapping the colours correctly so that an accurate representation is possible. The complaints will start with ‘Why isn’t my red red?’ and ‘Why is the sky not blue?’.
There is some complicated maths involved in doing this; just churning out a quick and dirty solution would probably give you a picture just as bad as the one you have now.
I disagree: a quick and dirty conversion will actually convert the color space and get close. Right now we are losing 90% of the color data, so it is absolutely terrible. Short term it doesn’t need to be accurate, just close. It’s absolutely possible to convert between the color spaces today; the question is whether the Plex devs are going to spend the time to do so before a standard comes along (is it worth it?).
The problem is probably that doing this in ffmpeg requires filters, which I believe only run on one CPU core, which would make it close to impossible for real-time use. I don’t know this for a fact, as I haven’t tested what kind of fps this could do.
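On the “absolutely possible” point: the colour-space half of a quick and dirty conversion really is just linear algebra. The sketch below applies the widely published 3×3 matrix that maps linear BT.2020 RGB to linear BT.709 RGB. This handles the gamut conversion only; tone mapping of the brightness range is the separate, harder problem being argued about above.

```python
# Widely published 3x3 matrix converting linear-light BT.2020 RGB to
# linear-light BT.709 RGB. Gamut conversion only; out-of-gamut colours
# come out negative and still need clipping or mapping, and the HDR
# brightness range still needs tone mapping separately.
BT2020_TO_BT709 = [
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
]

def convert_pixel(rgb):
    """Apply the matrix to one linear-light RGB triple (values 0..1)."""
    return [sum(m * c for m, c in zip(row, rgb)) for row in BT2020_TO_BT709]

# Sanity check: white maps to (approximately) white, black to black.
print(convert_pixel([1.0, 1.0, 1.0]))
print(convert_pixel([0.0, 0.0, 0.0]))
```

Doing this per pixel in a single-threaded CPU filter is exactly the kind of work that makes real-time transcoding hard, which supports the fps concern above.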
Just a couple of weeks ago I saw some correspondence about BT.2390 and mapping between SDR and HDR within the spec, and how different monitors were handling the standard (which they weren’t).
For the near future, this type of thing is going to be best handled using whatever method is available.
Right now it can be handled in ffmpeg, but it’s slow. Imagine having to decode H.265, then run it through CPU filter(s) (single-core), then encode again.
The present solutions could certainly be used to create “Optimized” versions (a second copy) but probably not for real-time use.
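For anyone who wants the “Optimized second copy” approach today, here is a sketch of scripting it around ffmpeg. The filter chain is the zscale + tonemap pipeline commonly used for HDR10-to-SDR conversion (it requires an ffmpeg build with libzimg); the file names are placeholders, and the hable operator and x264 settings are illustrative choices, not anything Plex itself does.

```python
import subprocess

# Filter chain commonly used for HDR10 -> SDR with ffmpeg's zscale + tonemap
# filters (requires an ffmpeg build with libzimg). The 'hable' operator and
# the x264 settings below are illustrative choices.
TONEMAP_CHAIN = ",".join([
    "zscale=t=linear:npl=100",        # PQ -> linear light, 100-nit target
    "format=gbrpf32le",               # high-precision RGB for the tone mapper
    "zscale=p=bt709",                 # convert primaries BT.2020 -> BT.709
    "tonemap=tonemap=hable:desat=0",  # compress the HDR range down to SDR
    "zscale=t=bt709:m=bt709:r=tv",    # back to BT.709 transfer/matrix, TV range
    "format=yuv420p",                 # 8-bit output for SDR clients
])

def build_cmd(src: str, dst: str) -> list:
    """Build (but do not run) an ffmpeg command for an offline SDR copy."""
    return ["ffmpeg", "-i", src, "-vf", TONEMAP_CHAIN,
            "-c:v", "libx264", "-crf", "18", "-c:a", "copy", dst]

cmd = build_cmd("movie.hdr.mkv", "movie.sdr.mkv")  # placeholder file names
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually transcode
```

Because this runs offline, the single-core filter bottleneck only costs time up front rather than breaking playback.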
What we really need is the ability to have the client do the tone mapping!
I am no expert on the topic, but do any of the hardware-acceleration chips (be it server-side in newer CPUs/GPUs, or client-side in set-top boxes that handle HEVC in hardware) have provisions to do tone mapping, or do they strictly decode and encode HEVC frames? It seems they must, because you can take a UHD Blu-ray player with an HDR disc and play it on a non-HDR 4K panel and the colors look correct, right?
Yes, right now the biggest challenge seems to be the amount of compute resources ffmpeg needs to do this. From that article:
It makes sense to tune encoding speed to available CPU resources. The tone-mapping algorithm places 100% load on a single CPU core. Null output of our tone-mapping setup results in an average playback speed of about 4.3 frames per second on an AMD Ryzen 7 1800X CPU running at 3.8 GHz. This leaves 7 cores open for encoding. Choosing an encoding speed which results in a general FPS rate of 4.3 frames per second or less would be the optimal choice.
Hopefully the transcoding wizards can find a hardware-assisted realtime approach to the problem, as this functionality is really important and really needed in PMS.
Another option might be to allow the Optimize feature to create offline SDR tone-mapped versions of 4K HDR content that get used by PMS when playing to non-HDR capable clients…
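To put the quoted figures in perspective, a quick back-of-the-envelope calculation (using the 4.3 fps number from the article and a typical film frame rate) shows how far from real time that CPU is:

```python
# Back-of-the-envelope from the figures quoted above: the tone-mapping
# filter alone tops out at ~4.3 fps on one core of a Ryzen 7 1800X.
tonemap_fps = 4.3        # null-output throughput of the filter chain
content_fps = 23.976     # typical film frame rate

realtime_ratio = tonemap_fps / content_fps   # fraction of real time achieved
hours_per_movie_hour = 1 / realtime_ratio    # transcode time per hour of video

print(f"Filter runs at {realtime_ratio:.0%} of real time; "
      f"each hour of video needs ~{hours_per_movie_hour:.1f} hours to tone-map.")
```

That ~5.6x slowdown is why the offline “Optimize” route looks far more practical than real-time transcoding on that hardware.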
@cayars I think the quote that @millercentral posted from @Afullmark’s article confirms what you said about ffmpeg only using one CPU core when running the filters.