New PMS specs - P2200 or RTX4000?

I am about to upgrade my Plex Media Server. Until recently I have been running a rather basic box in my rack. It is dedicated tin, but not beefy (i3-8100 + RX460). This little gem was built purely to support my home cinema + a couple of transcodes. Most of my media is 4K remux.

I have a small but regular group of friends who are enjoying my media and I need more transcoding grunt. I have some components I can use and I will buy another GPU to help with H/W assisted decode/encode.

My upload is limited to 20Mb/s, so I limit outgoing traffic to 4Mbps, which means 4 concurrent transcodes is all I am looking to support. However, they might all be 4K remux down to 720p.
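A quick sanity check on those numbers, assuming the 4Mbps cap is applied per outgoing stream and keeping a little headroom (the 20% figure is just an assumption):

    # Rough upload budget check (numbers from the post above).
    upload_mbps = 20.0        # total upload available
    per_stream_mbps = 4.0     # cap applied to each outgoing transcode (assumed per-stream)
    headroom = 0.8            # keep ~20% spare for other traffic (assumption)

    max_streams = int(upload_mbps * headroom // per_stream_mbps)
    print(max_streams)        # -> 4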

I have a spare 2920X lying about that I will use for a processor. For a GPU, should I pick up the P2200 or splurge on the RTX 4000?

Advice appreciated.

Ok, first things first, and I mean no disrespect, but, you’re doing it wrong.

If all of your media is truly 4K remux, then it’s also likely all HDR instead of SDR. Since Plex has no tone mapping in its transcoding process, all of your friends will get a washed-out image on their screen whether they have HDR-capable televisions or not.

You NEVER want to transcode 4K content. 4K HEVC is a PITA to transcode. While a video card for HW transcoding will give you what you want, it’s far easier to just take the Blu-rays that already accompany your 4K UHD discs, rip them, and pre-convert them to 720p at whatever bitrate you find acceptable. 720p files at an acceptable bitrate (say 2Mbit in x265 and 4Mbit in x264) are relatively small when compared to your 4K remuxes. They also won’t need to be transcoded on the fly, saving you time, money, and electricity.
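If you want to script that pre-convert step, here is a minimal sketch using ffmpeg from Python; the file names are placeholders, and the 4Mbit x264 / AAC settings simply mirror the numbers above rather than being a firm recommendation:

    import subprocess

    # Sketch: pre-convert a rip to a ~4 Mbit 720p x264 file so it can be
    # served without on-the-fly transcoding.
    # File names and bitrates are illustrative assumptions.
    src = "Movie.2019.Remux.mkv"
    dst = "Movie.2019.720p.x264.mkv"

    subprocess.run([
        "ffmpeg", "-i", src,
        "-vf", "scale=-2:720",          # scale to 720p, keep aspect ratio
        "-c:v", "libx264", "-b:v", "4M",
        "-c:a", "aac", "-b:a", "192k",  # assumed audio settings
        dst,
    ], check=True)

Swapping libx264 for libx265 (and dropping the bitrate to around 2Mbit) would match the x265 suggestion above instead.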

If you REALLY want to go the route you’re wishing to take, it wouldn’t really matter which of those 2 cards you chose. The P2200 is more cost effective and would be the better choice imo. (Note: I edited this paragraph because I didn’t realize you were talking about the other Quadro card; NVIDIA recently announced the next generation of RTX and I assumed that’s what you meant at first.) Although if you played your cards right, you could spend even less money on a GTX card and unlock it for unlimited transcodes; see here: https://github.com/keylase/nvidia-patch/blob/master/win/README.md

For reference, I have a 1070 Ti and have gone up to 13 HW transcodes (all HEVC 1080p -> x264 1080p sessions via various browsers) in testing and it didn’t even break a sweat. So unless you also plan to speed up a 3D modelling/rendering program or some deep learning tasks, the RTX 4000 seems a bit overkill simply for a Plex server.


What Jason said.

See also the “Plex, 4k, transcoding, and you - aka the rules of 4k - a FAQ” thread, which may or may not be helpful.


Also, try pulling the RX-460.

The i3-8100 can handle multiple 1080p transcodes, possibly outperforming the RX-460 (here’s a report of the RX-580 maxing out at seven transcodes).

From i3-8100 datasheet:

Not sure if you meant to use that table; it’s showing decode streams, not encode streams. Also, I would personally steer clear of ATI/AMD GPU products when it comes to hardware transcoding, especially for the x264 codec (currently Plex’s only option): the image quality compared to NVIDIA is terrible (there are several YouTube videos comparing the two platforms), and even on the speed at which they decode and encode in, say, Handbrake, they fail miserably against similarly priced NVIDIA cards. That’s just my 2 cents. The AMD offerings do pretty well image-quality-wise on x265 though, so maybe when Plex implements x265 encoding I may change my mind.

Yes, to show that the 8100 handles decoding HEVC 8 & 10 bit, and to show the expected performance. I don’t know if the 8100 can decode & encode 16 simultaneous streams, but it should easily handle 4 1080p streams (no idea about 4K).

Plex encodes everything to H.264 8-bit, so I didn’t pull the encode table.

Makes sense, but decoding is “easy”: most of the time, even with no hardware support for decoding, you could still get by decoding x265 HEVC on the CPU and leave the heavy lifting of encoding to the GPU. But I take your point :slight_smile: I was more focused on OP’s original question about transcoding the content from 4K to 720p… for that, the encode part would be the most intensive imo.

Good point on decoding being easy compared to encoding.

Got curious and took another look at the datasheet. The encoding section does not have any information on expected performance.

However, the follow-on section, on Transcoding, does provide expected performance numbers of 12 to 18 1080p30 transcodes.

The only real way to know Plex’s performance for 4K -> 720p is for OP to try. But it’s nice to know the expected numbers are well above the requirement of 4 simultaneous transcodes.
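If OP wants a rough feel for that headroom before buying anything, one crude option (a sketch only; it assumes ffmpeg is installed, and the file name and stream count are placeholders) is to start several software transcodes in parallel and watch whether each one stays above real-time speed:

    import subprocess

    # Crude stress test: launch N simultaneous software transcodes of the same
    # 4K HEVC sample and check that each one's reported speed stays >= 1x.
    src = "sample.2160p.hevc.mkv"   # placeholder file name
    n = 4                           # number of simultaneous transcodes to try

    procs = [
        subprocess.Popen([
            "ffmpeg", "-i", src,
            "-vf", "scale=-2:720",
            "-c:v", "libx264", "-b:v", "4M",
            "-f", "null", "-",      # discard the output; we only care about speed
        ])
        for _ in range(n)
    ]
    for p in procs:
        p.wait()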


Thanks JasonNalley

I was going to launch into my reasoning for why spending money on hardware was a better solution than having different versions of the same content. But then I stopped, breathed, and stepped back to look at what you and @TeknoJunky have been preaching.

And you are right.

The best solution is to have a version of my content that is easy to transcode and share. I am part way through converting all my content to H.264 1080p and storing it in a separate library just for sharing, and I am removing my 4K remux library from sharing completely.

Storage is cheap, the content is better and the hardware is under less stress.
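For anyone attempting the same bulk conversion, here is a rough sketch of one way to script it (the paths, the 8Mbit target, and the ffmpeg settings are assumptions for illustration, not what I actually used):

    import pathlib
    import subprocess

    # Sketch: mirror a 4K remux library into a 1080p x264 "sharing" library.
    # Paths and the 8 Mbit video bitrate are illustrative assumptions.
    remux_lib = pathlib.Path("/media/remux-4k")
    share_lib = pathlib.Path("/media/share-1080p")

    for src in remux_lib.rglob("*.mkv"):
        dst = share_lib / src.relative_to(remux_lib)
        if dst.exists():
            continue                      # already converted
        dst.parent.mkdir(parents=True, exist_ok=True)
        subprocess.run([
            "ffmpeg", "-i", str(src),
            "-vf", "scale=-2:1080",       # downscale to 1080p
            "-c:v", "libx264", "-b:v", "8M",
            "-c:a", "copy",               # keep the original audio track
            str(dst),
        ], check=True)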


Sorry guys, but I think there is some confusion (possibly mine), so I wanted to clear it up for anyone coming in later…

DEcoding HEVC is the hard part. (ENcoding HEVC is even harder.)

Decoding/encoding x264 is what CPUs have been doing for years; that is the easy part.

The screenshot above (“expected performance: more than 16 simultaneous decode streams @ 1080p”) does not even qualify which codec (multiple codecs are indicated in the chart).

Always remember x265/HEVC is HARD.
x264 is not.


Tekno, this is exactly what we were saying. Decoding HEVC/x265 is easier than encoding HEVC/x265 by quite a fair margin; it’s also easier to decode x265 than it is to encode x264. If I had no hardware-supported decoding, I could still decode with my CPU faster than my GPU could encode it… When comparing decoding only, HEVC is harder to decode than 264, that is very true, and by quite a significant margin as well. However, when it comes to encoding, the task of encoding is always harder than the task of decoding, even when comparing across codecs.

I am sorry, but I have to take exception to this assertion…

If that were the case, that decoding x265 is easier than encoding x264, then we would not have the thousands of posts and complaints from people trying to decode x265 Blu-rays on CPU when they were previously able to decode and encode 264 on CPU without issues.

Case in point: all the posts where people have old dual-CPU servers with 16-20+ cores that start buffering as soon as a 4K/HEVC decode is attempted, but work fine with plain old 1080p/x264 transcoding (not direct play).

I mean, I have 3x Dell R610 blade servers that were virtualization servers originally, and they could not do a single 4K/HEVC decode on bare-metal CPU (no virtualization) with any consistency.

Decoding happens every time you watch a file. Load up an x265 video in VLC: VLC decodes it and displays the imagery on your monitor. That takes slightly more CPU than decoding x264, but not nearly as much as encoding x264.

x265 No Hardware Decode:

x265 Hardware Decode turned on:

So decoding only takes ~3-5% processor use, and when you encode with x264, even if you could somehow constrain it to only encode at 24fps, you’d still be using way more than 3-5% CPU. Hence, decoding HEVC takes less CPU than encoding 264.
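If anyone wants to reproduce that outside of a media player, here is a rough sketch (the file name is a placeholder) that uses ffmpeg’s null output so one run only decodes while the other decodes and encodes:

    import subprocess

    # Sketch: compare decode-only load against decode+encode load with ffmpeg.
    # -benchmark prints the CPU time used at the end of each run.
    src = "sample.2160p.hevc.mkv"   # placeholder file name

    # Decode only: the decoded frames are thrown away.
    subprocess.run(["ffmpeg", "-benchmark", "-i", src, "-f", "null", "-"])

    # Decode + encode: the same decode, plus a 1080p x264 encode.
    subprocess.run(["ffmpeg", "-benchmark", "-i", src,
                    "-vf", "scale=-2:1080", "-c:v", "libx264",
                    "-f", "null", "-"])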

We aren’t talking about VLC.

If you are running the current PMS beta, you can actually enable HW decode and CPU encode, and see for yourself how much load encoding to x264 creates.

Then you can disable HW decode, and see how much more CPU the HEVC decode takes.

Well, we’re talking about “decoding” specifically… Decoding itself happens either when playing a file or when transcoding a file, so playback actually shows you how much CPU decoding alone uses. Use any media player and you’ll get the same result; it doesn’t have to be VLC if you feel something else is better suited to the task. But honestly, decoding is decoding, and it occurs during playback as well as transcoding.

I also cannot find separate checkboxes for “Decode” and “Encode” in my Plex settings, so I can’t perform the test you recommended. This is what I get on the transcode screen:

You need the recent Plex Media Server beta.

Edit: you already have it; look closely at the checkboxes and their text.

So, if I read those right, you can’t get the data you want from it.

First Box Checked, Second Box Unchecked = CPU is Encoding? (This is doubtful; considering the percentage I got, it seems it’s actually just decoding):
[screenshot]

First Box Unchecked, Second Box Checked = CPU encoding AND decoding (This seems accurate):
[screenshot]

Both Boxes Unchecked = CPU is encoding AND Decoding (Again, this seems accurate):
[screenshot]

Both Boxes Checked = CPU is doing nothing, GPU is Encoding and Decoding (This also seems accurate):
[screenshot]

Pretty sure their checkboxes are not labeled properly, or we can’t actually do what we are trying to do. I stand by my statement: decoding takes next to no power at all, as evidenced by simply playing back the file in any given media player. Hell, even going x264 -> x265 and x265 -> x265 in Handbrake gives nearly identical frames per second while encoding.

x264 -> x265 1080p:
[screenshot]

Same File
x265 -> x265 1080p:
[screenshot]

Granted, this is using NVENC, but if there were a significant difference between decoding one or the other, you’d see an FPS drop. You don’t, because decoding the video in either scenario is relatively easy; encoding to either x264 or x265 is the difficult part. Decoding 265 is definitely easier than encoding 264, all day long. I can specifically point to VC-1 vs AVC as a decoding use case: whenever something is VC-1, I can’t seem to transcode at more than 150-200fps, but when it’s AVC I can transcode at 200-250. That’s directly because of the decoding difference between them, at least from everything I’ve examined.

If you have evidence to the contrary, I am happy to look at it, but decoding x265 is a relatively low CPU-intensity job. I just wish I could make it so you only see the decode on the CPU and the encode on the GPU; that would settle it once and for all. But the only way you can get the CPU to only decode is via playback with hardware decoding off.

You are using the GPU, not the CPU.

Yes, but if there were a significant difference in their decoding, it would show up in an FPS drop, as it does with VC-1. I can redo it without using the GPU, but the results will be the same: nearly no difference between the two.

x264 -> x265 CPU Only:
[screenshot]

x265 -> x265 CPU Only (same file):
[screenshot]

I think we are arguing about 2 different things.

I/we don’t care about FPS.

I/we care about how much CPU load it takes to decode or encode 265, which is orders of magnitude more than x264.

Hence Plex’s own CPU recommendation of a ~17000 PassMark score for 4K/HEVC > 1080p x264:


  • 4K HDR (50Mbps, 10-bit HEVC) file: 17000 PassMark score (being transcoded to 10Mbps 1080p)
  • 4K SDR (40Mbps, 8-bit HEVC) file: 12000 PassMark score (being transcoded to 10Mbps 1080p)
  • 1080p (10Mbps, H.264) file: 2000 PassMark score
  • 720p (4Mbps, H.264) file: 1500 PassMark score

It takes ~8x the CPU power to do HEVC > x264
vs
x264 > x264.
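Taking those published PassMark numbers at face value, the ratio works out to roughly:

    # Ratio implied by Plex's PassMark guidance quoted above.
    hevc_4k_hdr = 17000   # 4K HDR HEVC -> 1080p transcode
    h264_1080p = 2000     # 1080p H.264 transcode
    print(hevc_4k_hdr / h264_1080p)   # -> 8.5, i.e. roughly 8x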

Hence my statement that x265/HEVC is wayyyyyy harder to both decode and encode than x264.

Most people did not need a GPU to transcode before HEVC became popular.

As far as the difference in GPU load between HEVC and 264, I don’t know if that has been fully researched, other than the information at https://www.elpamsoft.com/?p=Plex-Hardware-Transcoding

I haven’t seen any detailed comparison/benchmark of NVIDIA vs Intel iGPU transcoding.

Edit: also, the bottleneck in both your comparisons above is the encoder.

Try your conversion like this and see if the FPS changes:

HEVC > x264
x264 > x264

In this case, the bottleneck will be the HEVC decode.
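A rough sketch of that exact comparison (file names are placeholders; both inputs should be the same content at the same resolution so that only the source codec differs):

    import subprocess

    # Keep the encoder fixed (software x264) and vary only the source codec,
    # so any difference in reported speed comes from the decode side.
    sources = {
        "HEVC source": "sample.1080p.hevc.mkv",   # placeholder file names
        "x264 source": "sample.1080p.h264.mkv",
    }

    for label, src in sources.items():
        print(f"--- {label} -> x264 ---")
        subprocess.run(["ffmpeg", "-benchmark", "-i", src,
                        "-c:v", "libx264", "-f", "null", "-"])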