Best processor and other questions

ECC RAM is way overrated these days for most environments (but not all). We aren’t talking about encrypted files or scientific files where a single bit matters a great deal. These are video files that can be replaced if one happens to go bad. You could easily use a normal PC for 10 years and never once benefit from ECC memory in a useful way (in our environments). Generally speaking, we write data to disk once and then read it back many times. If we get a memory error during a transcode, the user might briefly see an artifact or something similar. Memory is much better these days than it was 10 to 15 years ago, and the need for ECC memory has diminished, generally speaking.

With that said, I would gladly trade ECC memory for a GPU that can do H.265 if the decision were between equal grades of CPU (Xeon vs. i7).

Unless you have tons of VC-1 based files, which can’t be decoded using multiple threads, you shouldn’t worry much about single-core PassMark vs. total PassMark scores, since it won’t matter for Plex. If you don’t have lots of VC-1, just go for the higher PassMark, all other things being equal.

Heck, even my 1st-gen i7 running at 2.8 GHz will process VC-1 based files just fine. My i3 and i5 laptops handle VC-1 fine as well.

But with HW transcoding, even the VC-1 codec can be handled via QuickSync (GPU) on everything from Haswell to the current generation, so even this becomes a moot point once Plex releases HW transcoding into the main production branch.
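As a quick sanity check (just a sketch, nothing Plex-specific): if your ffmpeg build has QuickSync (QSV) support, you can list the QSV decoders it exposes; `vc1_qsv` showing up means the GPU decode path for VC-1 is available to that build. The snippet below simply shells out to `ffmpeg -decoders` and filters for `_qsv` entries.

```python
import subprocess

# List the Quick Sync (QSV) decoders exposed by the local ffmpeg build.
# Assumes ffmpeg is on PATH and was built with QSV support; if it wasn't,
# the filtered list will simply be empty.
out = subprocess.run(["ffmpeg", "-hide_banner", "-decoders"],
                     capture_output=True, text=True, check=True).stdout

qsv_decoders = [line.split()[1] for line in out.splitlines() if "_qsv" in line]
print("QSV decoders available:", ", ".join(qsv_decoders) or "none")
```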

Carlo

@cayars I disagree, and I assume we will just have to agree to disagree on this stuff, as per usual. But for the OP’s sake I will say this.

Running FreeNAS, or ZFS in general, without ECC can be done. There is nothing inherently preventing it. But it is inadvisable due to the issues that can arise from it. You have to have really, really good backup policies enforced for it to be viable as a solution, and you (cayars) said nothing about this, nor about the other disadvantages of not using ECC with FreeNAS/ZFS. Better to point the user to FreeNAS and its excellent forum, where all of this is discussed at length, instead of simplifying things in this manner. It can lead to very bad consequences.

edit: Certainly. I am at fault for making the difference in my PassMark example such a large number. A 5K difference could probably be swayed in the manner you describe (though if your collection is that streamlined, why transcode at all?). But say the difference was just… 2K, between a 5+ year old dual Xeon that hungers for your electricity socket and a new single 4-core Xeon CPU? Then I still stand by my statement (and so do many others). That nuance was lost; I was at fault and I am sorry for that.

No, we don’t disagree. That’s why I said “most environments (but not all)”. I almost specifically mentioned ZFS, but I figured anyone running it would know better and could make their own decision. You can certainly run ZFS without ECC RAM without suffering the horror stories told in the forums. Data destruction is way overblown by those who don’t understand the underlying code. I’ve tried to destroy data this way and it’s hard. Even without ECC RAM, ZFS is arguably still better than some other file systems.

A decent high-level read can be found here: http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/
It doesn’t cover every situation, but it will give many people peace of mind even when running ZFS without ECC RAM.

If you’re running a more mainstream/popular operating system such as iOS, Linux, Windows, the various NAS versions, or Docker with typical storage systems, it’s even less of an issue. FreeNAS isn’t much different. They say “If your system supports it and your budget allows for it, install ECC RAM”, but again, that doesn’t say it’s required. I could say the same for any server/computer in a mission-critical environment. I wouldn’t consider most Plex servers to be mission critical.

So while maybe we’ll have to agree to disagree (it’s OK), I’d stand by my original thought, which is to take the i7 without ECC memory but with a state-of-the-art GPU over an equivalent Xeon with ECC memory but no GPU, for strictly Plex use. But I personally wouldn’t be using ZFS or FreeNAS with it either, not because I couldn’t, but because I prefer other operating systems and storage systems over them.

It really comes down to personal opinion and objectives when selecting the hardware. If you know you want/need to run ZFS, for example, then you would generally choose the recommended hardware to be safe. If, on the other hand, you’re going to run Windows or similar, then you might have different objectives in your hardware selection.

@cayars said:
ECC RAM is way overrated these days for most environments (but not all). We aren’t talking about encrypted files or scientific files where a single bit matters a great deal. These are video files that can be replaced if one happens to go bad.

Well, personally I’d rather my NAS halt and alert me to the fact that there’s a memory issue versus just run with the corrupt bit, resulting in cascading corruption and leading to a trashed pool and me needing to rebuild my entire pool/NAS and re-rip all my media. That would be a multi-day (or longer) nightmare compared to getting the early, immediate warning about the bad RAM and setting up an RMA to get just that replaced, at which point I’m right back up and going right where I left off.

Time is money, and the price difference between ECC and non-ECC is not that great. It’s cheap insurance. I might go 10 years without needing the safety of my seatbelt or fire extinguisher but I invest in those too.

But with HW transcoding, even the VC-1 codec can be handled via QuickSync (GPU) on everything from Haswell to the current generation, so even this becomes a moot point once Plex releases HW transcoding into the main production branch.

For someone who puts so much emphasis on video quality with how much you talk about pre-transcoding versus on-the-fly transcoding, I’m surprised how much you trumpet hardware transcoding, which Plex themselves have admitted will always produce inferior video quality to the on-the-fly transcoding. I have no real faith in it and as a result have entirely dismissed it as an option, not least because I don’t see Plex ever properly supporting it within a FreeNAS jail and with an add-on GPU.

ECC memory is a reliability or risk mitigation decision obtained at a modest but definite cost and performance penalty.

On ECC-protected systems (typically servers), when the hardware detects a single-bit error it’s automatically corrected and system operation continues normally. The OS or BIOS may be able to report the corrected memory issue to permit orderly replacement of the defective memory at a later time. A multi-bit error can be detected but typically not corrected, and the effect is an instant system halt in the form of a kernel panic, BSOD, etc.

On non-ECC systems (your average home PC), when the hardware detects any memory error it’s straight to a system halt.

Non-ECC systems might not be as good at detecting all memory errors but in my experience undetected memory errors are isolated and rare. Neither arrangement tends to result in sustained operation with memory errors.
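To make the single-bit-correct behavior described above concrete, here’s a toy sketch (purely illustrative, not how any actual memory controller is implemented) of a Hamming(7,4) code, the textbook building block behind the SECDED scheme ECC DRAM uses: any one flipped bit in the 7-bit codeword can be located and flipped back.

```python
# Toy Hamming(7,4) sketch: encodes 4 data bits into a 7-bit codeword whose
# parity bits let the decoder locate and correct any single flipped bit.
# Real ECC DIMMs use a wider SECDED code (64 data + 8 check bits), but the
# principle is the same.

def encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p4, d2, d3, d4]."""
    p1 = d[0] ^ d[1] ^ d[3]   # covers codeword positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2, 3, 6, 7
    p4 = d[1] ^ d[2] ^ d[3]   # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p4, d[1], d[2], d[3]]

def decode(c):
    """c: 7-bit codeword -> (4 data bits, syndrome). Corrects one flipped bit."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4   # 0 = clean, else 1-based error position
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1          # flip the offending bit back
    return [c[2], c[4], c[5], c[6]], syndrome

data = [1, 0, 1, 1]
word = encode(data)
word[5] ^= 1                          # simulate a cosmic-ray bit flip
recovered, pos = decode(word)
print(recovered == data, "error was at position", pos)   # True, position 6
```

A second flipped bit would produce a misleading syndrome, which is why real SECDED adds an extra overall parity bit so double-bit errors are at least detected (and the system halted), as described above.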

@dduke2104 exactly. It goes even deeper than this. People think ECC automatically triggers alerts, keeps all forms of memory errors from happening, and guarantees your data will be safe. It doesn’t. It can help with very specific types of errors that frankly don’t happen much at all today compared to 10-15 years ago. Memory chips are of much better quality today than ever. CPUs use different refresh rates than they used to. CPUs nowadays directly control clock-speed switching, rather than the OS. Software in the CPU can use memory in a parity fashion to check checksums of what it’s addressing, using any type of memory, for critical things, etc. Memory is much, much more dense than it used to be when ECC memory was introduced, and the denser the memory packaging, the less effective ECC memory is. Technology has come a long way, and ECC memory for our media needs isn’t even on my personal radar of things to worry about, regardless of operating system or storage solution.

The paragraph below is taken from https://en.wikipedia.org/wiki/Soft_error:
Soft error rate (SER) is the rate at which a device or system encounters or is predicted to encounter soft errors. It is typically expressed as either the number of failures in time (FIT) or the mean time between failures (MTBF). The unit adopted for quantifying failures in time is called FIT, which is equivalent to one error per billion hours of device operation. MTBF is usually given in years of device operation; to put it into perspective, one FIT corresponds to an MTBF of approximately 1,000,000,000 / (24 × 365.25) ≈ 114,077 years.
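Just to spell out the arithmetic in that quote (nothing here beyond the unit conversion; the 5000 FIT figure is a made-up example):

```python
# Convert a soft-error rate in FIT (failures per 10^9 device-hours) to MTBF in years.
HOURS_PER_YEAR = 24 * 365.25

def fit_to_mtbf_years(fit):
    return 1e9 / fit / HOURS_PER_YEAR

print(fit_to_mtbf_years(1))      # ~114,077 years of MTBF for 1 FIT
print(fit_to_mtbf_years(5000))   # ~22.8 years for a hypothetical 5000 FIT module
```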

I myself could live with error rates orders of magnitude higher than this and not be bothered by it. To me this is so inconsequential it’s not even worth considering for Plex use. We aren’t running mission-critical systems that run a single operation for months on end using petabytes of memory. So going back to the original premise for Plex, of a Xeon with ECC memory and no GPU vs. an i7 with a GPU and no ECC memory, it’s a no-brainer to me. I could even argue that much of the “hard” processing on the i7 will take place in HW rather than on the CPU as on the Xeon, reducing risk. :slight_smile:

For those who still think ECC memory is so important for ZFS or similar systems, here is what Matt Ahrens, co-founder of the ZFS project at Sun Microsystems, has said:

_There’s nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error._
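For what it’s worth, on OpenZFS-on-Linux the flag from that quote is exposed as a module parameter; the sketch below shows checking and setting it via sysfs. Treat it as illustrative only: FreeNAS/FreeBSD exposes it differently via sysctl, and as Ahrens says the flag is unsupported.

```python
# Sketch: read and set the OpenZFS debug flags on a ZFS-on-Linux box.
# ZFS_DEBUG_MODIFY is bit 0x10; setting it makes ZFS checksum in-memory
# buffers and verify them before they are written out. Unsupported knob,
# use at your own risk; the path only exists where the zfs module is loaded.
ZFS_FLAGS = "/sys/module/zfs/parameters/zfs_flags"
ZFS_DEBUG_MODIFY = 0x10

with open(ZFS_FLAGS) as f:
    current = int(f.read().strip(), 0)   # value may be printed in decimal or hex

print(f"current zfs_flags: {current:#x}")

with open(ZFS_FLAGS, "w") as f:          # requires root
    f.write(str(current | ZFS_DEBUG_MODIFY))
```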

ECC memory has its place and purpose, but I don’t think it works the way many people think it does; it’s not some type of “magic memory fix”. I personally would worry more about the operating system, the software running on the computer, and especially the hard drives than the memory, as they are far more likely to be the cause of data loss than your memory.

If your CPU/system of choice supports ECC memory and you buy into the need for it, then by all means go for it, but if not, it’s probably nothing to worry about either.

@sremick said:
For someone who puts so much emphasis on video quality with how much you talk about pre-transcoding versus on-the-fly transcoding, I’m surprised how much you trumpet hardware transcoding, which Plex themselves have admitted will always produce inferior video quality to the on-the-fly transcoding. I have no real faith in it and as a result have entirely dismissed it as an option, not least because I don’t see Plex ever properly supporting it within a FreeNAS jail and with an add-on GPU.

Actually, that is not what has been said, or I would have jumped on it for a correction. It depends on how you are using HW transcoding.

This was a good observation, so let me explain to make it clear. I prefer to pre-process my files so they use a standard set of codecs that will direct play on as many devices as possible (assuming no bandwidth restriction is in place). To do this I prefer ffmpeg software encoding with fine-tuned optimizations that give me very high quality at the smallest file size (lowest bitrate needed). This transcode can take whatever time the conversion needs; it could be 2 to 5 times the runtime. To me it doesn’t matter, as I want the highest quality possible since this file will be archived and become part of my library.
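As a rough illustration of that kind of slow, quality-first pre-processing pass (Carlo doesn’t spell out his exact ffmpeg settings, so the preset/CRF values and filenames here are placeholder assumptions, not his recipe):

```python
import subprocess

# One-off archival re-encode: H.264 + AAC in MP4 so most clients can direct play.
# "-preset slower" trades encode time for compression efficiency; CRF 18 is a
# commonly used "visually transparent-ish" quality target. Tune to taste.
src, dst = "movie-original.mkv", "movie-directplay.mp4"   # hypothetical filenames
subprocess.run([
    "ffmpeg", "-i", src,
    "-c:v", "libx264", "-preset", "slower", "-crf", "18",
    "-c:a", "aac", "-b:a", "192k",
    "-movflags", "+faststart",          # put the moov atom up front for streaming
    dst,
], check=True)
```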

Now, if a person is bandwidth restricted and my system has to transcode, it can’t take 5 times the runtime to do the conversion. It obviously has to do it in a shorter time frame than the runtime. If my system has to handle 2 transcodes of this nature, that effectively means each transcode needs to finish in 1/2 the runtime. 4 transcodes would mean 1/4 of the runtime is available for each transcode, etc.
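Put as arithmetic (the same reasoning as above, with a made-up encode speed): if one transcode using the whole CPU runs at S× real time, you can sustain roughly floor(S) simultaneous real-time transcodes, since each concurrent stream effectively has to complete in 1/N of its runtime.

```python
import math

# If a single software transcode at full CPU runs at, say, 3.5x real time
# (a made-up number), each of N concurrent streams only gets ~1/N of the CPU,
# so real-time playback holds up while N <= 3.
single_stream_speed = 3.5                     # x real time, hypothetical
max_realtime_streams = math.floor(single_stream_speed)
print(max_realtime_streams)                   # -> 3
```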

So in order to do real-time transcoding you have no choice but to sacrifice quality and bitrate so the software can encode faster. The faster you need the transcoder to work, the more it has to give up. When this is done in software, the quality and file size are much worse for the real-time encode than for the pre-processed encode I personally do. I think this will make sense to most people.

Now enter HW encoding. It can get close to, but not quite as good as, the pre-processed files. On some films you may not even be able to tell the difference in visual quality, while on others you may. However, if you look at the bitrate needed (or file size) you will still see a difference, and the pre-processed file is still better for archiving. So the HW encode sits in the middle of the first two, but usually much closer to the first.
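For contrast with the software pass sketched earlier in this post, a hardware (QuickSync) encode of the same file would look something like this; again, the quality value is just an assumption, and it needs an ffmpeg build with QSV support:

```python
import subprocess

# Same re-encode, but using the Intel Quick Sync H.264 encoder instead of libx264.
# Runs far faster and barely touches the CPU; expect a somewhat larger file at
# comparable visual quality, per the discussion above.
src, dst = "movie-original.mkv", "movie-qsv.mp4"          # hypothetical filenames
subprocess.run([
    "ffmpeg", "-hwaccel", "qsv", "-i", src,
    "-c:v", "h264_qsv", "-global_quality", "23",
    "-c:a", "aac", "-b:a", "192k",
    dst,
], check=True)
```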

Now, if you must transcode in real time and have HW transcoding available that can handle multiple transcodes with much better quality than what you can do in software, it’s a no-brainer.

How good your HW encoding is compared to your real-time software encoding of course depends on the individual system: how many transcodes are currently going on, how much CPU you have available, and what generation of GPU you have (how new or old it is; newer = better).

But hopefully that’s enough info to show there are different ways to use transcoding, such as pre-processing vs. real-time, and why one may be better than the other depending on what you are trying to do.

Carlo

On the topic of HW transcoding, do you know if there’s a hard limit on the number of simultaneous transcodes for each CPU model? I’m using an E5-1650 v4 right now, but I’m debating switching to an i7-7700K for the HW transcoding, if it can support a high number of transcodes, since their PassMark scores are pretty close and the i7 has better single-core speed.

Also, if the HW transcoder runs out of resources, does it fall back to software for the remaining transcodes? I wasn’t sure if it was all or nothing in terms of HW transcoding.

Maybe someone in the know (like a dev) could comment, but to the best of my knowledge there is no hard limit, except for Nvidia consumer cards, which are limited to 2 encoding streams. In the present beta it does not do any fallback based on resource use or lack of codec support (i.e. the GPU doesn’t support HEVC). We’ve been told these functions are coming, but right now the focus is on testing the core HW transcoding to get that working perfectly before adding the bells and whistles.

How many streams you can do in HW or SW is always just a “general” approximation, since the original media’s resolution, bitrate, and codecs matter, as well as what format you are transcoding to.

I was busy these past few days and just saw all of the posts. I’ll take the time to read them and give you my feedback.

Thank you for taking the time and energy to help me with my decision.

Cheers to all