@dduke2104 exactly. It’s even more in depth than this. People think ECC automatically triggers alerts, prevents every kind of memory error, and guarantees your data will be safe. It doesn’t. It helps with very specific types of errors that frankly don’t happen much at all today compared to 10-15 years ago. Memory chips are of much better quality today than ever. CPUs use different refresh rates than they used to, and nowadays they directly control clock speed switching rather than leaving it to the OS. CPUs can also checksum critical data in memory in a parity-like fashion regardless of the memory type. Memory is also much, much denser than it was when ECC memory was introduced, and the denser the memory packaging, the less effective ECC is. Technology has come a long way, and ECC memory for our media needs isn’t even on my personal radar of things to worry about, regardless of operating system or storage solution.
The paragraph below is taken from https://en.wikipedia.org/wiki/Soft_error:
Soft error rate (SER) is the rate at which a device or system encounters or is predicted to encounter soft errors. It is typically expressed as either the number of failures in time (FIT) or mean time between failures (MTBF). One FIT is equivalent to one error per billion hours of device operation. MTBF is usually given in years of device operation; to put it into perspective, an MTBF of one year corresponds to approximately 1,000,000,000 / (24 × 365.25) = 114,077 FIT.
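To double-check the arithmetic in that quote, here is a quick sketch of the FIT/MTBF conversion (the function names are mine, not from the article):

```python
# 1 FIT = 1 failure per 10^9 device-hours.
HOURS_PER_YEAR = 24 * 365.25  # 8,766 hours

def mtbf_years_to_fit(mtbf_years: float) -> float:
    """Convert an MTBF given in years to a failure rate in FIT."""
    return 1e9 / (mtbf_years * HOURS_PER_YEAR)

def fit_to_mtbf_years(fit: float) -> float:
    """Convert a failure rate in FIT back to an MTBF in years."""
    return 1e9 / (fit * HOURS_PER_YEAR)

print(round(mtbf_years_to_fit(1)))  # → 114077, matching the quote
```

So a device with a one-year MTBF is failing at roughly 114,077 FIT; a rate of a few hundred FIT is many orders of magnitude rarer than that.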
I myself could live with error rates orders of magnitude higher than this and not be bothered. For Plex use this is so inconsequential it’s not even worth considering. We aren’t running mission-critical systems that run a single operation for months on end using petabytes of memory. So going back to the original premise, a Xeon with ECC memory and no GPU vs. an i7 with a GPU and no ECC memory, it’s a no brainer to me. I could even argue that much of the “hard” processing on the i7 will happen in GPU hardware rather than on the CPU as it would on the Xeon, reducing risk.
For those who still think ECC memory is essential for ZFS or similar filesystems, here is what Matt Ahrens, co-founder of the ZFS project at Sun Microsystems, has said:
_There’s nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error._
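For anyone curious, the zfs_flags tunable Ahrens mentions can be set like this on Linux OpenZFS, assuming your build exposes the module parameter and it is writable; keep in mind it’s an unsupported debug flag and carries a performance cost:

```shell
# ZFS_DEBUG_MODIFY (0x10) tells ZFS to checksum buffers while they sit in
# memory and verify them before they are written to disk.
echo 0x10 | sudo tee /sys/module/zfs/parameters/zfs_flags

# Or persist the setting across reboots via a modprobe option:
echo "options zfs zfs_flags=0x10" | sudo tee /etc/modprobe.d/zfs-debug.conf
```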
ECC memory has its place, but I don’t think it works the way many people assume, as some type of “magic memory fix”, because it’s not. I personally would worry more about the operating system, the software running on the computer, and especially the hard drives than about the memory, as they are far more likely to be the cause of data loss.
If your CPU/system of choice supports ECC memory and you buy into the need for it, then by all means go for it; if not, it’s probably nothing to worry about either.