Ubuntu 18.04 LTS
ASUS P6X58D Premium
24GB RAM
Xeon X5670 CPU
Trying to decide on the appropriate storage configuration for my Plex Server rebuild. I have an X58 motherboard with an LSI 9260-8i RAID card and 6x 8TB WD Red drives. My current configuration is a bunch of external USB drives dedicated to specific types of media (TV, Movies, 4K Movies, etc.), so when one drive fills up, shuffling of media is required to free up space and add more space where needed. I would like to use the 6 WD Red drives and the RAID card for one massive storage volume where all my media can live, but RAID5 (and even RAID6) is not a good idea with drives of this size.
Suggestions? What are you all doing for large storage volumes? I would rather not switch to a NAS device, as I prefer to keep Plex and its storage all in one physical box (fewer cables, connections, power supplies, etc.).
The logical next step is a RAID volume.
On Linux, mdadm (software RAID) works very well.
You create the basic RAID set (3 disks minimum for RAID 5, 4 for RAID 6) and start transferring content into it. As more drives become free, you add them as Physical Volumes to the Volume Group and extend the Logical Volume.
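A minimal sketch of that workflow, assuming you start with three disks at /dev/sdb through /dev/sdd and free up a fourth (/dev/sde) later. Device names, volume names, and the ext4 choice are all placeholders, not a prescription:

```shell
# Create a 3-disk RAID5 array with mdadm (placeholder device names).
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Put LVM on top so the volume can grow later.
pvcreate /dev/md0
vgcreate media_vg /dev/md0
lvcreate -l 100%FREE -n media_lv media_vg
mkfs.ext4 /dev/media_vg/media_lv

# Later, when another drive is freed up, grow the array and filesystem.
mdadm --add /dev/md0 /dev/sde
mdadm --grow /dev/md0 --raid-devices=4
pvresize /dev/md0
lvextend -l +100%FREE /dev/media_vg/media_lv
resize2fs /dev/media_vg/media_lv
```

The `--grow` reshape runs in the background and can take a long time on drives this size, so it's worth doing one disk at a time.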
When complete, you’ll have exactly what a NAS provides.
I have a hardware raid card for that. I have 6 drives, all 8TB. I’ve read that RAID5/6 on drives this large is a big no-no because of the length of time it takes to rebuild when issues arise, and the likelihood of encountering a URE while rebuilding.
As a Windows kiddie, and someone who knows pretty much nothing about RAID 5/6 or indeed Linux other than via scripts, what kind of read/write speeds can be expected on such a system?
Apologies @jcarver81 I won’t hijack further, just always been curious.
RAID5/6 is quick in terms of reads because the load is spread out over the number of drives you have. Write speed, however, is reduced because you have to write to all the drives at once AND calculate parity while doing so. For a media server that is read from more than written to, it's a fairly good system.
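A rough sense of the read scaling for the 6-drive setup discussed above. The ~180 MB/s per-drive figure is an assumed ballpark for a WD Red 8TB, not a measurement, and one drive's worth of capacity goes to parity:

```shell
# Rough sequential-read scaling for a 6-drive RAID5.
awk 'BEGIN {
  per_drive   = 180      # MB/s per drive -- an assumption, not a spec
  data_drives = 6 - 1    # one drive equivalent is consumed by parity
  printf "sequential read ~%d MB/s\n", per_drive * data_drives
}'
```

Small random writes are where the parity cost really bites: each one needs four I/Os (read old data, read old parity, write new data, write new parity), which is why a battery-backed write cache helps so much.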
Much appreciated. I have long pondered changing my setup. As I saw you mention USB, I got curious. Though as I upgrade to larger and therefore fewer drives, the need for USB has diminished.
Anyway, I will leave you in @ChuckPa's capable hands.
Hardware RAID is infinitely better IF (and this is the big one) you have a spare RAID card on premises and it's a currently supported card. The last thing you want is the card to fail with no way to access your data.
Otherwise, it’s blazing fast. Achilles has a HW raid in his NAS (home made) and it screams. IIRC, he can send 3 GB/sec over the net to the client.
Volume construction for him wasn't long at all (250TB). I will ask him next time I speak with him. As with anything like this, a UPS is required to avoid issues.
I have an LSI 9260-8i RAID controller that will do RAID5 or RAID6. It has a battery backup for the cache and the computer runs on a UPS.
6 drives @ 8TB each in a RAID5 array will give me 40TB usable space. I'm not worried about volume construction time. What I am worried about is volume REconstruction if and when a drive fails, I replace the bad drive, and the array has to rebuild parity. From what I've read, with drives over 4TB there is roughly a 30% chance of either another drive failure or an Unrecoverable Read Error (URE) during the reconstruction process, and all data on the array could possibly be lost.
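The usual back-of-envelope version of that worry: a RAID5 rebuild has to read every bit on the five surviving drives, and consumer spec sheets quote a URE rate around 1 per 1e14 bits. Taken at face value (real drives often do much better than the spec-sheet rate), the numbers look like this:

```shell
# URE back-of-envelope for rebuilding a 6x8TB RAID5 after one failure.
awk 'BEGIN {
  drives  = 6
  size_tb = 8
  bits    = (drives - 1) * size_tb * 1e12 * 8   # bits read during rebuild
  p_clean = exp(-bits * 1e-14)                  # Poisson approx, 1 URE/1e14 bits
  printf "bits read: %.1e  P(clean rebuild): %.2f\n", bits, p_clean
}'
```

At the spec-sheet rate the expected URE count during a rebuild is above 3, so the pessimistic math actually comes out worse than 30%; the caveat is that the 1e-14 figure is a worst-case spec, not a measured rate.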
I guess I can add 2 more drives and do RAID6 for a net usable storage pool of 48TB…
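The capacity arithmetic in both cases is simple: RAID5 gives up one drive's worth of space to parity, RAID6 gives up two:

```shell
# Usable capacity: RAID5 sacrifices one drive to parity, RAID6 two.
raid5=$(( (6 - 1) * 8 ))   # 6 drives of 8TB
raid6=$(( (8 - 2) * 8 ))   # 8 drives of 8TB
echo "RAID5 6x8TB: ${raid5}TB usable; RAID6 8x8TB: ${raid6}TB usable"
```

Note the rebuild-risk math barely improves with more 8TB drives, since the amount of data read during reconstruction only grows; RAID6's second parity drive is what buys the safety margin, not the extra capacity.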
Given your concerns, there are probably better options available to you.
SnapRAID is a parity solution that does not weld the disks together, so you cannot lose the entire array all at once during the recovery of an individual disk or disks.
Mergerfs can present an array of disks as one or more discrete mountpoints. There is no welding of disks here either; each and every disk remains a stand-alone disk.
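A sketch of how those two pieces typically fit together for a media box. All the mount paths here are hypothetical, and the drive-to-role assignment (one parity disk, the rest data) is just one common layout:

```shell
# Hypothetical SnapRAID config: one drive holds parity, the rest hold data.
cat > /etc/snapraid.conf <<'EOF'
parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
data d3 /mnt/disk3
EOF

# Compute/refresh parity on demand (e.g. nightly from cron).
snapraid sync

# Pool the data disks into one mountpoint with mergerfs; each disk
# stays an independent filesystem underneath, so Plex sees one library.
mergerfs /mnt/disk1:/mnt/disk2:/mnt/disk3 /mnt/media \
  -o allow_other,category.create=mfs
```

Because parity is computed on a schedule rather than on every write, a failed disk only loses changes since the last `snapraid sync`, and every surviving disk remains individually readable.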