Migrating to a RAID solution?

I currently have these drives…

  • WD BLUE 500 GB
  • WD RED 4 TB
  • WD SE 4 TB
  • WD RED 6 TB

Tomorrow I’ll grab another RED 6 TB… I confess I made a mistake from the beginning by buying different drives… I should have started with two WD RED 6 TB drives and only bought those from the start…

Currently I use standalone drives, no software for backups or mirrors… except I keep my music library both on the 4 TB SE drive and as a current copy of the main folder, “Music (Backup)”, on the 6 TB RED…

The “music” library is about 1.3 TB atm. But it’s expanding fast… I rip a lot of music and buy a lot… So… I expect the music library to reach at least a few TB soon… I aim for about 1 million tracks… That’s my first goal. That’s roughly 90 % FLAC… some MP3s and some high-res material such as 24-bit FLAC and maybe DSD… Today I have about 50,000 tracks… So a few TB of music from the beginning… But all the other space, those 13 TB, is “movies” and “TV shows”. Without any backups… I have at least 60-70 % of this content on Blu-ray.

SO… how could I migrate? I will NOT buy multiple drives in one go… Just ONE drive tomorrow… I’m a bit frustrated since I don’t know how to manage the migration… First, I don’t really know how to “move” this content to a RAID solution without formatting these drives… And I don’t know much about RAID… I don’t want to use hardware RAID… But I know there is A LOT of “RAID” stuff out there… Like RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAIDZ1, RAIDZ2 and OS/software solutions like unRAID and FlexRAID. I haven’t learned much about this stuff… What is the best “RAID” solution for a Plex server? What’s recommended? I’m mainly looking to improve stability, durability, security, performance and flexibility.

Any help would be appreciated… I might start out with a second 6 TB RED drive and sell the other drives in the future… I will NOT sign up for any cloud service… at least not in the long term. Maybe I can spend a few $ temporarily for the migration.

I think you should look into unRAID. You can use hard drives of different sizes and you can add drives one by one to expand. You will need to have a dedicated computer to run unRAID.

If you just want something to manage all your drives on a Windows computer you could use StableBit DrivePool.

@skubiszm said:
I think you should look into unRAID. You can use hard drives of different sizes and you can add drives one by one to expand. You will need to have a dedicated computer to run unRAID.

If you just want something to manage all your drives on a Windows computer you could use StableBit DrivePool.

I currently use a home built “server” with CentOS. I heard unRAID is really good, I might read a bit more about it. Thanks. Does it support different drives?

Save yourself all the trouble and just get a Synology NAS and be done with it.

@interconnect said:
Save yourself all the trouble and just get a Synology NAS and be done with it.

Nah, I don’t know… I’ve spent maybe $3-4k on this thing since 2013… I don’t really want to swap it out for a Synology NAS just for simplicity’s sake… It might be a good solution for some, but I won’t get all the benefits and power I’m looking for… No Synology NAS (at least on the cheaper end) can provide things like ECC RAM (maybe a future upgrade) or match the power of my current i7 4790K CPU. I studied computer and systems sciences for 5 years… And worked in the industry for some time before I went on sick leave for a few years. I’m not saying I’m an expert… But I can handle CentOS, I understand how to use the terminal etc. That’s for sure…

@SupreX said:

@interconnect said:
Save yourself all the trouble and just get a Synology NAS and be done with it.

Nah, I don’t know… I’ve spent maybe $3-4k on this thing since 2013… I don’t really want to swap it out for a Synology NAS just for simplicity’s sake… It might be a good solution for some, but I won’t get all the benefits and power I’m looking for… No Synology NAS (at least on the cheaper end) can provide things like ECC RAM (maybe a future upgrade) or match the power of my current i7 4790K CPU. I studied computer and systems sciences for 5 years… And worked in the industry for some time before I went on sick leave for a few years. I’m not saying I’m an expert… But I can handle CentOS, I understand how to use the terminal etc. That’s for sure…

Gotcha. You can run a separate server though. That’s what I do. I have a server that runs PMS and a NAS that just stores the media. Best of both worlds. NASs aren’t really designed to do transcoding and the like. Some of the newer models are attempting to, but they’re still very underpowered.

I currently use a home built “server” with CentOS. I heard unRAID is really good, I might read a bit more about it. Thanks. Does it support different drives?

unRAID will be very close to a Synology NAS, just using your own hardware. unRAID does support different sized drives. Synology requires all drives to be the same size. The only requirement for unRAID is that your parity drive must be greater than or equal to your largest drive. So all you will need to buy is an additional 6 TB drive and use that for parity.

Unless your drives are formatted as ReiserFS, XFS or Btrfs you will have to format them in unRAID. So you might have to do some shuffling around between drives when you get started.

You also might want to check out SnapRAID. You could just add that right into your existing CentOS system. It’s a little more hands-on, but it’s built specifically for media servers. I also use it on my personal server.

If you load a Windows OS you could also look at something like Drive Bender or StableBit DrivePool.

RAID is not something you want to get into without being 100% certain of how it is going to work for you.

If you go with something like Drive Bender or StableBit DrivePool you can truly grow as you need to, use varying sizes with no concern about negative impacts to system reliability, and you don’t have to worry about losing your data if something happens to a RAID array.

I use Drive Bender on my Windows Home Server. The drives I have are 2x3 TB Seagate XT drives, a 6 TB WD Red and a 10 TB Seagate IronWolf drive. I also have a 2 TB Seagate LP drive for backups of my WHS server environment. On my system I have 25 TB of space between the drives, and if I want protection I simply tell Drive Bender to manage duplication of a folder with important data. It will write the duplicates to other drives in the pool to ensure no single drive failure can cause a loss of that data.

The key advantages of Drive Bender/StableBit DrivePool are:

  1. No single failure can cause you to lose all your data. You can only lose what is on the drive that fails, and if it is duplicated in the pool that means nothing is lost.
  2. The hard drives are written to like any other NTFS volume, so even if your system suffered a major failure you could plug the drives into any system and retrieve your data.
  3. There are truly no limitations based on different drive models or sizes. Because the drives appear to the OS just like regularly attached drives, you can use any drive you want with the pool and it just works.

The only negative to these pooling techs is related to performance. In general you are limited to the performance level of a single drive. This is simply because of how they work. If a RAID environment is built right it can allow the system to use multiple drives at once, and that can increase performance. As far as Plex is concerned, though, that shouldn’t be an issue, because it is very unlikely you will have the bandwidth to need that kind of performance.

My concern here for you would be getting into something you are not 100% sure about. To do RAID well you really need to know what you are doing and get good hardware; otherwise you are setting yourself up for a major disappointment later.

You should also look at FreeNAS.

FreeNAS uses the ZFS file system. It is very flexible and expandable. The ZFS file system is what sets that system apart. It has a lot of benefits over pretty much anything else out there because of some of the advanced file system functions ZFS provides.

If I was going to start over and build everything from scratch I would probably go that route. It isn’t a super simple solution, but it has a lot of potential as you learn to use it.

To migrate to FreeNAS I would do the following:

  1. Load FreeNAS on a flash drive and have just the new 6 TB drive in your server.
  2. Configure the new 6 TB drive as a single pooled drive.
  3. Put the older drives in another computer and copy the data over to the new 6 TB drive on the FreeNAS system.
  4. Take the older 6 TB drive, install it in the FreeNAS server, and then update the pool to add it alongside the new drive in a raidz1 configuration.
  5. Let the system finish the migration and be done.

Is this simply because you want one large place for media files?

@skubiszm said:
unRAID will be very close to a Synology NAS, just using your own hardware. unRAID does support different sized drives. Synology requires all drives to be the same size.

That’s not entirely true. Though, on some series SHR needs to be activated within a configuration file.

So many helpful responses in quite a short time… Thank you. A LOT of things to answer here…

@interconnect said:

@SupreX said:

@interconnect said:
Save yourself all the trouble and just get a Synology NAS and be done with it.

Nah, I don’t know… I’ve spent maybe $3-4k on this thing since 2013… I don’t really want to swap it out for a Synology NAS just for simplicity’s sake… It might be a good solution for some, but I won’t get all the benefits and power I’m looking for… No Synology NAS (at least on the cheaper end) can provide things like ECC RAM (maybe a future upgrade) or match the power of my current i7 4790K CPU. I studied computer and systems sciences for 5 years… And worked in the industry for some time before I went on sick leave for a few years. I’m not saying I’m an expert… But I can handle CentOS, I understand how to use the terminal etc. That’s for sure…

Gotcha. You can run a separate server though. That’s what I do. I have a server that runs PMS and a NAS that just stores the media. Best of both worlds. NASs aren’t really designed to do transcoding and the like. Some of the newer models are attempting to, but they’re still very underpowered.

Yes… That’s actually my plan. Or somewhat… I currently use the Fractal Design R5 chassis… which means I have eight 3.5" drive bays and two 2.5" drive bays… I use one of those 2.5" bays for a 256 GB Samsung 850 Pro drive as a boot drive… Then I have those other drives!

Here it is…

Waiting for the next 6 TB drive… It will arrive shortly after this weekend.

I feel it would be stupid not to keep utilizing those bays… Then, if I’d like even more space, I might get a NAS which I connect to my server through my router (Asus RT-AC68U)… The Synology DS1815+ especially has caught my interest… I DON’T know when I’m up for actually investing in a NAS device… I’d have to invest in a SATA controller card, and then I might also exchange 3 of the current drives for WD RED 6 TB drives as well, so I can use a complete RAID in all bays. If that’s the case I would have 8x6 TB… Then in the future, next time I upgrade CPU and mobo, I can get ECC RAM, a Xeon (or similar from AMD) as well as an M.2 SSD. Then I’ll use the M.2 SSD as a boot drive and I’ll use the free 2.5" bays on the backside (not shown in the image), where I can add 2 other drives… Those could either be SSDs (maybe up to 2x4 TB) or 2x2 TB SATA drives… Then with something like the Synology I could have up to 8x8 TB or 8x10 TB… In total that’s approx 8x10 TB + 2x2 TB + 8x6 TB… That should be about 132 TB in total… Which is all I’ll ever need… Including a complete backup/mirror and a complete library of HD and 4K content, except for rare content which is only available at lower bitrates. The music library is 95 % lossless, including a lot of 24-bit FLAC…

Previously I was thinking I should build a separate “rack server” (different thread) but I felt it’s better to keep upgrading my current build and then just purchase such a neat NAS cabinet to keep on a bookshelf or something… 132 TB (a realistic example) is almost enough to host something similar to Spotify or Netflix at home… and that’s my plan. For personal use… :smiley:

This is a long-term project… Let’s say it takes at least 5 years before I’m done with this thing…

@skubiszm said:

I currently use a home built “server” with CentOS. I heard unRAID is really good, I might read a bit more about it. Thanks. Does it support different drives?

unRAID will be very close to a Synology NAS, just using your own hardware. unRAID does support different sized drives. Synology requires all drives to be the same size. The only requirement for unRAID is that your parity drive must be greater than or equal to your largest drive. So all you will need to buy is an additional 6 TB drive and use that for parity.

Unless your drives are formatted as ReiserFS, XFS or Btrfs you will have to format them in unRAID. So you might have to do some shuffling around between drives when you get started.

You also might want to check out SnapRAID. You could just add that right into your existing CentOS system. It’s a little more hands-on, but it’s built specifically for media servers. I also use it on my personal server.

SnapRAID

Thanks for the tip… The optimal solution would be to not swap OS… I spent several hours customizing CentOS… ofc I could clone it… and use the same stuff in unRAID. But that could be a mess… And I don’t like the idea of using a USB stick…

My drives are formatted in NTFS except one that is formatted in FAT32… The most interesting solution right now would be something like unRAID that doesn’t require a USB stick or a separate OS. Otherwise I think I might go for ZFS, which I think might be the best option… I’ve now been reading a bit about RAIDZ2 and RAIDZ3, which seem really good… Is it possible to implement those on CentOS? My main problem here is that I need 5-8 drives to even make it possible… My thought has been to use my drives as “standalone” drives until I own 5 or 8 WD 6 TB drives, and then I’ll “borrow” other drives, or simply rent storage in the cloud for a day… Then I’ll transfer ALL the data there, format all my drives and set up RAIDZ2 or RAIDZ3 on CentOS 7… Then simply move all the data back… Do you think that would work out?

@mavrrick said:
You should also look at FreeNAS.

FreeNAS uses the ZFS file system. It is very flexible and expandable. The ZFS file system is what sets that system apart. It has a lot of benefits over pretty much anything else out there because of some of the advanced file system functions ZFS provides.

If I was going to start over and build everything from scratch I would probably go that route. It isn’t a super simple solution, but it has a lot of potential as you learn to use it.

To migrate to FreeNAS I would do the following:

  1. Load FreeNAS on a flash drive and have just the new 6 TB drive in your server.
  2. Configure the new 6 TB drive as a single pooled drive.
  3. Put the older drives in another computer and copy the data over to the new 6 TB drive on the FreeNAS system.
  4. Take the older 6 TB drive, install it in the FreeNAS server, and then update the pool to add it alongside the new drive in a raidz1 configuration.
  5. Let the system finish the migration and be done.

FreeNAS isn’t the best option for me as I use this machine as my only desktop as well… for browsing the web, reading my personal mail etc. I currently don’t have a laptop, just an HTC smartphone and this build. I don’t want to convert this thing to a NAS device; I want to use it mainly as a Plex server and as an ordinary desktop for browsing. And I feel pretty comfortable with CentOS now… I use Windows 10 in a VM for those things I need, such as ripping movies or music, since I prefer dBpoweramp, EAC and DVDFab over the Linux options. I’ve started listening to my music directly from the server in the VM using foobar2000 and the ASIO plugin… Then I use the Sennheiser HD 800 and the Essence STU DAC by Asus… which I connect through USB (to the server) and Toslink to the TV. So… I think I’ll skip unRAID… But I’ll also have a look at SnapRAID before I decide on ZFS.

Parts of my previous answer to @skubiszm were also related to you… It’s somewhat the same thing as you propose, but I simply want to keep CentOS 7 as the OS. And I think I read somewhere that RAIDZ2 is recommended for at least 5 drives, i.e. 3+2 drives, which allows 2 drives to fail… and RAIDZ3 for at least 8 drives. As I remember, I read that either on the FreeNAS forums OR somewhere else… maybe at Sun, if it was their official recommendation…

Why should I run RAIDZ1? I think it’s good to be able to survive 2 drives failing within a short amount of time… I don’t know if RAIDZ3 might be overkill… at least for only 8 drives, even if that’s the minimum requirement.

HOW do I implement RAIDZ (1, 2 or 3?) on CentOS 7? Are there any guides for this? Maybe I can just Google, but do you recommend any specific one?

@mavrrick said:
Is this simply because you want one large place for media files?

Both yes and no… Sure, it’s a nice thing to have all the files in one big place… I have to admit I like that idea… But there are several other benefits of using RAID… such as redundancy and performance… It might not be a substitute for a backup, but it is still a better option than no backup and no RAID. Having “132 TB” of data (as I previously mentioned), such as 4K content etc., I guess it might even make library scans faster.

Synology NASes have their share of problems and grief as well. One of the employees is struggling with a rebuild right now on his Synology, as he hit one of the “gotchas” when expanding.

Be careful with traditional RAID and for god’s sake don’t even consider RAID 5. Use RAID 6 at a minimum. If you’re planning to go (real) RAID then prepare to purchase like drives with the same firmware versions, meant to be used with RAID, such as WD Red drives.

The problem with many NAS and RAID solutions is that if you lose a drive you start to get into trouble. By that I mean if you are using RAID 5 you only have one parity drive, so you must replace it with a like drive immediately and rebuild. If you lose another drive during this process you have lost everything. With RAID 6 you have two drives of parity, so you still have a spare, but if you were to lose another drive during this process you are going to start worrying quickly. What people don’t realize is that without the parity, any errors that pop up on disks will manifest themselves on rebuilds and will destroy your data, since you will get parity miscalculations. You won’t know you have bad media until it’s too late. If you must use RAID then go with at least RAID 6 of like drives.

I personally wouldn’t ever build a RAID system with piecemeal drives, but would order them all up front from the same batch, including a couple of spare drives for hot swap. This isn’t a requirement but it helps to avoid problems.

My vote of course would be to go with Windows and install DrivePool along with SnapRAID. SnapRAID isn’t RAID (bad name for it) but creates parity disks for your main disks. You can have from 1 to 8 parity disks and can change this number at any time. If you have 1 parity disk then you could safely lose 1 disk among your main drives and be able to restore. If you have 8 parity disks then you could lose up to 8 disks among your main drives and still be able to restore. So it’s much more powerful.
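To give a feel for what SnapRAID setup looks like, here is a rough sketch of a config file. The mount points and drive names below are made-up examples, not a recipe for any particular box (the config is written to the current directory here; on a real system it lives at `/etc/snapraid.conf`):

```shell
# Hypothetical SnapRAID layout -- the /mnt paths are examples only.
cat > snapraid.conf <<'EOF'
# One parity file per parity level; each parity drive must be at
# least as large as the largest data drive.
parity /mnt/parity1/snapraid.parity

# Content files record the array state; keep copies on several drives.
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content

# Data drives to protect.
data d1 /mnt/disk1
data d2 /mnt/disk2
EOF

grep -c '^data' snapraid.conf   # number of data drives configured

# Typical usage afterwards (run as root on the real system):
#   snapraid sync     -- compute/update parity after adding media
#   snapraid scrub    -- periodically verify data against parity
#   snapraid fix -d d1   -- rebuild the contents of a failed drive d1
```

Since parity is only updated when you run `sync`, this model suits media libraries where files are added but rarely changed.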

DrivePool simply “connects” a bunch of disks and makes them look like one big drive. Put in four 6 TB drives and you have a 24 TB drive pool. You can still access and use the individual drives or the drive pool drive letter. You can pull a disk out of the pool and move it to another computer and still read the contents, since it’s just a normal drive.

DrivePool doesn’t care what brand of disk or what size. It just combines them all. They can be internal drives, or external drives connected via USB or eSATA. As long as they are connected and mounted on that computer they can be combined into a big drive pool. Let’s say you had 10 drives in your pool and 1 drive crashed and you didn’t have SnapRAID parity or a backup of any kind. Worst case is you lost only the data on the crashed drive. The other 9 drives still work fine and are part of your pool.

The DrivePool/SnapRAID combination isn’t a good fit in a business environment with fast-changing files, but for a video environment where the data doesn’t change and we just keep adding to the libraries it’s a perfect fit.

BTW. DrivePool will only combine “local” drives attached to the computer it is running on. However SnapRAID can create parity on drives attached to your network. I have two smaller NAS units that I use for this purpose plus another “file server” I use for parity. So that gives me over a dozen drives in my pool and 3 parity “drives”.

This to me is a super flexible system that will withstand crashes and loss of media, and even without parity via SnapRAID it would allow me to lose only a small part of my library should I have a drive crash.

I run SnapRAID, DrivePool and Plex on the same old 1st gen i7 computer with no problems at all. I’ve been playing with a 2nd DrivePool installation on another computer that I can then mount on my main Plex server as any other network drive.

SnapRAID can still be used to create parity across the “whole pool” since it doesn’t care where any of the drives exist. Of course creating parity for drives that are network drives is slower but it works just fine. It keeps up with the rate I add files to my library.

Carlo

A few things.

Running a RAID array without a good hardware controller, or ZFS with plenty of memory and other hardware optimizations, is not going to improve performance much, if at all. It will simply provide redundancy so that some number of drive failures won’t kill access to your data. That number is based on the RAID tech used. With the size of the drives you are talking about, you want at least double parity, and the more you can get the better. One of the hardest things a RAID array will ever do to hard drives is rebuild the array. ZFS does a slightly better job because it only rewrites where data was, but a rebuild is still the moment a failure is most likely to happen, if it is going to happen.

I would also suggest you steer away from RAID if this is your main computer and not dedicated to NAS-like functions. Powering drives on and off can be a risky thing and will cause a higher likelihood of failures as well. I cross my fingers each time I do a restart.

RAID is great for enterprise because in that environment you often build the machine with planned replacement in mind. I mean, you know it is going to be replaced in 3 years or so, so you spec out your configuration with RAID to last that amount of time and build it out completely up front. This almost never applies to the home environment.

You can likely use ZFS with CentOS, as I believe it can be added to the distro once installed. I did that at one point when testing Ubuntu. You may want to look into that.
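For reference, adding ZFS on Linux to CentOS 7 and creating a double-parity pool is a short sequence of commands. This is only a sketch: the release RPM and the `/dev/sd*` device names are illustrative, so check the current ZFS on Linux documentation for the package matching your kernel before running anything.

```shell
# Sketch: ZFS on Linux on CentOS 7 (repo RPM version and device
# names are placeholders -- verify against the zfsonlinux.org docs).
sudo yum install -y http://download.zfsonlinux.org/epel/zfs-release.el7_4.noarch.rpm
sudo yum install -y kernel-devel zfs
sudo modprobe zfs

# RAIDZ2 pool from six drives: capacity of four, any two may fail.
sudo zpool create tank raidz2 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# A dataset for media, mounted at /tank/media by default.
sudo zfs create tank/media
sudo zpool status tank   # verify pool health
```

Note that `zpool create` destroys whatever is on the listed drives, which is why the data has to be parked elsewhere before building the pool.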

FreeNAS does have VM support built in. More specifically it uses jails to enable VM-like functionality, so you could potentially load Windows on it as well. I am not sure how it would work with your USB attachments though.

Keep in mind that when you say 132 TB, much of that could be lost to parity/redundancy. For example, if you run RAID 6 with 4 TB drives you only get 20 TB of space. In your config above you mention 8x10 TB, 2x2 TB, and 8x6 TB. Assuming you built it out completely with parity drives running RAID 6, that would be 60 TB + 2 TB + 36 TB, so closer to 98 TB. This also assumes you use the least amount of parity drives. If you mirrored it, which would provide much better data integrity, it would be 66 TB. And these will be in different arrays, so if you want them to show as one space you will still need some method of pooling them together.
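The arithmetic above can be checked quickly with shell arithmetic, using the drive counts from the post (RAID 6 loses two drives per array to parity; a mirror loses half):

```shell
# Usable capacity per array, in TB.
raid6_10=$(( (8 - 2) * 10 ))   # 8x10TB in RAID 6 -> 60 TB
raid6_6=$(( (8 - 2) * 6 ))     # 8x6TB in RAID 6  -> 36 TB
mirror_2=$(( (2 / 2) * 2 ))    # 2x2TB mirrored   -> 2 TB
echo "Total usable: $(( raid6_10 + mirror_2 + raid6_6 )) TB"   # 98 TB
```

So out of the raw 132 TB, roughly a quarter goes to redundancy even in the least conservative layout.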

Though RAID can work with drives of different sizes, there are issues with it (it isn’t recommended by anybody involved with hardware RAID solutions). Most common is the fact that the RAID capacity is figured based on the smallest drive size. For example, if you try to RAID a 2 TB drive, two 3 TB drives and an 8 TB drive, you would end up with only 4 TB of space, because the array would be set up as 2 TB x 4 drives minus (2 TB x 2 parity drives).

Since by your own admission you are familiar with VMs, I would suggest you build a few VMs with some of the different options mentioned and test them on your rig. I have done this myself when reviewing the technology.

StableBit makes DrivePool and Division M makes Drive Bender. Both have been around for a long while and are pretty well proven on the Windows platform. Personally I would just use one of them with your setup and set the folders you are concerned about to be duplicated. That is what I have done for anything that I care about and can’t reproduce fairly easily.

Happy Easter (if that’s the case) and thank you very much @mavrrick and @cayars for your valuable input. I have to say… I might reconsider and skip the RAID solution… You’re right… It might not be worth it. I might go for a drive pool with a backup solution instead… That might be a simpler solution… However, I won’t go back to Windows again… except as a VM or for different devices such as a laptop.

I’m going to look for drive-pooling software for CentOS 7 that is mature/stable, easy to use, accepts different drives and, in the best case, doesn’t require formatting the drives… if such software exists.

EDIT: Looks like LVM is what I’m looking for…?
EDIT 2: LVM seems to be pretty cool… I’m reading about “hybrid volumes”, such as adding an SSD that acts as a caching device… Just an example… If I had a 100 TB LVM volume with WD RED drives, and then used something like a Samsung 950 Pro NVMe M.2 SSD for caching? Could this improve the performance of Plex much? I.e. scanning the library?
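EDIT 3: From what I’m reading in the lvmcache docs, the setup would look roughly like this. All the device names and the cache size here are placeholders I made up, not my actual drives:

```shell
# Rough lvmcache sketch -- /dev/sdb, /dev/sdc and /dev/nvme0n1 are
# placeholders for the HDDs and the NVMe SSD.
pvcreate /dev/sdb /dev/sdc /dev/nvme0n1
vgcreate media_vg /dev/sdb /dev/sdc /dev/nvme0n1

# Big logical volume spanning only the HDDs:
lvcreate -n media -l 100%PVS media_vg /dev/sdb /dev/sdc

# Cache pool on the SSD, then attach it to the media LV:
lvcreate --type cache-pool -n media_cache -L 400G media_vg /dev/nvme0n1
lvconvert --type cache --cachepool media_vg/media_cache media_vg/media
```

No idea yet whether a cache like this actually helps library scans, so that’s part of my question above.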

SSD caching normally doesn’t work that way. Generally SSD caching works by keeping frequently used data on the SSD and offloading less frequently accessed data to the spinning disks. That would keep the stuff Plex looks at off the slow drives anyway and nullify the benefit.

With 100 TB of storage, the best way to improve performance would be a good RAID setup with a high-end hardware RAID controller card, or a ZFS RAID setup with plenty of RAM.

Love RAID; unRAID isn’t RAID though. Cayars is spot on that you want RAID 6/RAIDZ2 (RAID 5 is dead, don’t use it). Love FreeNAS (use it myself)… keeping my input brief as I’m late to the party. Good luck and keep asking questions.

A question that just appeared. I made a fast Google search and found this ZFS guide;

https://linoxide.com/tools/guide-install-use-zfs-centos-7

Should I install ZFS on the data drives and not on the boot drive? Is that the proper way to do it?

@sremick said:
Love RAID; unRAID isn’t RAID though. Cayars is spot on that you want RAID 6/RAIDZ2 (RAID 5 is dead, don’t use it). Love FreeNAS (use it myself)… keeping my input brief as I’m late to the party. Good luck and keep asking questions.

I don’t want to learn FreeNAS, at least atm. I’ve been considering FreeNAS several times over the last years but I think I’ll pass. I don’t want to spend money on a NAS solution, and I want to stick to CentOS 7 as my server.

@mavrrick said:
SSD caching normally doesn’t work that way. Generally SSD caching works by keeping frequently used data on the SSD and offloading less frequently accessed data to the spinning disks. That would keep the stuff Plex looks at off the slow drives anyway and nullify the benefit.

With 100 TB of storage, the best way to improve performance would be a good RAID setup with a high-end hardware RAID controller card, or a ZFS RAID setup with plenty of RAM.

PLENTY of RAM is how much? For a start I might go for 80 TB instead of 100 TB+ since I have only 8 bays in my Fractal Design chassis. Maybe I’ll build or buy a NAS as the next step afterwards. Two weeks ago I sold all of my drives except the 4 TB RED music drive. I’m trying to make a list of my previous movies and TV shows so I can get them back over time.

I think a great new setup could look like:

  • Cheaper Xeon (~350 $)
  • Supermicro S2066 ATX motherboard
  • Samsung 960 PRO 512 GB NVMe SSD
  • At least 32 GB ECC RAM. Do I need more?
  • 10 TB Western Digital GOLD drives.

The 10 TB GOLD drives with 256 MB cache are only $23 more than the RED atm. Isn’t that a no-brainer?