Best hard drives

@rjparker1 said:

@wesman said:
Just an FYI, 99.9% of all hard drives on the market today are MORE than fast enough, typically running at 3+ Gbps, but you are almost ALWAYS going to be limited by your 1 Gbps network (typically ~110 MBps).

Further, a 30GB Blu-ray rip only needs ~4 to 5 MBps to Direct Play… So speed (for Plex) is not an issue… If Plex is the intended use, then I would go with something RAIDed 0/1/5/10 for redundancy. Your biggest concern is protecting the data from drive failures, not how fast the drive is.

WRONG. WRONG, WRONG. The network is NOT the weakest link… hard drives are… PERIOD. That is incorrect info. The network will NEVER be the bottleneck… apps, drives, and the OS will ALL interfere LONG before you run out of network bandwidth, I GUARANTEE you that.

You really should educate yourself on TCP/IP, the protocols, the real-world speeds, etc. before saying such foolish stuff. With a single 1GB network card I can saturate it all day long with my Plex server. I was doing this so often that I upgraded my networking a few years back.

And 1GB is 112 MB/s (big ‘B’) of bandwidth… a Blu-ray needs about ~16 Mbps (little ‘b’); 4 to 5 Mbps is more typical of standard DVD movies.

You could stream roughly 80 Blu-ray videos simultaneously over the SAME gigabit network. Plex, the OS, and the drive could not withstand that much activity… but the network surely can…

Let’s take Juno as an example because the math is easy: 40 Mbps / 8 = 5 MB/s. Assuming a perfect network world with absolutely no hiccups, no overhead, perfect drivers, etc., you would have 125 MB/s of bandwidth. So purely in theory you could stream 25 of them. In the real world it’s closer to 20 if you’re lucky.
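For anyone who wants to sanity-check that arithmetic, here is a quick Python sketch; the 40 Mbps bitrate and the ~80% real-world efficiency are just the ballpark assumptions used above, not measurements.

```python
# Back-of-the-envelope check of the stream math above.
# Assumptions: a 40 Mbps rip on a 1 Gbps link, ~80% real-world efficiency.

LINK_MBPS = 1000        # gigabit Ethernet, in megabits per second
BITRATE_MBPS = 40       # assumed bitrate of the rip
EFFICIENCY = 0.8        # rough allowance for overhead, drivers, hiccups

stream_mb_s = BITRATE_MBPS / 8          # 5 MB/s per stream
link_mb_s = LINK_MBPS / 8               # 125 MB/s theoretical
theoretical = link_mb_s / stream_mb_s   # ~25 streams
realistic = theoretical * EFFICIENCY    # ~20 streams

print(f"Per stream:   {stream_mb_s:.1f} MB/s")
print(f"Link (ideal): {link_mb_s:.1f} MB/s")
print(f"Streams, theoretical: {theoretical:.0f}")
print(f"Streams, real world:  {realistic:.0f}")
```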

That’s conventional 1080p. How about UHD and similar? Do your movies and TV shows magically appear on your server, or do you have to upload them to it? Do you run FTP or anything else on the server?

Can you watch a couple of Blu-ray videos and upload new media to your server at the same time?

As your Plex server grows in function, it becomes very easy to saturate the network link while only using a portion of the IO of your drives (spinning or solid state).

Of course I’m playing devil’s advocate here and really pushing the limits, but for many systems the NIC will be the first bottleneck that needs fixing to increase your “pipe” size.

BTW, caching is the key to good disk IO performance. I can still saturate my 4GB network before I can saturate the disk IO.

@cayars said:


You really should educate yourself on TCP/IP, the protocols, the real-world speeds, etc. before saying such foolish stuff. With a single 1GB network card I can saturate it all day long with my Plex server. I was doing this so often that I upgraded my networking a few years back.

OK, I only do this for a living, so I am educated, thanks. Your assessment is STILL wrong. I don’t care what you THINK you see, the network (even a single 1GB link) is NOT getting saturated… the devices have overhead, they need response time, so the network is not being accessed continuously at 100%, so you are STILL wrong. If you upgraded your “network”, all you probably did was swap the router, and that’s hardly the “network” we are talking about… again, the DEVICE is the slowdown.

Your problem is the devices themselves: the drives can’t keep up. Running multiple workloads on a SINGLE disk can’t maintain the IO and disk access rate needed to respond to requests, and that has NOTHING to do with the network… kid.

So YOU really should educate yourself on electronics… the network is merely transferring payload from one point to another; the other stuff in between is what is causing YOUR problem, NOT the network… maybe you could READ what I wrote first instead of jumping to conclusions, huh?

@cayars said:

BTW, caching is the key to good disk IO performance. I can still saturate my 4GB network before I can saturate the disk IO.

Yeah, OK, whatever. You are telling me that a drive can handle 400 MB/s, which is over 40,000 IOPS… and your “network” is saturated… before the drive?

LOL, did you fall and hit your head? Because you are INSANE. The only way you can even THINK about achieving that performance is by streaming hundreds of movies… from your supposed “4GB” network, which I seriously doubt you have, because NICs are only 1G and the next step is 10G, so I KNOW you are lying; there is no 4GB NIC. You expect me to believe you have a Cisco 2950 switch in your house and you used EtherChannel to get a 4GB connection to your computer with 4 NICs as well? Also, you have a 24-disk RAID to get that performance in your computer? Even an SSD can get close, but you CAN’T get this from a SINGLE PMS server… And if you have 10GB in your computer, why are you only getting 4GB? You should be at 10Gb… something doesn’t add up. Oh, I think it’s because you have NO clue what you are talking about, boy.

Show me the money. PROVE any of this is true: show me an ipconfig /all, show me your Task Manager with network stats showing speed AND usage… and show me the PMS that is supposedly able to handle all of this traffic on a single machine… and prove you have a 4GB network… I will hold my breath… NOT.

@wesman said:
I didn’t say it was the weakest; rather, I pointed out that almost any drive would work and you should go for reliability over speed.

@rjparker1 said:

@wesman said:
Just an FYI, 99.9% of all hard drives on the market today are MORE than fast enough, typically running at 3+ Gbps, but you are almost ALWAYS going to be limited by your 1 Gbps network (typically ~110 MBps).

See the big “B”?

(30 gigabytes) / (120 minutes) ≈ 4.17 MBps. It’s just math… you can put it right into Google…

7200 rpm drives have a typical transfer rate of ~100 MB/s, and 5400 rpm drives are on average about 30% slower. That’s STILL way more than you would need for a movie at ~4 to 5 MBps.
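Here is that same math as a tiny Python sketch so you can plug in your own rip size, runtime, and drive speeds; the numbers below are just the ballpark figures from this thread.

```python
# Rough check: the rate needed to play a rip vs. typical drive throughput.
# All figures are ballpark assumptions, not measurements.

size_gb = 30            # size of the rip
runtime_min = 120       # runtime of the movie
hdd_7200_mb_s = 100     # typical 7200 rpm sequential throughput
hdd_5400_mb_s = hdd_7200_mb_s * 0.7   # roughly 30% slower

needed_mb_s = (size_gb * 1000) / (runtime_min * 60)

print(f"Needed to Direct Play: {needed_mb_s:.2f} MB/s")
print(f"7200 rpm drive:        ~{hdd_7200_mb_s:.0f} MB/s")
print(f"5400 rpm drive:        ~{hdd_5400_mb_s:.0f} MB/s")
```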

RAID isn’t hard. You can buy storage devices that already have it built in.

Do you know what queues are? A SCSI drive has a queue, and so does an SSD; SATA drives do not. Which means that even though they have a theoretical 100 MB/s sustained stream, it’s for a SINGLE stream, not a queue, so with an SSD I can do multiple streams at once. Try to do more than one thing at a time with SATA and you will instantly know what I mean… SATA is NOT a performance drive, and it certainly can’t come close to an SSD at any rate… I never said RAID was hard, I said it was unnecessary… and it still is.

SSD solves the problem nicely even when keeping the SATA drives for media.

@mavrrick said:
Really, what you two are going back and forth about is how random I/O impacts disk performance. Streaming one continuous stream is easy, and so is streaming 10 streams from 10 different drives. Streaming 5 from the same disk becomes much harder, as the drive has to move the disk head all over the place to pull data from each video file. RAID provides some improvement, but it doesn’t fully escape the impact unless you really go overboard on the card you choose. Simply splitting your files between multiple drives can potentially have the same or better effect.

It also doesn’t matter how fast your media drive is if your OS drive is overloaded. This is why many choose to go the route of an SSD for their boot drive.
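If you want to see the random-I/O effect described above on your own hardware, a rough sketch like this reads several large files from the same drive at once and reports aggregate throughput. The file paths are placeholders, and you’ll want files bigger than your RAM so the OS page cache doesn’t flatter the numbers.

```python
# Rough sketch: read several large files from the SAME drive at once and
# report aggregate throughput. Paths are placeholders. Requires Python 3.8+.
import threading
import time

FILES = [
    "/mnt/media/movie1.mkv",   # hypothetical paths on one physical drive
    "/mnt/media/movie2.mkv",
    "/mnt/media/movie3.mkv",
]
CHUNK = 1024 * 1024  # read in 1 MiB chunks

def read_all(path, totals, idx):
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
    totals[idx] = total

totals = [0] * len(FILES)
threads = [threading.Thread(target=read_all, args=(p, totals, i))
           for i, p in enumerate(FILES)]

start = time.monotonic()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

print(f"Read {sum(totals) / 1e9:.2f} GB in {elapsed:.1f} s "
      f"= {sum(totals) / elapsed / 1e6:.1f} MB/s aggregate")
```

Run it against one file, then against several sitting on the same drive, and compare the aggregate rate; then spread the same files across different drives and compare again.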

Yeah that’s all I was trying to say, thank you!

@drinehart said:

Uh, do you understand networks at all? Most networks will run out of speed quicker than a single modern hard drive. Good desktop hard drives are regularly hitting near or above 100 MBps. A gigabit network is 125 MBps maximum. Once you have a couple of drives in a RAID 0/5/6/10, your storage performance is certainly limited by your NETWORK, not your hard drives.

UH, no they WON’T! Show me the stats of YOUR network while streaming… post pics… so I can point out what you seem to be missing… the NETWORK is the ability to transfer data from one point to another… you are NOT sustaining network speeds, you are reaching the limits of mechanical objects like the DRIVE, and that’s not the network…

You do realize that those figures for drives are INTERNAL performance, right? That’s not disk to disk… it’s only simulated, not realistic.

And you are DEAD wrong… I do virtualization, do you know what that is? You can host several machines on a single machine.

The DRIVE IO is WAY higher than any network throughput, and it would take several machines, like dozens, to even ATTEMPT to saturate the network… Corporate networks are still largely 1G… so explain that! Show me the PICS, your screen prints; show me at the SAME time how your DRIVES are using up 125 MB/s of performance… because a DRIVE or RAID can’t maintain that performance before the network runs out; it’s not even possible because of APP overhead and drive delays… anyway, PROVE it!

I want to see Task Manager from the SAME machine showing that your drives are close to MAX while the network utilization is 100%… good luck, it’s impossible, but if you think you can defeat physics, knock yourself out. A drive can’t have 100% pure data utilization; there is SOME overhead, and that means the network will not be saturated because of simple limitations in software and cache.

@aeonx said:
What interests me most of all is how these drives perform specifically for Plex use, i.e. streaming and storing media with large file sizes. I’m less interested in how they work for backups and data.

Also: Do different hard drive brands work together well, or is it advisable to only use one sort for a Plex server?

All drive brands conform to standards; get whatever you like and mix and match, it won’t matter. Plex doesn’t care about drives as long as they work. An SSD is the key to good performance.

@rjparker1 said:

I want to see Task Manager from the SAME machine showing that your drives are close to MAX while the network utilization is 100%… good luck, it’s impossible, but if you think you can defeat physics, knock yourself out. A drive can’t have 100% pure data utilization; there is SOME overhead, and that means the network will not be saturated because of simple limitations in software and cache.

Putting things in capital letters does not make them true. You assume far too much about the people you are conversing with.

I understand the numbers perfectly fine. I just ran CrystalDiskMark on my work laptop. It is an old Precision M4300 with the original OEM hard drive. Sequential transfers: 85 MBps read, 74 MBps write. 85 MBps equates to 680 Mbps, or roughly 68% of the theoretical max for gigabit networking. Remember, this is an OEM hard drive, so fairly slow, and several years old. Fast desktop drives do far better, especially in modern hardware. It is EASY to saturate a gigabit network without saturating your bus or hard drives.

For my streaming, I have my media hosted on a NAS with 6 drives. I will certainly not max out my network with a stream or two, but I will also not be maxing out the IO on my NAS. I have no doubt at all that my NAS storage IO is vastly superior to the network connection. And my virtual machines’ storage (yes, I have a small farm at home, so I definitely know what virtualization is) is hosted entirely on a NAS filled with SSDs. I will definitely not run out of available IO before I run out of gigabit network.

All I have to do to show you that the network runs out before disk performance is copy a file from one NAS to another. You will see the performance PEG around 125 MBps as reported by SNMP, Task Manager, whatever. But then I can start another transfer between the same two NAS units from two more machines (my NAS is bonded, so 2 Gbps total throughput is possible), and I will still be able to maintain 125 MBps between those two hosts. From the same storage. So, plenty of headroom for drive performance, with gigabit being the limiting factor. See how that all works???

If you want to limit your entire discussion to streaming videos, then so be it. But don’t make blanket statements about what you think performance is or assumptions about who you are talking to. I would hate to throw out things like “I have floppy drives older than you” or some other such things.

@drinehart said:
I would hate to throw out things like “I have floppy drives older than you” or some other such things.

Funny, and true. I have been in IO performance for the better part of… well, too long… this was the first drive I did performance work on, and it currently sits on my desk to remind me how old I am. Let’s see if you can guess how long that’s been…

Sorry, but you just don’t know what you’re talking about, @rjparker1. I’ve been working with all types of networks since the beginning of time. I used to do nothing but write device drivers and network libraries for all kinds of communications equipment, from multi-point COM ports to X.25 to T3 links and beyond. I have tons of code being used on large telecom routers running all kinds of protocols.

I design and architect very large networks for many Fortune 50 companies and some of the largest systems on the planet like those used by our government agencies (NASA, CIA, NSA).

I’ll say it again, you simply don’t know what you’re saying.

Why are you assuming that you only have to cover the bandwidth of one HDD? Why are you assuming it’s a spinning drive and not an SSD? You’re making tons of assumptions trying to prove you’re right.

Any halfway decent sized system doesn’t use single drives. They use caching, SSDs, data spread across multiple drives, etc. The IO is broken up over multiple drives or even segments of drive arrays. Drive IO is not going to be the limiting factor.

Even your most basic server software with a single drive or two will use RAM to cache the drive and allow you to saturate a single 1GB network card.

I could show you stats from my network equipment to prove a point, but I know you wouldn’t understand them. If you can’t do the simple math involved to understand what a network card is capable of, then you can’t understand the rest. You have to learn to crawl before you can walk or run.

Let me help you understand. Do a test on your own system, assuming you have what’s required; it’s very basic equipment.

1. On your server, copy a directory full of media files from one location to another on your SSD drive. If you have two SSD drives in your server, then copy from one drive to the other and time, down to the fraction of a second, how long it takes. Divide the total size of the files moved by the time to get your transfer rate.

Your transfer rate will be well above 125 MB a second, and probably in the 300 MB/s+ range, even with cheap SSD drives.

2. Do the exact same test as above by moving the same set of files across your network, from an SSD drive on the server to an SSD drive on your notebook or desktop computer.

Your transfer rate will be limited to 125 MB/s (theoretical).

Now where is the bottleneck? Is it in your disk system or your network system?
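A minimal Python timing harness for that test might look like the sketch below; the source and destination paths are placeholders, so point them at a local SSD for test 1 and at a mounted network share for test 2.

```python
# Minimal timing harness for the copy test above. SRC and DST are
# placeholder paths; DST must not already exist (shutil.copytree creates it).
import shutil
import time
from pathlib import Path

SRC = Path("/mnt/ssd1/media_sample")   # hypothetical source directory
DST = Path("/mnt/ssd2/media_copy")     # hypothetical destination

total_bytes = sum(f.stat().st_size for f in SRC.rglob("*") if f.is_file())

start = time.monotonic()
shutil.copytree(SRC, DST)
elapsed = time.monotonic() - start

print(f"Copied {total_bytes / 1e9:.2f} GB in {elapsed:.1f} s "
      f"= {total_bytes / elapsed / 1e6:.1f} MB/s")
```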

@rjparker1 said:

The DRIVE IO is WAY higher than any network throughput, and it would take several machines, like dozens, to even ATTEMPT to saturate the network… Corporate networks are still largely 1G… so explain that! Show me the PICS, your screen prints; show me at the SAME time how your DRIVES are using up 125 MB/s of performance… because a DRIVE or RAID can’t maintain that performance before the network runs out; it’s not even possible because of APP overhead and drive delays… anyway, PROVE it!

No way. You must work at a small shop if you are using 1GB NICs in your servers. A single user can saturate a 1GB link to a file server quite easily. 1GB is still pretty common at the desktop but long gone in the server world.

Try working with database guys loading data onto servers, or with distributed servers, and even 10GB NICs barely cut it.

How do you back up your servers from a central backup system if you are only using 1GB links? There isn’t enough time in the day to back up anything but the smallest of servers over 1GB links.

Same with any type of server replication. Simply can’t be done over 1GB.

You’re talking like a guy who manages a server or two in a small 10-to-25-person shop, or a small business that is using/storing simple things like Word docs or Excel files.

I think what @drinehart was referring to is that Plex isn’t trying to send the entire file; rather, it breaks it into discrete segments and sends those in a timely manner.

Here I am running two movies without even getting close to the theoretical limit of gigabit.

Not discounting @cayars either; clearly he knows his stuff too. It is clearly easy to saturate a gigabit network from disk. This is just a basic copy on my RAID array.

But that clearly isn’t needed by Plex to play a single movie, or even a couple.

Absolutely agree with everything you said @wesman

It’s the blanket statements he made about the NIC that many of us take exception to. People read forums and take things at face value if no one challenges them and shows something to be wrong. Just wanted to make sure the record is clear.

Of course for the average Plex system 1GB is perfectly fine.

In my experience I do like Seagate; I have two Seagates in use today with more than 7 years of power-on time, and they still work fine.

All the drives I’ve had from WD have died, every single one (more than 20).

@cayars said:

@drinehart said:

The DRIVE IO is WAY higher than any network throughput, and it would take several machines, like dozens, to even ATTEMPT to saturate the network… Corporate networks are still largely 1G… so explain that! Show me the PICS, your screen prints; show me at the SAME time how your DRIVES are using up 125 MB/s of performance… because a DRIVE or RAID can’t maintain that performance before the network runs out; it’s not even possible because of APP overhead and drive delays… anyway, PROVE it!

This quote is wrongly attributed to me by the way… it should be attributed correctly to @rjparker1.

Have you guys seen the Intel 750 NVMe U.2 SSD? I can get 2.6 gigabytes/s read speed out of it, and that is not even in RAID 0. I’ve seen benchmarks where they have been set up in RAID 0 and hit 3.6 GB/s, effectively maxing out the PCIe bus. I can guarantee this drive will saturate a network, even a 10 GbE network. Just as Intel introduced the affordable SSD 7 or 8 years ago, this will be the new standard soon. And networks will still be far from good enough for another 5-10 years.
What about NIC teaming? Well, if you use Windows 10 Pro, forget it until Microsoft releases a fix for this missing capability in Windows 10, sometime in 2017.
I for one find that a 1 Gb/s network really hampers performance for the type of things I do. At ~130 MB/s max, I routinely move up to 50 GB of data at a time to a NAS. It’s a low-end NAS, but I have no doubt that it’s not the limiting factor.
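A quick unit check on those figures as a small Python sketch; the drive and link numbers are just the ones quoted in this thread, not benchmarks of my own.

```python
# Convert a drive's sequential read (GB/s) to Gbps and compare against
# common link speeds. All figures are the ones quoted above.

drive_gb_s = 2.6                       # claimed sequential read, gigabytes/s
drive_gbps = drive_gb_s * 8            # ~20.8 gigabits/s

for name, link_gbps in [("1 GbE", 1), ("4x1 GbE bond", 4), ("10 GbE", 10)]:
    verdict = "saturated" if drive_gbps > link_gbps else "has headroom"
    print(f"{name:13s}: {verdict} "
          f"(drive {drive_gbps:.1f} Gbps vs link {link_gbps} Gbps)")
```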

@drinehart said:
This quote is wrongly attributed to me by the way… it should be attributed correctly to @rjparker1.

Sorry about that. Fixed.

@cayars - no worries, it is the odd way nested quotes are handled. I knew you hadn’t meant to do it.

Nothing new to contribute this late in the game, but I couldn’t help adding my vote to the masses of other IT professionals pointing out how wrong rjparker1 is in his assertions.

I work for a large institution and have been doing IT professionally for 25+ years. His claims about networking and throughput are horribly incorrect. I advise readers to please instead listen to the wise, educated advice of the numerous people also pointing out that he’s wrong.

Thanks for all the informative advice everyone!