Hi there! I use Plex as a server at home. We usually only have 1-3 users, so once a hard drive is spinning and serving media, it's more than fast enough.
However, when a user first goes to play a video, there is a long delay, sometimes as long as 15 seconds, while the server spins up the hard drive from idle. I can hear it happening, and I can work around it by keeping the disk spinning, but that shortens the drive's life.
But I came up with an idea that might help budget Plexers like me get better performance. Most of us know the concept of caching with an SSD, but that requires a NAS setup and isn't reliable: sometimes it works great and sometimes it doesn't. What if we could build a smart-cache feature into Plex's library management instead?
Basically, for every video over, let's say, 5 minutes long, Plex would copy the first 30 seconds or so of the video to an SSD dedicated to caching. When a user goes to play the video, playback would start from the SSD while the hard drive spins up, then hand over to the full file on the HDD.
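For the caching half, I imagine something roughly like the sketch below (the paths, thresholds, and file extension are just placeholders, and it assumes ffmpeg/ffprobe are installed); the seamless handoff from the SSD clip back to the full file is the part Plex itself would have to handle.

```python
#!/usr/bin/env python3
"""Rough sketch of the smart-cache idea: copy the first 30 seconds of
every long video onto an SSD. Paths and thresholds are made up."""

import subprocess
from pathlib import Path

LIBRARY = Path("/mnt/hdd/media")        # hypothetical HDD library root
CACHE = Path("/mnt/ssd/plex-precache")  # hypothetical SSD cache dir
MIN_LENGTH = 5 * 60                     # only cache videos over 5 minutes
CLIP_LENGTH = 30                        # seconds to keep on the SSD

def duration(video: Path) -> float:
    """Ask ffprobe for the container duration in seconds."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", str(video)],
        capture_output=True, text=True, check=True)
    return float(out.stdout.strip())

def cache_clip(video: Path) -> None:
    """Copy the first CLIP_LENGTH seconds without re-encoding (-c copy)."""
    target = CACHE / video.relative_to(LIBRARY)
    target.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(video), "-t", str(CLIP_LENGTH),
         "-c", "copy", str(target)],
        check=True)

if __name__ == "__main__":
    for video in LIBRARY.rglob("*.mkv"):  # .mkv just as an example
        if duration(video) > MIN_LENGTH:
            cache_clip(video)
```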
I'm not sure how streaming works under the hood, so this might be very hard to implement, but in theory it should work. Another possible downside is if the file goes missing on the HDD: the user would initially be able to watch the content, but after 30 seconds or so be told the file is missing, which might be jarring.
What are the thoughts of the devs and community? Would this be useful? Could it be added to Plex? Does something like this already exist with system linking on Linux or something? Thanks!
Recommend for what? IMHO this whole feature request is not practical at all.
You seem to forget that the transcoding parameters can change for each individual client type.
Now factor in the different bandwidth restrictions each player may or may not have set in its preferences.
Now add the fact that your server can also be accessed by several different clients at the same time.
And depending on which library view mode each of those clients is using, the number of videos that would potentially need to be pre-cached runs into the hundreds.
That's only if you have the space to optimize every video, which many of us do not; I only have about 6 TB of storage. You could just cache the first 30 seconds in the same file type as the main file.
I only have one copy of each video saved and it's transcoded on the fly, no optimized versions, since they make no noticeable difference for me and take up too much space. Why wouldn't transcoding off an SSD be faster than off an HDD?
If there's something I want to watch soon, I'll often put it on a USB 2.0 flash drive because it plays instantly. Anything on my HDD takes a while to load.
The thing that will actually wear a drive faster is this kind of behavior…
Disk is asleep > Spins up due to request > Reaches normal operating temperature > Spins down after a period of inactivity > Cools down to below normal operating temperature > Repeat…
Physical hardware expands and contracts with variation in temperature, and that behavior will actually put undue stress and wear on the motor bearings and contacts.
Certain drives are actually designed to run 24/7 > NAS drives…
@lukakamaps_gmail_com … Turn off your power saving and allow the drives to remain active 24/7. They will actually prefer it!
One of the ways you could improve transcoding performance would be to add more RAM to the machine, create a RAM drive, and move the Plex transcode location to it.
With the number of users you say you have, a 16GB RAM drive should be adequate for your needs.
And now when a file needs to be transcoded, it would occur like this…
Video is read from HDD storage > Transcoded by the CPU > Transcoded chunks are written to the RAM drive > Transcoded chunks are read from the RAM drive and streamed to the user.
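If it helps, a rough sketch of setting that up on Linux is below (the mount point and size are my assumptions; run it as root), after which you point Plex at the new location under Settings > Transcoder > "Transcoder temporary directory".

```python
#!/usr/bin/env python3
"""Sketch: create a 16 GB tmpfs RAM drive for Plex transcoding.
The mount point and size are assumptions; run as root."""

import subprocess
from pathlib import Path

MOUNT_POINT = Path("/mnt/plex-transcode")  # hypothetical location
SIZE = "16G"                               # sized for a handful of streams

MOUNT_POINT.mkdir(parents=True, exist_ok=True)

# tmpfs lives in RAM (spilling to swap only under memory pressure)
subprocess.run(
    ["mount", "-t", "tmpfs", "-o", f"size={SIZE},mode=0755",
     "tmpfs", str(MOUNT_POINT)],
    check=True)

print(f"Mounted {SIZE} tmpfs at {MOUNT_POINT}; set it as the transcoder "
      "temporary directory in Plex and restart PMS.")
```

To make it survive a reboot, an /etc/fstab entry such as `tmpfs /mnt/plex-transcode tmpfs size=16G,mode=0755 0 0` does the same job.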
Actually @ChuckPa, I think the OP’s original request was based on his experience of the large delay before a stream actually starts, rather than once the stream is running.
Oh, and the figures you are quoting are a little on the optimistic side. You might see those numbers for contiguous linear reads in a best-case scenario with entirely unfragmented data; in reality, most drives in most people's machines will never achieve that.
I'll be testing some setups once I have everything set up, and I'll share any results and solutions here, but I think my initial request can be ignored, tbh.
Thanks to everyone for the input and clear explanations!
Delays at startup are subject to a whole bunch of things – almost NONE of which relate to RAM/Cache.
Read speed of the media
Can the media be direct played
If transcoding is required
– Is there transcoding hardware for the video ?
– Does the audio need conversion?
– (and the biggest problem) Do subtitles need to be burned into the video?
SSD cache won’t help any of these situations.
Reading a random media file from the HDD is bounded by the innate latency of the filesystem / LAN & NAS. The file won't be in the SSD cache, so there is no net gain.
Indeed, I would agree with that; however, I would definitely add a little wisdom to it…
If you can make a number of little improvements here and there, they can all add up to a significant difference.
A bit more RAM, a RAM drive for transcoding, everything on gigabit LAN or above, unfragmented source videos, faster data storage (perhaps RAID 10), anti-virus exceptions for important areas / known file types.
The list goes on… Each little improvement might not seem to make much difference, but by the time you have tweaked this and that, the overall difference can be huge.
Oh… AV exceptions are a MUST. There's simply no need to have your AV scanning MKVs, MP4s, etc., as well as your transcode areas and the Plex executables… These tweaks really can make a big difference.
This is the Linux forum and I’m speaking about Linux in general.
I have 100 TB of media; there is no way that can be defragmented. Defragmenting any Linux filesystem really is an exercise in futility. The best you can get is locality of reference when you first load the disk/volume, in which case files will end up in the same or adjacent inode groups.
Even if splattered across the drive, you’re talking milliseconds.
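If you want to put a number on "milliseconds", a quick cold-cache timing like the one below will show the first-byte latency for any file you pick (Linux only, run as root so the page cache can be dropped; the file path is just an example).

```python
#!/usr/bin/env python3
"""Sketch: measure cold-cache time-to-first-chunk for one media file.
Needs root to drop the page cache; the path is an example."""

import os
import time
from pathlib import Path

MEDIA_FILE = Path("/mnt/hdd/media/some-movie.mkv")  # any large file
CHUNK = 64 * 1024                                   # first 64 KiB

# Flush dirty pages, then drop clean caches so we actually hit the disk
os.sync()
Path("/proc/sys/vm/drop_caches").write_text("3\n")

start = time.perf_counter()
with MEDIA_FILE.open("rb") as f:
    f.read(CHUNK)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"First {CHUNK // 1024} KiB in {elapsed_ms:.1f} ms")
```

On a drive that is already spun up, that typically comes back in the low tens of milliseconds even for a badly placed file; the multi-second waits come from spin-up and the items in the list above.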
Now, all these 'small changes' don't add up to enough to compensate for
– Insufficient CPU/HW bandwidth
– Insufficient network bandwidth
– Poorly curated media. (a BIG problem for most users)
I have two NUC8i7HVK PMS servers. Both have 10 GbE Ethernet.
The LAN is fully 10 GbE (24-port 10G switch).
The NAS volume (20 GbE) delivers 1.4 GB/s from the file system (it saturates the wire).
The only time I see slowness when playback starts is when I’ve messed up ripping the BluRay. I define slowness as requiring more than 5 seconds to start.
I've always questioned the Linux crowd's stance on file fragmentation. I've heard arguments such as "*UX systems just handle it better", but I just don't buy that!
Here's a scenario… You are pulling a bunch of files from the internet via various means and, at the same time, doing a file dump of freshly converted content. In this scenario, how well the files get laid down on the disk depends not just on the OS but also on the software, and it is very likely that those files will end up scattered into thousands and thousands of chunks all over the drive.
As far as I can tell, regardless of the OS, that will absolutely have an impact on disk I/O when you are trying to read them back.
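Rather than argue it in the abstract, fragmentation is easy to measure: filefrag (from e2fsprogs) reports how many extents each file is split into. A rough sweep over a library might look like the sketch below (the path is made up, and some setups need root).

```python
#!/usr/bin/env python3
"""Sketch: report the extent count of every .mkv under a media root
using filefrag. The library path is an assumption."""

import subprocess
from pathlib import Path

LIBRARY = Path("/mnt/hdd/media")  # hypothetical media root

for video in sorted(LIBRARY.rglob("*.mkv")):
    out = subprocess.run(["filefrag", str(video)],
                         capture_output=True, text=True, check=True)
    # filefrag prints e.g. "/mnt/hdd/media/film.mkv: 12 extents found"
    print(out.stdout.strip())
```

A handful of extents per file is harmless; thousands of extents per file is when the read arm starts doing the kind of work I'm describing.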
In regards to the small changes…
These are the things that I had forgotten to add to my list, although I did state…
Now to reference your real-world example…
Much like me, you are a geek, and as such have filled your kit with high-performance gear; but start replacing that with old 10/100 switches, poor 2.4 GHz wireless, and old 5400 rpm WD Blues, and you are gonna be in a world of pain.
You have simply done exactly what I was suggesting, but to a much larger degree and with obviously good results.
Still doesn't mean you can programmatically defy the laws of physics… A disk will only spin so fast, and a read arm will only move so fast, and if you have to wait for the disk to spin around just to pick up another few bytes of data because they're not right next to the last bit you needed, then your disk I/O will be a bottleneck.
I’ll have to look into this more for sure…
Damn!.. That’s a shed loada’ disks!
I'm currently running 2x RAID 10 arrays, with 4x 6 TB and 4x 8 TB drives housed in an 8-bay USB 3 RAID box: 56 TB of raw disk, giving me 28 TB of RAID 10 capacity (about 25.4 TiB as the OS reports it).
The RAID box states it's capable of 5 Gbit/s (approx. 640 MB/s), but at best I'm getting 194 MB/s reads.
I'm pretty sure my bottleneck is either the cheap USB 3 card or perhaps the 9-year-old HP ProLiant ML350 Gen6 motherboard it's connected to.
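Before I blame the card, I'll probably do a dumb sequential read straight off the array, along the lines of the sketch below (cold cache, run as root; the file path is made up), to see where the ceiling actually is.

```python
#!/usr/bin/env python3
"""Sketch: rough sequential-read benchmark over one large file,
to see whether the array or the USB3 link is the ceiling.
The path is an assumption; run as root so the cache can be dropped."""

import os
import time
from pathlib import Path

TEST_FILE = Path("/mnt/raid/media/big-movie.mkv")  # any multi-GB file
CHUNK = 1024 * 1024                                # 1 MiB reads

os.sync()
Path("/proc/sys/vm/drop_caches").write_text("3\n")

read_bytes = 0
start = time.perf_counter()
with TEST_FILE.open("rb") as f:
    while chunk := f.read(CHUNK):
        read_bytes += len(chunk)
elapsed = time.perf_counter() - start
print(f"{read_bytes / elapsed / 1e6:.0f} MB/s over {read_bytes / 1e9:.1f} GB")
```

If a single big file still tops out around 194 MB/s with nothing else running, the limit is upstream of the disks; if it reads much faster, the earlier number was just contention.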
I’m kinda promising myself a new PMS at some point later this year, but first I need to replace the front speakers of my home theater rig.
A pair of Wharfedale EVO 4.4s is on the shopping list for March…
RAID6 is also something else I need to look into. I currently swear by RAID10 and will bite my own arm off to avoid RAID5, but RAID6 does sound interesting.
I don't like RAID 10. With a striped mirror, if one copy faults, how do I know which one is correct? There is no parity to figure it out.
In RAID 10, thinking of a 2x2 example (A1, B1, A2, B2), you can lose one of the A's and one of the B's and still recover. However, should you lose both A1 & A2, it's game over for the volume.
I ran RAID 5 for a LONG time and NEVER lost a volume or had bad data.
I've had RAID 10 (RAID 1 + 0) give me bad data: write to it, have an abrupt power-off, and now you don't know which disk(s) to believe.
I now use RAID 6 because of the time it takes to back this thing up.
(I have a RAID 5 mirror on the QNAP of this entire volume. The QNAP is OFF until needed)
RAID 6 gives me the protection I want in the case where a second drive fails while I'm rebuilding after the first drive failure.
This event is VERY unlikely given the class drives I’ve purchased but I’m still ‘planning for the worst’.
Pro tip: when building an array, make sure to use drives from different production batches, manufacturing facilities, or even brands.
That reduces the chance that all the drives from one batch give up the ghost at roughly the same time.
The added stress of rebuilding a degraded array amplifies this effect.