Smart SSD Cache for first 30s of each video

Hi there! I use Plex as a server at home. We usually only have 1-3 users, so once a hard drive is spinning and serving media, it’s more than fast enough.

However, when a user first goes to play a video, there is a long delay, sometimes as long as 15 seconds, while the server spins the hard drive up from idle. I can hear it doing this, and I can work around it by keeping the disk spinning, but that reduces its life.

But I came up with an idea that might help budget Plexers like me get better performance. Most of us know the concept of SSD caching, but that usually requires a NAS setup and is hit or miss: sometimes it works great and sometimes it doesn’t. What if we could build a smart-cache feature into Plex’s library management instead?

Basically, for every video over, let’s say, 5 minutes long, Plex would copy the first 30 seconds or so of the video to an SSD dedicated to caching. When a user goes to play the video, it would play from the SSD while the hard drive spins up, then transition over to the full file on the HDD.
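
To make the idea concrete, here is a rough sketch (not an existing Plex feature) of how such 30-second head clips could be generated with ffmpeg; the paths are made up, and the handoff from clip to full file at playback time is the part Plex itself would have to handle:

# Make a 30-second, stream-copied head clip of each movie on the SSD.
# -c copy avoids re-encoding, so the clip keeps the original container/codecs
# and is created almost instantly.
for f in /mnt/hdd/media/Movies/*/*.mkv; do
    out="/mnt/ssd-cache/$(basename "$f")"
    [ -e "$out" ] && continue                       # skip clips already made
    ffmpeg -nostdin -loglevel error -i "$f" -t 30 -c copy "$out"
done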

I’m not sure how streaming works under the hood. This might be very hard to implement, but in theory it should work. Another possible downside is if the file goes missing on the HDD: the user would initially be able to watch the content, but after 30 seconds or so be told the file is missing, which might be jarring.

What are the thoughts of the devs and community? Would this be useful? Could it be added to Plex? Does something like this already exist with symlinking on Linux or something? Thanks!

No, it doesn’t. Actually, what reduces the lifespan of a mechanical disk drive is having the platters constantly spinning up and down.

2 Likes

Interesting! The method I had previously used was writing and removing a file every few seconds. Is there another method you’d recommend, though?
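
Roughly, that workaround looks like this (a sketch only; the path and interval are made up, and the later replies suggest just letting the drive stay active via its power settings instead):

# Crude keep-alive loop: write and delete a tiny file so the disk never idles
# long enough to spin down. sync forces the write out of the page cache so
# the platters actually have to move.
while true; do
    touch /mnt/media/.keepalive
    sync
    rm -f /mnt/media/.keepalive
    sleep 30        # must be shorter than the drive's idle/spin-down timeout
done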

Recommend for what? IMHO this whole feature request is not practical at all.
You seem to forget that the transcoding parameters can change for each individual client type.
Now factor in the different bandwidth restrictions each player may or may not have set in its preferences.
Now add the difficulty that your server can also be accessed by several different clients at the same time.
And depending on which library view mode each of these different clients is using, the number of videos that would potentially need pre-caching runs into the hundreds.

That’s only if you have the space to optimize every video, which many of us do not; I only have about 6TB of storage. You could just keep the first 30 seconds in the same format as the main file.

I only keep one copy of each video and it transcodes on the fly; no optimized versions, since they make no noticeable difference for me and take up too much space. Why wouldn’t it be faster to transcode off an SSD instead of an HDD?

If there’s something I want to watch soon, I’ll often store it on a USB 2.0 flash drive because it plays instantly. Anything on my HDD takes a while to load.

Indeed…

The thing that will actually wear a drive faster is this kind of behavior…

Disk is asleep > Spins up due to request > Reaches normal operating temperature > Spins down after a period of inactivity > Cools down to below normal operating temperature > Repeat…

Physical hardware expands and contracts with variation in temperature, and that behavior will actually put undue stress and wear on the motor bearings and contacts.

Certain drives are actually designed to run 24/7: NAS drives, for example…

@lukakamaps_gmail_com … Turn off your power saving and allow the drives to remain active 24/7. They will actually prefer it!
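
On Linux, keeping a drive from spinning down is usually a matter of disabling its standby timer; a sketch with placeholder device names (not every drive or USB bridge honors these settings):

# Disable the standby (spin-down) timer and, where supported, aggressive
# power management. /dev/sdX stands in for each data drive.
sudo hdparm -S 0 /dev/sdX      # 0 = never spin down
sudo hdparm -B 255 /dev/sdX    # 255 = disable APM (some drives only accept 254)
# These settings may not survive a reboot; on Debian/Ubuntu they can typically
# be persisted via /etc/hdparm.conf.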

One of the ways you could improve transcoding performance would be to add more RAM to the machine, create a RAM drive, and move the Plex transcode location to that RAM drive.

With the amount of users you say you have, a 16GB RAM drive should be adequate for your needs.

And now when a file needs to be transcoded, it would occur like this…

Video is read from the HD storage > Transcoded in CPU > Transcoded chunks are written to RAM Drive > Transcoded chunks are read from RAM drive and streamed to the user.

Very efficient :+1:
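
A minimal sketch of that setup, assuming a Linux host; the mount point and size are examples, and the Plex side is the “Transcoder temporary directory” setting under Settings > Transcoder:

# Create a 16 GB RAM-backed scratch area and point Plex's transcoder at it.
sudo mkdir -p /mnt/plex-transcode
sudo mount -t tmpfs -o size=16G,mode=1777 tmpfs /mnt/plex-transcode

# To make it permanent, add a line like this to /etc/fstab:
#   tmpfs  /mnt/plex-transcode  tmpfs  defaults,size=16G,mode=1777  0  0

# Then set "Transcoder temporary directory" in Plex (Settings > Transcoder)
# to /mnt/plex-transcode and restart PMS.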

If I may augment here?

Most of the newer HDDs will provide 200 MB/sec of data transfer.
A NAS volume will provide much more (400+ MB/sec).

At 200 MB/sec, 200 * 8 (bits/byte) * 1.2 (Plex + TCP overhead) = 1,920 Mbps.

Do you have 2+ Gbps streaming upload demand where a single HDD is insufficient?

1,920 Mbps ≈ 26x 60 Mbps videos playing simultaneously (each 60 Mbps stream needs roughly 72 Mbps on the wire once that 1.2x overhead is applied).
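
A quick sanity check of those figures in shell arithmetic (the 1.2x overhead factor and the 60 Mbps per-stream bitrate are the assumptions above):

# 200 MB/s of disk reads -> wire traffic with 1.2x overhead, and how many
# 60 Mbps streams that supports (bash integer math, so results are truncated).
wire_mbps=$(( 200 * 8 * 12 / 10 ))    # 1920 Mbps on the wire
per_stream=$(( 60 * 12 / 10 ))        # 72 Mbps per 60 Mbps video incl. overhead
echo "$(( wire_mbps / per_stream )) simultaneous streams"    # -> 26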

Actually @ChuckPa, I think the OP’s original request was based on his experience of the large delay before a stream actually starts, rather than once the stream is running.

Oh, and the figures you are quoting are a little on the optimistic side. You might get those numbers for contiguous linear reads in a best-case scenario with entirely unfragmented data; in reality, most drives in most people’s machines will never achieve that.

1 Like

Exactly this.

I’ll be testing some setups once I have everything set up, and I’ll share any results and solutions here, but I think my initial request can be ignored, tbh.

Thanks to everyone for the input and clear explanations!

1 Like

Thank you for pointing that out.

Delays at startup are subject to a whole bunch of things – almost NONE of which relate to RAM/Cache.

  1. Read speed of the media
  2. Can the media be direct played
  3. If transcoding is required
    – Is there transcoding hardware for the video ?
    – Does the audio need conversion?
    – (and the biggest problem) Do subtitles need to be burned into the video?

SSD cache won’t help any of these situations.

Reading a random media file from the HDD is subject to the innate latency of the file system, LAN, and NAS. The file won’t be in the SSD cache, so there is no net gain.

Indeed, I would agree with that; however, I would definitely add a little wisdom to it…

If you can make a number of little improvements here and there, they can all add up to a significant difference.

A bit more RAM, a RAM drive for transcoding, everything on gigabit LAN or above, unfragmented source videos, faster data storage (perhaps RAID 10), and anti-virus exceptions for important areas / known file types.

The list goes on… Each little improvement might not seem to make much difference, but by the time you have tweaked this and that, the overall difference can be huge.

Oh… AV exceptions are a MUST. There’s simply no need to have your AV scanning MKVs, MP4s, etc., as well as your transcode areas and the Plex executables… These tweaks really can make a big difference.

If you’re speaking about Windows? Oh heck yes.

This is the Linux forum and I’m speaking about Linux in general.

I have 100TB of media. No way can that be defragmented. Defragmenting any Linux filesystem really is an exercise in futility. The best you can get is locality of reference when you load the disk/volume the first time. In that case, files will be in the same / adjacent inode groups.

Even if splattered across the drive, you’re talking milliseconds.
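
For anyone who wants numbers rather than assertions, XFS can report its own fragmentation; a sketch using this box’s array device (xfs_db is invoked read-only here):

# Report the volume's fragmentation factor (read-only, no changes made).
sudo xfs_db -r -c frag /dev/md3
# prints something like: actual <N>, ideal <M>, fragmentation factor <X>%

# xfs_fsr exists to reorganize files in place, but on a write-once media
# library the gain is usually negligible.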

Now, all these ‘small changes’ don’t add up to enough to compensate for
– Insufficient CPU/HW bandwidth
– Insufficient network bandwidth
– Poorly curated media. (a BIG problem for most users)

1 Like

To give a real-world example:

  1. I have two NUC8i7HVK PMS servers. Both have 10GbE ethernet
  2. The LAN is full 10 GbE based (24 port 10G switch)
  3. The NAS volume (20 GbE) delivers 1.4 GB/sec from the file system. (saturates the wire)

The only time I see slowness when playback starts is when I’ve messed up ripping the BluRay. I define slowness as requiring more than 5 seconds to start.

1 Like

Yeah… Not entirely!

I’ve always questioned the Linux crowd’s stance on file fragmentation. I’ve heard arguments such as … “*UX systems just handle it better”, but I just don’t buy that!

Here’s a scenario… You are pulling a bunch of files from the internet via various means and, at the same time, doing a file dump of freshly converted content… In this scenario, how well the files get laid down on the disk depends not just on the OS but also on the software, and it is very likely that these files will end up scattered into thousands and thousands of chunks all over the drive.

As far as I can tell, regardless of the OS, that will absolutely have an impact on Disk I/O when you are trying to read them back.

In regards to the small changes…

These are the things that I had forgotten to add to my list, although I did state…

Now to reference your real-world example…

Much like me, you are a geek, and as such, have filled your kit with high performance gear, but start replacing that with old 10/100 switches, poor 2.4Ghz wireless, old 5400 rpm WD blues, and you are gonna be in a world of pain.

You have simply done exactly what I was suggesting, but to a much larger degree and with obviously good results :smiley: :+1:

EDIT: And now for dinner !! lol

*UX file systems were designed/created for “Big Data”.

I run XFS on a 100TB array (11x 12TB RAID 6)

[chuck@glockner ~.1998]$ xfs_info /mnt/vol
meta-data=/dev/md3               isize=512    agcount=99, agsize=268435200 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=26367194880, imaxpct=1
         =                       sunit=256    swidth=2304 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[chuck@glockner ~.1999]$ df -h /mnt/vol
Filesystem      Size  Used Avail Use% Mounted on
/dev/md3         99T   45T   54T  46% /mnt/vol
[chuck@glockner ~.2000]$

If you’re going to do it – Do it right the first time.

It’ll be a LONG time before 10 GbE in the home is insufficient.

2 Likes

Still doesn’t mean you can programmatically defy the laws of physics… A disk will only spin so fast, and a read arm will only move so fast, and if you have to wait for the disk to spin around just to pick up another few bytes of data because it’s not right next to the last bit you needed, then your disk I/O will be a bottleneck.

I’ll have to look into this more for sure…

Damn!.. That’s a shed loada’ disks!

I’m currently running 2x RAID10 arrays, with 4x 6TB and 4x 8TB housed in an 8 bay USB3 RAID box, giving me 28 TB of raw storage, 25.4 TB of usable.

The RAID box states it’s capable of 5 Gbit/s (approx. 640 MB/s), but at best I’m getting 194 MB/s reads.

I’m pretty sure my bottleneck is either the cheap USB3 card, or perhaps the 9 year old HP ProLiant ML350 Gen6 motherboard it’s connected to.
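
As a rough way to check where that ceiling comes from (device and file names are placeholders, not from this setup):

# Raw sequential read straight from the array's block device.
sudo hdparm -t /dev/sdX

# Large direct read through the filesystem (O_DIRECT skips the page cache).
dd if=/mnt/array/some-big-file of=/dev/null bs=1M count=8192 \
   iflag=direct status=progress
# If both top out well below the enclosure's rated 5 Gbit/s, the USB3 host
# controller or the old motherboard's PCIe slot is the likely suspect.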

I’m kinda promising myself a new PMS at some point later this year, but first I need to replace the front speakers of my home theater rig.

A pair of Wharfedale EVO 4.4s is on the shopping list for March…

RAID 6 is also something I need to look into. I currently swear by RAID 10 and would bite my own arm off to avoid RAID 5, but RAID 6 does sound interesting.

I don’t like RAID 10. With a striped mirror, if one copy faults, how do I know which one is correct? There is no parity to figure it out.

In RAID 10, thinking of a 2 x 2 example (A1, B1, A2, B2), you can lose either one of the A’s and either one of the B’s and still recover. However, should you lose both A1 and A2, it’s game over for the volume.

I ran RAID 5 for a LONG time and NEVER lost a volume or had bad data.
I’ve had RAID 10 (RAID 1 + 0) give bad data: write to it, have an abrupt power-off, and now you don’t know which disk(s) to believe.

I now use RAID 6 because of the time it takes to back this thing up.
(I have a RAID 5 mirror on the QNAP of this entire volume. The QNAP is OFF until needed)

RAID 6 gives me the protection I want in the case where a second drive fails while I’m rebuilding from the first drive failure.

This event is VERY unlikely given the class of drives I’ve purchased, but I’m still ‘planning for the worst’.
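
For the curious, an array like that is assembled with mdadm in one line; this is purely an illustration, since creating an array wipes the member disks (the device names match the 11-disk layout shown further down):

# Illustration only -- 11 whole-disk members (sdj..sdt), two of which are
# parity in RAID 6.
sudo mdadm --create /dev/md3 --level=6 --raid-devices=11 /dev/sd[j-t]

# Watch the initial sync / any later rebuild:
cat /proc/mdstat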

1 Like

That’s a concern for single-disk systems, too.

ZFS and BTRFS checksum all blocks written to disk for this reason.

The “double fault during rebuild” scenario is brutally painful. Bigger and bigger disks make it even more important to have +2 parity.

I haven’t pulled the wrong drive out of a degraded array for years and years now, thank goodness.

Pro tip: when building an array, make sure to use drives from different production batches, manufacturing facilities, or even brands.
This reduces the chance that all the drives from one batch give up the ghost at roughly the same time.
The added stress of rebuilding a degraded array only amplifies this effect.

@Volts

Cockpit will show me which drive (by S/N) has failed.

This is why I labeled each drive with the last 4 digits of its S/N when I was building the system.

(This image is with 10 drives. The system now has 11 HDDs.)
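
Even without Cockpit, the serial-to-device mapping can be pulled from the shell:

# List every block device with its model and serial number.
lsblk -o NAME,MODEL,SERIAL,SIZE

# Or query a single suspect drive (smartmontools package).
sudo smartctl -i /dev/sdj | grep -i serial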

Here’s the current config:

[chuck@glockner ~.1997]$ lsblk -f
NAME    FSTYPE            LABEL           UUID                                 FSAVAIL FSUSE% MOUNTPOINT
loop0   squashfs                                                                     0   100% /snap/core18/2253
loop1   squashfs                                                                     0   100% /snap/core18/2284
loop2   squashfs                                                                     0   100% /snap/snapd/14295
loop3   squashfs                                                                     0   100% /snap/snapd/14549
loop4   squashfs                                                                     0   100% /snap/lxd/21803
loop5   squashfs                                                                     0   100% /snap/lxd/21835
loop6   squashfs                                                                     0   100% /snap/core20/1328
loop7   squashfs                                                                     0   100% /snap/core20/1270
sda                                                                                           
├─sda1  vfat                              F26B-2C4A                             505.8M     1% /boot/efi
└─sda2  linux_raid_member ubuntu-server:0 040616f2-c379-1c09-af09-b27a38404888                
  └─md0 xfs                               b437a784-90ce-4677-8081-d44360e8c099  214.1G     8% /
sdb                                                                                           
├─sdb1  vfat                              F289-FEE1                                           
└─sdb2  linux_raid_member ubuntu-server:0 040616f2-c379-1c09-af09-b27a38404888                
  └─md0 xfs                               b437a784-90ce-4677-8081-d44360e8c099  214.1G     8% /
sdc                                                                                           
├─sdc1  vfat                              F2A9-3357                                           
└─sdc2  linux_raid_member ubuntu-server:0 040616f2-c379-1c09-af09-b27a38404888                
  └─md0 xfs                               b437a784-90ce-4677-8081-d44360e8c099  214.1G     8% /
sdd                                                                                           
├─sdd1  vfat                              F2C7-F88C                                           
└─sdd2  linux_raid_member ubuntu-server:0 040616f2-c379-1c09-af09-b27a38404888                
  └─md0 xfs                               b437a784-90ce-4677-8081-d44360e8c099  214.1G     8% /
sde                                                                                           
├─sde1  zfs_member        default         14739127432687578001                                
└─sde9                                                                                        
sdf     linux_raid_member glockner:1      729979aa-f255-bc2f-2ce9-79b8087cbdbd                
└─md1   xfs               home            bb93809e-9f83-4c04-95e9-f519eff64dc3  927.9G     0% /home
sdg     linux_raid_member glockner:1      729979aa-f255-bc2f-2ce9-79b8087cbdbd                
└─md1   xfs               home            bb93809e-9f83-4c04-95e9-f519eff64dc3  927.9G     0% /home
sdh     linux_raid_member glockner:1      729979aa-f255-bc2f-2ce9-79b8087cbdbd                
└─md1   xfs               home            bb93809e-9f83-4c04-95e9-f519eff64dc3  927.9G     0% /home
sdi     linux_raid_member glockner:1      729979aa-f255-bc2f-2ce9-79b8087cbdbd                
└─md1   xfs               home            bb93809e-9f83-4c04-95e9-f519eff64dc3  927.9G     0% /home
sdj     linux_raid_member glockner:3      93c28f54-70e9-934f-1310-327baef0370d                
└─md3   xfs               vol             317bddcd-4a71-45c3-addc-a7bf47a6cffc   53.8T    45% /mnt/vol
sdk     linux_raid_member glockner:3      93c28f54-70e9-934f-1310-327baef0370d                
└─md3   xfs               vol             317bddcd-4a71-45c3-addc-a7bf47a6cffc   53.8T    45% /mnt/vol
sdl     linux_raid_member glockner:3      93c28f54-70e9-934f-1310-327baef0370d                
└─md3   xfs               vol             317bddcd-4a71-45c3-addc-a7bf47a6cffc   53.8T    45% /mnt/vol
sdm     linux_raid_member glockner:3      93c28f54-70e9-934f-1310-327baef0370d                
└─md3   xfs               vol             317bddcd-4a71-45c3-addc-a7bf47a6cffc   53.8T    45% /mnt/vol
sdn     linux_raid_member glockner:3      93c28f54-70e9-934f-1310-327baef0370d                
└─md3   xfs               vol             317bddcd-4a71-45c3-addc-a7bf47a6cffc   53.8T    45% /mnt/vol
sdo     linux_raid_member glockner:3      93c28f54-70e9-934f-1310-327baef0370d                
└─md3   xfs               vol             317bddcd-4a71-45c3-addc-a7bf47a6cffc   53.8T    45% /mnt/vol
sdp     linux_raid_member glockner:3      93c28f54-70e9-934f-1310-327baef0370d                
└─md3   xfs               vol             317bddcd-4a71-45c3-addc-a7bf47a6cffc   53.8T    45% /mnt/vol
sdq     linux_raid_member glockner:3      93c28f54-70e9-934f-1310-327baef0370d                
└─md3   xfs               vol             317bddcd-4a71-45c3-addc-a7bf47a6cffc   53.8T    45% /mnt/vol
sdr     linux_raid_member glockner:3      93c28f54-70e9-934f-1310-327baef0370d                
└─md3   xfs               vol             317bddcd-4a71-45c3-addc-a7bf47a6cffc   53.8T    45% /mnt/vol
sds     linux_raid_member glockner:3      93c28f54-70e9-934f-1310-327baef0370d                
└─md3   xfs               vol             317bddcd-4a71-45c3-addc-a7bf47a6cffc   53.8T    45% /mnt/vol
sdt     linux_raid_member glockner:3      93c28f54-70e9-934f-1310-327baef0370d                
└─md3   xfs               vol             317bddcd-4a71-45c3-addc-a7bf47a6cffc   53.8T    45% /mnt/vol
nvme1n1 linux_raid_member glockner:2      d6ae6d3f-2bb9-434e-bede-dbc7cb87d5db                
└─md2   xfs               vmssd           d9603f57-6824-4a6e-81e4-edf64f654634    1.2T    35% /mnt/vmssd
nvme0n1 linux_raid_member glockner:2      d6ae6d3f-2bb9-434e-bede-dbc7cb87d5db                
└─md2   xfs               vmssd           d9603f57-6824-4a6e-81e4-edf64f654634    1.2T    35% /mnt/vmssd
[chuck@glockner ~.1998]$
1 Like