Constant buffering

Are those the MergerFS drives?

If they are… move the files to a different drive (something non-merger) and retest

Yeah, everything is on the MergerFS ‘drive’, and it’s hit or miss whether it works. You think MergerFS is causing some issues? I’ve used it for a while, but it’s been updated recently, so maybe there’s a bug or something.

Yeah… I’ve seen mergerfs cause lots of problems.

You’re right – hit or miss… This might be a miss.

Copy some episodes / files you know have problems to a proper location (even the SSD if you want).

Create a small test section and try it.
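
Something along these lines (just a sketch – the paths are made-up examples, adjust to your layout):

# Copy a known-problem episode off the MergerFS pool onto a plain drive / the SSD
mkdir -p /mnt/ssd/plex-test
rsync -av --progress "/mnt/mergerfs/TV/Problem Show/Season 01/" /mnt/ssd/plex-test/

Point a small test library at that folder in PMS and try the same episodes.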

Hmph, ok. I have noticed that my HDDs end up with filesystem issues pretty regularly. But I don’t have a RAID, and this seemed like the best option. Do you have a better suggestion for uniting drives? I’m open to a better / more reliable option, as long as things like Sonarr (which doesn’t support multiple paths) work well with it – which is why I moved to MergerFS in the first place.

IMHO —

MergerFS is “poor man’s software RAID”. It’s ok for experimenting / test beds but you want production stability.

RAID is my go-to, and this is my beast. I started with 3x 4TB drives 8 years ago and it was a screamer. It’s scaled perfectly since then, even with a few HDD failures along the way – never any data loss :slight_smile:

[chuck@glockner ~.1998]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            126G     0  126G   0% /dev
/dev/md0        233G   19G  214G   8% /
/dev/sda1       511M  5.3M  506M   2% /boot/efi
/dev/md2        1.9T  671G  1.3T  36% /mnt/vmssd
/dev/md1        931G  3.0G  928G   1% /home
/dev/md3         99T   45T   54T  46% /mnt/vol
[chuck@glockner ~.1999]$

Here are the drives which comprise it.

┌┈▶ sdj     running sas    4:0:4:0    HGST_HUH721212AL        W925 5000cca291ea9801 disk    10.9T linux_raid_member                                               
├┈▶ sdk     running sas    4:0:5:0    HGST_HUH721212AL        W925 5000cca291e7dedd disk    10.9T linux_raid_member                                               
├┈▶ sdl     running sas    4:0:6:0    HGST_HUH721212AL        W925 5000cca291e1d226 disk    10.9T linux_raid_member                                               
├┈▶ sdm     running sas    4:0:7:0    HGST_HUH721212AL        W925 5000cca291dee455 disk    10.9T linux_raid_member                                               
├┈▶ sdn     running sas    4:0:8:0    HGST_HUH721212AL        W925 5000cca291e7d217 disk    10.9T linux_raid_member                                               
├┈▶ sdo     running sas    4:0:9:0    HGST_HUH721212AL        W925 5000cca291db0daf disk    10.9T linux_raid_member                                               
├┈▶ sdp     running sas    4:0:10:0   HGST_HUH721212AL        W925 5000cca291e8089f disk    10.9T linux_raid_member                                               
├┈▶ sdq     running sas    4:0:11:0   HGST_HUH721212AL        W925 5000cca2b0d294ef disk    10.9T linux_raid_member                                               
├┈▶ sdr     running sas    4:0:12:0   HGST_HUH721212AL        W925 5000cca291e1dbd6 disk    10.9T linux_raid_member                                               
├┈▶ sds     running sas    4:0:13:0   HGST_HUH721212AL        W925 5000cca291eaddc3 disk    10.9T linux_raid_member                                               
└┬▶ sdt     running sas    4:0:14:0   HGST_HUH721212AL        W925 5000cca291e8040e disk    10.9T linux_raid_member                                               
 └┈┈md3                                                                             raid6   98.2T xfs               /mnt/vol           98.2T   53.8T  44.4T    45%
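
If you’re wondering how it shrugs off dead drives, the day-to-day checks look roughly like this (a sketch – md3 is my array, sdX is a placeholder for whichever disk is involved):

# Quick health check of the md arrays
cat /proc/mdstat
sudo mdadm --detail /dev/md3

# Replacing a failed member (RAID 6 keeps running degraded while it rebuilds)
sudo mdadm /dev/md3 --fail /dev/sdX --remove /dev/sdX
sudo mdadm /dev/md3 --add /dev/sdX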

Here’s the kicker.

  1. Sonarr writes directly to this storage.
  2. When I rip a disc, it writes over the network to it.
  3. I get 1.5 GB/sec throughput from this array.

I also have a QNAP NAS and a Synology NAS.

If you’re thinking of RAID / a better I/O solution:

  1. Gigabit from the PMS host to the NAS works fine for now – roughly 117 MB/sec on the wire, which gives you on the order of 800 Mbps of usable video streaming. Is that enough?
  2. The load is balanced and the data is protected (RAID 5 / RAID 6).
  3. You can expand it over time (grow as you need if you plan ahead a bit – see the sketch below).
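
For point 3, with Linux mdadm a grow looks roughly like this – just a sketch with placeholder device names, and you’d want backups and a plan before running a reshape:

# Add a new member and reshape the array to use it
sudo mdadm /dev/md3 --add /dev/sdX
sudo mdadm --grow /dev/md3 --raid-devices=12

# Once the reshape finishes, grow the filesystem (xfs in my case)
sudo xfs_growfs /mnt/vol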

My PMS hosts are two NUC8-i7-HVK boxes. One for Plex testing, one for production.

My whole LAN is 10 GbE but that’s only because I need this level of performance with all the data I throw around daily.

That’s totally fair. And I have thought about moving to a RAID, but it sounds like a daunting task. Also, I have a 10TB disk, and IIRC that means I’m going to lose a bunch of “unusable space” – one thing that always aggravated me about RAIDs is that I couldn’t mix and match drive sizes. I also figured that since I don’t have hundreds of users, I wasn’t too concerned with the speed gain from striping, and if I did a RAID 0 (which would support different drive sizes) I’d get no parity, so I felt I didn’t gain that much from RAID. But maybe it’s time to start looking into it.

Tell me what you have for disks in this?

RAID == Redundant Array of Independent Disks (R.A.I.D)

I’ll help you craft this out but you can’t use a single 10TB drive :slight_smile:
(Notice how many I have?)

You can do RAID 0 (striping), with no protection. Lose one drive and you lose it all.

I have 4 disks:

NAME        MOUNTPOINT                    LABEL        SIZE FSTYPE   UUID
sdc                                                   14.6T          
└─sdc1      /Volumes/Media 4              Media 4     14.6T ext4     e97685d9-d13d-416e-ba8b-25ebf7afce23
sdd                                                   12.8T          
└─sdd1      /Volumes/Media                Media       12.8T ext4     2fdf80cd-851e-42fe-a4a4-b08f5000f501
sde                                                   14.6T          
└─sde1      /Volumes/Media 3              Media 3     14.6T ext4     f062d7fd-37f1-4d73-8c0a-9d116d360c56
sdf                                                    9.1T          
└─sdf2      /Volumes/Media 2              Media 2      9.1T ext4     31c264c2-ef31-45e0-a28b-0739571f86f8

2 16TB drives
1 14TB drive
1 10TB drive

Something just happened: I was in PMS and noticed that a friend was buffering an episode, so out of curiosity I went to play it as well. It loaded instantly for me, but he kept buffering (verified via text), which is kind of interesting – I could load it and he could not. It was a direct play at 16 Mbps (I have 1 Gbps fiber, so bandwidth shouldn’t ever be an issue on my end).

Just because you have 1 Gbps fiber… what does he have?

What’s end-to-end throughput?

Yeah, this is what I am hesitant to do… I would love to concat the drives, but I don’t want to be in a total-loss situation.

Here’s where I have a problem with MergerFS.

It’s a layer on top of a bunch of single drives.

  1. Is there any protection?
  2. Is there any performance gain?
  3. What does it really give you? (Just a big honking volume, or are files limited to whatever fits on the target drive?)

Checking… He lives near me in SF, so I’m pretty sure it’s plenty, but I asked to be sure.

Use iperf3.

Open the port … run it both ways
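
Something like this (a sketch – any open port works, the IP is a placeholder):

# Your end
iperf3 -s -p 5201

# His end – once forward, once with -R for the reverse direction
iperf3 -c your.public.ip -p 5201
iperf3 -c your.public.ip -p 5201 -R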

  1. No protection, which I get. But worst case I lose one drive’s worth of data and can recover what I want – probably a ton of stuff that never gets watched anyway. And a 25% loss is much better than the 100% I would have with RAID 0.
  2. Not really, I suppose. I mean, different files being read from different drives would theoretically be faster, I guess.
  3. What I liked about it is pretty much what you said: I can keep similarly structured folders on the individual drives and it presents them all as one concatenated volume.

If there’s a way to get something pretty similar to this that wouldn’t waste a ton of space, even if it’s with parity, I think I’m open to it. I’ll just need to figure out how to get the data moved off and back on without losing anything, which sounds like a big task haha

Haha, he’s a non-techie… let’s see what I can get done.

FWIW, he tried a different episode and it worked, then went back to the original one and it started working fine again, so it’s hard to say exactly what was going on there, I guess.

One side question: do you see any problem with having my transcodes go to RAM (/dev/shm)?

To save wear on the SSD, use RAM. PMS will trim it as it needs.

I have 64 GB in the NUC; 32 GB is the default /dev/shm size. More than enough for 4 transcodes.
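
Quick way to sanity-check it (the setting name below is from memory – it lives under Settings > Transcoder in PMS):

# Confirm how much tmpfs is available for transcodes
df -h /dev/shm

# Then set the PMS "Transcoder temporary directory" to:
/dev/shm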

Awesome – good to know. Yeah, SSD wear is why I moved it there, and I never saw any issues, but I wasn’t sure if there was something I wasn’t considering. I have 64 GB in the server as well and typically don’t use over 10 GB, so I should have plenty of headroom for what I need.

Exactly. The OS always runs minimal. PMS itself is lean, except for spikes during scanning.

Need to call it a night here.

Here is some testing info. Good to start learning overnight.

mtr will give you a basic level of data… from his IP to the IP of your server.

https://www.ateam-oracle.com/post/testing-latency-and-throughput
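
Something like this from each side toward the other (the IP is a placeholder):

# 100 probes, report mode, numeric output so you see the raw hops
mtr --report --report-cycles 100 --no-dns your.server.ip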

Appreciate all the help as always, Chuck. I owe ya a Starbucks drink or something sometime.