Questions about migrating Plex Server to SSD volume on QNAP NAS

Hello fellow Plexians, I need your advice please. My Plex Server runs on a QNAP TS-453Be-8G with a RAID 5 legacy volume of three 5TB drives that I carried over from my old TS-420. Plex runs quite okay and I'm not dissatisfied with the performance, but it isn't as responsive and fast as my old setup, where an SSD-powered Mac mini served as both server and client for Plex. To speed things up I decided to buy a QNAP expansion card to add two M.2 SSDs as RAID 1 to my NAS.

1.) I'm curious whether I'd substantially benefit from installing not only Plex but also QTS onto the SSDs. Besides Plex, Time Machine backups, some shared folders and some sync jobs, my NAS doesn't have a lot else to do, which makes me sceptical. And starting from scratch to get QTS installed on the SSDs scares me off quite a lot.
2.) In the App Center, QTS offers the ability to migrate a QPKG from one volume to another. Does this also work for the Plex Server QPKG, and does it also migrate the Plex Library with all metadata to the destination volume? Anything I should keep in mind?

Any help is very much appreciated, thank you very much!

I think the general idea is for you to add the M.2 RAID and tell QTS that you want to Qtier it in with the other RAID 5 in one big storage pool.

From that point on, QTS will start moving the most-used data over to the SSD array. It does this automatically every night, and after a few days all the important QTS files and Plex metadata plus graphics will end up on it.

You don’t have to make decisions, really, and it’s optimized amazingly well.

As you're going to build the machine up, I'm guessing QTS 4.4.1 and a fresh install is the best course. That said, I wouldn't try it for a month, until 4.4.1 proves stable and they patch the worst bugs with two patch releases, if not three. I tend to be conservative with major changes.

Does this sound about like your plan too?

To offer a comparison, here is my setup, which I find extremely fast:

  1. QTS installed (yes, fresh install) on the M.2 SSDs (which QTS made into a RAID 1 volume as CACHEDEV1).
  2. The main disk array was then re-inserted and QTS booted.
  3. The previous primary becomes CACHEDEV2_DATA.
  4. I have CACHEDEV3 & CACHEDEV4. My CACHEDEV3 is a 2.5" SSD dedicated to PMS.
  5. Once the SSD was installed and instantiated as a single-disk volume, a simple “Migrate To” moved PMS.
  6. I moved the Transcoder Temp back to HDD to avoid unnecessary wear and tear on the SSD (see the sketch below).
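
If you want to do the same, here is a minimal sketch of step 6, assuming your HDD volume is mounted at /share/CACHEDEV2_DATA (the PlexTranscode directory name is just an example):

# Create a scratch directory for transcodes on a spinning-disk volume
# (path is an example; adjust to your actual volume mount)
mkdir -p /share/CACHEDEV2_DATA/PlexTranscode
# Then point PMS at it in the web UI:
# Settings > Transcoder > "Transcoder temporary directory"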

Now I enjoy 535 MB/s I/O for PMS metadata and DB access. QTS is on its own volume (1.06 GB/sec thanks to the RAID pair). The HDDs are more than fast enough for slinging the physical media around (500+ MB/sec).

If interested, I'll answer any additional questions and even help you draw out the steps, but they're pretty much as I described above.

Thank you very much for your responses! Qtier sounds promising for getting the most value out of the precious SSD space. However, I'm not quite sure I'm willing to hand full control over to the machine. Also, activating Qtier is irreversible, which discourages me a bit. And last but not least, Qtier is not available for my legacy RAID 5 volume, as far as I know. Sorry @nibbles if I've caused a misunderstanding in that regard; if possible I'd like to keep my RAID 5 and avoid setting up my NAS from scratch.

@ChuckPa, may I ask why your Plex Server isn't installed on your QTS volume, which offers more speed and protection than a single SSD? Do you have experience with Plex Server on an SSD and QTS on a traditional HDD? Does Plex benefit from QTS also being on an SSD?

Once again, thank you very much and best regards!

On 4.4.1 they changed Qtier so you can add and remove it on existing storage pools, IIRC. But like I mentioned, I'd do a clean install in such a way as to preserve your RAID 5 data but not the RAID array itself. Chuck will help you make the best decisions :)

I put my PMS installation out on the expansion 2.5" slot for several reasons:

  1. The mechanical aspect: when a change is needed, the 2.5" drive only means backing up, dismounting in software, and copying the PMS library back (“Migrate To”), versus dismantling the unit (see the rsync sketch after this list).
  2. Endurance of the device versus cost.
  3. Functional loading (tasks) I need it for:
    a. Always installing PMS versions and testing them.
    b. My own packaging development in the VM needs an SSD.
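
The backup step in item 1 can be as simple as an rsync. A minimal sketch, assuming the QPKG lives under the usual /share/CACHEDEVx_DATA/.qpkg/PlexMediaServer path (both paths here are examples; adjust to your volumes, and stop PMS in the App Center first so the database isn't written to mid-copy):

# Copy the whole Plex library to another volume before moving it
# (the trailing slash on the source copies its contents, not the folder itself)
rsync -a "/share/CACHEDEV3_DATA/.qpkg/PlexMediaServer/Library/" \
         "/share/CACHEDEV2_DATA/Backup/PlexLibrary/"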

Mirrored M.2 SSDs, as created by QTS, will wear out at the same time, because the same data is written to both simultaneously. The true performance gain comes when reading: QTS reads in a striped manner. Measured performance: 535 MB/sec write, 1.05 GB/sec read (ports 1 & 2 below).

Package development and PMS testing speed (building new libraries) is limited by my internet (44 Mbps). 535 MB/sec is more than adequate for this task.

[~] # qcli_storage -Tt force=1
fio test command for physical disk: /sbin/fio --filename=test_device --direct=1 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
fio test command for RAID: /sbin/fio --filename=test_device --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
Start testing!
Performance test is finished 100.000%...
Enclosure  Port  Sys_Name      Throughput    RAID        RAID_Type    RAID_Throughput   Pool  
NAS_HOST   1     /dev/sda      534.47 MB/s   /dev/md1    RAID 1       1.05 GB/s         288   
NAS_HOST   2     /dev/sdb      538.58 MB/s   /dev/md1    RAID 1       1.05 GB/s         288   
NAS_HOST   3     /dev/sde      535.46 MB/s   --          --           --                --    
NAS_HOST   4     /dev/sdc      133.06 MB/s   /dev/md3    Single       138.17 MB/s       290   
NAS_HOST   5     /dev/sdf      531.88 MB/s   /dev/md4    Single       534.26 MB/s       291   
NAS_HOST   6     /dev/sdd      108.57 MB/s   /dev/md5    Single       108.86 MB/s       292   
NAS_HOST   7     /dev/sdh      201.15 MB/s   /dev/md2    RAID 5       1.20 GB/s         289   
NAS_HOST   8     /dev/sdg      192.99 MB/s   /dev/md2    RAID 5       1.20 GB/s         289   
NAS_HOST   9     /dev/sdn      190.75 MB/s   /dev/md2    RAID 5       1.20 GB/s         289   
NAS_HOST   10    /dev/sdm      194.94 MB/s   /dev/md2    RAID 5       1.20 GB/s         289   
NAS_HOST   11    /dev/sdk      201.91 MB/s   /dev/md2    RAID 5       1.20 GB/s         289   
NAS_HOST   12    /dev/sdl      193.06 MB/s   /dev/md2    RAID 5       1.20 GB/s         289   
NAS_HOST   13    /dev/sdi      186.38 MB/s   /dev/md2    RAID 5       1.20 GB/s         289   
NAS_HOST   14    /dev/sdj      194.22 MB/s   /dev/md2    RAID 5       1.20 GB/s         289   
[~] # 
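
If you want to check write speed the same way, a variant of the fio read command above should do it. A sketch, with the test-file path and size as examples (point fio at a scratch file on the volume under test, never at a raw /dev device holding data):

# Sequential write test against a scratch file
/sbin/fio --filename=/share/CACHEDEV3_DATA/fio-test.bin --size=4G \
          --direct=1 --rw=write --bs=1M --runtime=15 --name=test-write \
          --ioengine=libaio --iodepth=32
# Remove the scratch file afterwards
rm /share/CACHEDEV3_DATA/fio-test.bin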

The breakdown results are:

  1. Changing the M.2 SSDs is a chore.
    My TVS-1282 has Samsung 860 1TB M.2 SSDs in it. They are a pain to install.
    Because of all the PMS installations (upgrades, downgrades, adding test data) I do for users, my PMS installation is pretty brutally used.

The performance gained (535 vs 1016 MB/sec) is insignificant compared to the time required to transmit the info from PMS to the client (even at 5 Gbps) and for the client to render the display (the slowest links in the chain).

In their QTS-created RAID configuration (mirror), both drives are written simultaneously, therefore both will wear out at the same time. Because every write is duplicated, the total observed write performance is ~535 MB/sec; the speed gain comes from reading in a striped manner.
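
You can confirm that layout yourself: QTS builds its volumes on standard Linux md, which is where the /dev/md names in the table above come from:

# List the md arrays QTS assembled; the SSD mirror shows up as raid1 (md1 above)
cat /proc/mdstat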

Given that the write speed of a 2.5" SSD is the same as the M.2 (both SATA-3), there is no net performance gain when building PMS databases (which I do a lot of).

My evaluation:

  1. Changing a 2.5" SSD in the expansion bay is infinitely easier than pulling out the unit, opening the cover and removing SSDs from the motherboard.
  2. Cost: a 2.5" Samsung 860 EVO 1TB is $129 (B&H Camera); an M.2 Samsung 860 EVO 1TB is $169.
  3. Performance is the same; both are SATA-3 devices.
  4. Endurance is the same.

Final decision for me:

  1. Ease of replacement: 5 minutes vs 1 hr.
  2. Cost of replacement: two M.2 SSDs ($169 each) vs one 2.5" SSD ($129).
  3. The database read speed difference, single SSD (535 MB/sec) vs CACHEDEV1 RAID SSD (1.05 GB/sec), is insignificant noise when compared to the time required to send the data to the player and for the player to render that information on the display.
  4. Database write speed is identical, as both have identical write speeds.

Thanks for showing the drive test.
How much over-provisioning are you using on the Samsung 860? I went with about 30% on a different brand, based on the QNAP test results.

Are you running the M.2 NVMe type?

When I saw Samsung made an 860 Pro with a longer warranty, I couldn’t decide what to buy.
I’d like to visit B&H some day.

Since it's a static volume, there is no over-provisioning. You can't have more space than the device offers, true? Therefore, why distort reality?

All my devices are SATA-3. If I had NVMe, you would see performance in the 3 GB/sec and above range for RAID.

I opted for the 600 TBW devices. I am on a fixed income here, so the lifespan-vs-cost facet is very much a factor: buy the best I can afford that lasts the longest and has the best performance in its class while still meeting my requirements.
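
To watch the endurance actually being consumed, SMART attributes are the thing to check. A sketch, assuming smartctl is available on your NAS (e.g. via Entware; QTS also shows SMART data in its Storage UI) and with the device path as an example:

# Print SMART attributes and pick out the wear counters a Samsung SATA SSD reports
smartctl -A /dev/sda | grep -Ei 'wear|lbas_written'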

Were you referring to SSD over-provisioning, as in only using 400 GB of a 500 GB SSD to allow for better wear leveling and more consistent performance, or were you referring to storage over-provisioning, where your NAS presents more GB of logical storage than you have physical storage to back it up with at this moment?

Both can apply :)

Yes. I was talking about SSD over-provisioning as run by the QNAP SSD Profiling Tool application and set for my thick volume.

You can set an over-provisioning ratio for SSD caches, Qtier pools, and static volumes (between 1% and 60%) in addition to the vendor-defined value.
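
As a quick worked example of what that ratio means (numbers are illustrative, plain shell arithmetic):

# A 30% ratio on a nominal 1000 GB SSD reserves 300 GB for the controller
SSD_GB=1000; OP_PCT=30
echo "reserved: $((SSD_GB * OP_PCT / 100)) GB"
echo "usable:   $((SSD_GB * (100 - OP_PCT) / 100)) GB"
# prints: reserved: 300 GB, usable: 700 GB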

Provided you don't use every single block on the device and then abuse a small subset by writing to them all the time, the device's firmware, in conjunction with a monthly FS-trim (essential for long life), will keep you in good shape. Just be cognizant. If you want to provision this way so you don't make a mistake, then that's OK. I know I'm never going to have a 1TB Plex metadata database.
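
The FS-trim itself is the standard Linux fstrim; a sketch of a manual run, with the mount point as an example (QTS normally handles this on its own schedule):

# Tell the SSD which blocks the filesystem no longer uses (-v reports how much)
fstrim -v /share/CACHEDEV3_DATA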

Thank you @ChuckPa for the detailed explanation of your setup. Since my use case isn't that sophisticated, I went for two M.2 SSDs as a static RAID 0 volume for my Plex installation. I was lazy and didn't want to do a fresh QTS install, and therefore left QTS on the RAID 5 HDD legacy volume. Nevertheless I achieved my goal, and Plex is noticeably faster on all my clients (iOS, Web and Shield TV). Loading messages when selecting a movie are very rare now.

Once again, thanks to everyone and best regards!

I have the same problem as you, but I think I will do a fresh QTS reinstall. What did the change gain you?
