Server optimization

Hello folks,

I’ve been running Plex successfully for many years now :). I’m looking for feedback on server performance and any other hardware recommendations.

Let me give you my library specs:

About 5,000 movies and over 1,600 TV shows. The movies are split between 2 libraries, one for high-quality encodes and the other for lower-quality ones. Mind you, there are duplicates between the two, and one is larger than the other.

The problem I am facing is that clients sometimes timeout when browsing the library.

Mind you, my server sees at most 5-6 concurrent streams (I think the maximum was 8), so it’s not that busy - load can’t be the cause.

For a few years, my main Plex server has been running on an ancient Atom-class box with a Celeron J3455.
The box has 16 GB of RAM. The reason I’m running on this particular box is its GPU - it can hardware-decode HEVC.

Recently, I upgraded to a different system with an i7-1165G7 and 32 GB of RAM - way more powerful than the old box.

The performance has marginally improved, but not as much as I expected.

My working theory is that the Plex database has become corrupted after running for so many years. To test that, I am rebuilding it on another server, but without any thumbnails or intro markers.

If this doesn’t work, I will try to create a RAM disk and put the Plex database there - it will be tricky, but hopefully I’ll get it to work.

Are there other things I should try? Is there any other way to get the Plex DB loaded into memory?
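
For reference, the rough plan for the RAM disk looks like this (untested sketch; paths assume a default Linux package install and may differ on your setup):

    # Untested sketch: move the Plex databases onto a tmpfs RAM disk.
    # Paths assume a default Linux package install; adjust to your setup.
    PLEX_DB="/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases"

    sudo systemctl stop plexmediaserver

    # size the tmpfs with headroom for the WAL files
    sudo mkdir -p /mnt/plexdb-ram
    sudo mount -t tmpfs -o size=8g tmpfs /mnt/plexdb-ram

    # copy the databases in, then bind-mount the RAM copy over the original path
    sudo rsync -a "$PLEX_DB/" /mnt/plexdb-ram/
    sudo mount --bind /mnt/plexdb-ram "$PLEX_DB"

    sudo systemctl start plexmediaserver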

Seems my database wasn’t corrupt after all; the size came from the intro detection data.
I took the next step and stored the database on a RAM disk. This actually seems to have had the best results. It scares me a bit: I initially allocated 4 GB of RAM (for my 3 GB database), then increased it to 8 GB because of the write-ahead log (WAL) files. Is there any way to disable those now?

-rw-r--r--  1 plex plex  2.2G Oct  6 05:10 com.plexapp.plugins.library.blobs.db
-rw-r--r--  1 plex plex   32K Oct  6 06:32 com.plexapp.plugins.library.blobs.db-shm
-rw-r--r--  1 plex plex  2.3G Oct  6 06:25 com.plexapp.plugins.library.blobs.db-wal
-rw-r--r--  1 plex plex  690M Oct  6 06:37 com.plexapp.plugins.library.db
-rw-r--r--  1 plex plex   96K Oct  6 06:37 com.plexapp.plugins.library.db-shm
-rw-r--r--  1 plex plex  706M Oct  6 06:37 com.plexapp.plugins.library.db-wal
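
What I plan to try first is checkpointing the WAL back into the main file with PMS stopped - a sketch, using the SQLite shell that Plex bundles (the path below is the default Linux location; a stock sqlite3 is riskier on these files):

    # Sketch: shrink the WAL by checkpointing it into the main database.
    # Run only with PMS stopped. Path below is the default Linux location.
    PLEX_SQLITE="/usr/lib/plexmediaserver/Plex SQLite"
    DB="/path/to/Databases/com.plexapp.plugins.library.db"   # adjust to your setup

    "$PLEX_SQLITE" "$DB" "PRAGMA wal_checkpoint(TRUNCATE);"

    # WAL mode itself could be disabled with "PRAGMA journal_mode=DELETE;",
    # but PMS may simply re-enable it on startup, so treat that as experimental.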

Turns out those WAL files were so large because they were corrupted. I restored them from an earlier backup and all is good.
Funny thing is, I still get “[Req#497d] Waited one whole second for a busy database.” - on the client this translates into a timeout (when loading some media item) or a loading screen.

Don’t know if you still need performance help, but something that helped me is putting the SQLite database on a faster disk. I have my media library on spinning disks, but I put the SQLite DB on NVMe, and that greatly improves access and searches. I really wish I could move the DB to MariaDB or PostgreSQL or something, because I’ve definitely started to feel a performance bottleneck on DB queries.

Thanks - I’ve had the DB on an SSD/NVMe for a while now. I just moved it to a RAM disk, which should be the fastest option.

You are brave and are living the dream. I considered suggesting a RAM disk, but I didn’t want to push you toward a risky solution in case you weren’t ready for that risk.

How is your performance (specifically search/library loading), then? Does the ramdisk make a big difference? I’ve been a little too scared to try it, but tales of high performance might win me over.

And did the WAL rebuild and move to ramdisk fix your timeouts?

It improved things a lot, but the timeouts still happen.
I have a UPS. I also stumbled across some scripts that persist the RAM disk: the script listens for inotify events, and whenever a file is closed it is copied back to disk.
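
The gist of the script is something like this (simplified sketch; the paths are examples, and inotifywait comes from the inotify-tools package):

    #!/bin/bash
    # Simplified sketch: whenever a file inside the RAM disk is closed after
    # writing, copy it back to an on-disk mirror. Paths are examples.
    RAMDISK=/mnt/plexdb-ram
    MIRROR=/var/backups/plexdb-live

    inotifywait -m -r -e close_write --format '%w%f' "$RAMDISK" |
    while read -r file; do
        rel="${file#"$RAMDISK"/}"                 # path relative to the RAM disk
        mkdir -p "$MIRROR/$(dirname "$rel")"
        cp -a "$file" "$MIRROR/$rel"
    done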

The corruption I faced was actually caused by my RAM disk testing :D. Back when the DB was on the SSD I only speculated it was corrupt - turns out I was wrong.

Don’t know if you’re interested, but for a small number of files, monitoring is easier (IMO) with systemd path units (systemd.path). I have one set up to sync my Movies when the file is closed for writing. That exact trigger might not work for your use case, since I doubt the DB is ever closed for writing, but I believe you can have it fire on modifications (writes) instead. It still uses inotify under the hood, so don’t expect a performance benefit, but it may mean less overhead than a persistent bash script, since systemd is already tracking the events.

The advantage of this approach, I find, is that I can write the sync script as a oneshot service and then trigger it both from a .path unit (when a file changes) and a .timer unit (after a set period of time), without baking that logic into a persistent script.
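
A rough sketch of the setup (all unit names, paths, and times here are made up for the example):

    # Sketch: a oneshot sync service triggered by both a .path unit and a
    # .timer unit. All names and paths are examples.
    cat <<'EOF' | sudo tee /etc/systemd/system/plexdb-sync.service
    [Unit]
    Description=Sync Plex DB from RAM disk to persistent storage

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/plexdb-sync.sh
    EOF

    cat <<'EOF' | sudo tee /etc/systemd/system/plexdb-sync.path
    [Unit]
    Description=Sync when the database file is written

    [Path]
    # PathModified= fires on writes; PathChanged= waits for close-after-write
    PathModified=/mnt/plexdb-ram/com.plexapp.plugins.library.db

    [Install]
    WantedBy=multi-user.target
    EOF

    cat <<'EOF' | sudo tee /etc/systemd/system/plexdb-sync.timer
    [Unit]
    Description=Sync on a fixed schedule as a fallback

    [Timer]
    OnUnitActiveSec=30min

    [Install]
    WantedBy=timers.target
    EOF

    sudo systemctl daemon-reload
    sudo systemctl enable --now plexdb-sync.path plexdb-sync.timer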

Obviously this solution scales very poorly for deeply nested directories or a large number of folders, but for your use case it might work.

Food for thought.

To all:

I have a script-tool which does DB repair/cleanup

I wrote it because:

  1. PMS going ‘weird’, with WAL / SHM files getting stupidly large for no apparent reason.
  2. Inexplicable crashes & hangs.
  3. Updates taking forever to complete.
  4. Various other issues like those discussed above.

I currently autodetect Linux workstation/server and a few NAS boxes (QNAP & Syno are most stable).

I’ve not yet stabilized it running inside a PMS Docker container with PMS stopped.

It has fixed everything I’ve tried it on so far related to DB problems.

If interested, please let me know and I’ll share via PM.

Please do share :slight_smile: About running it in a container - I use a small hack: I mount the real config at another path and let Plex run with a brand-new ephemeral config; then I can use the DB tools inside the container to manipulate the database.
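
Roughly like this (sketch; the image name and host paths are examples - /config is left ephemeral and the real config is mounted at a side path):

    # Sketch of the hack: give Plex an empty ephemeral /config and mount the
    # real config at a side path. Image name and host paths are examples.
    docker run -d --name plex-dbfix \
      -v /tmp/plex-ephemeral:/config \
      -v /srv/plex/config:/config-real \
      plexinc/pms-docker

    # then operate on the real database with the container's bundled DB tools
    docker exec -it plex-dbfix "/usr/lib/plexmediaserver/Plex SQLite" \
      "/config-real/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db" \
      "PRAGMA integrity_check;"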

Hi. Where can I obtain your tool to repair the DB? I’d love to get this problem solved, as it’s been driving me insane.

I tried PM’ing you but it says you aren’t accepting messages at the moment.

Thanks!

Would be great if you could send me your script, many thanks in advance!

Right now it syncs the files every time a WAL file is closed :slight_smile: - I will probably have to change the script to run rsync on a schedule, about every half hour.
I also create database backups every 3 days, stored outside the RAM disk, so the risk of massive data loss is minimal.
I lost my wife’s progress on some shows - I can live with that :smiley:
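
The scheduled version would be something like this in cron (sketch; paths are examples, and note that a copy of a live SQLite file can be inconsistent - the periodic offline backup is the real safety net):

    # Sketch crontab: sync the RAM disk every 30 minutes, tarball roughly
    # every 3 days. Paths are examples; a copy taken while PMS is writing
    # may be inconsistent.
    */30 * * * *  rsync -a /mnt/plexdb-ram/ /var/backups/plexdb-live/
    0 3 */3 * *   tar -C /mnt/plexdb-ram -czf /var/backups/plexdb-$(date +\%F).tar.gz .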

If I may add some info?

When PMS performs a proper shutdown AND there is no database damage:

There will be no WAL or SHM files. SQLite merges those caches back into the database as part of closing it.

This having been said, has consideration been given to:

  1. After maintenance runs
  2. Stop PMS
  3. Snapshot your databases (this will also capture the newly created backup)
  4. Start PMS

Now there’s no need to monitor PMS in real time.
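
In script form, the whole cycle is just this (sketch; the service name and path match a standard Linux package install, adjust as needed):

    #!/bin/bash
    # Sketch: snapshot the databases only after a clean PMS shutdown,
    # when the WAL/SHM files have been merged back into the main DB.
    DB_DIR="/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases"

    systemctl stop plexmediaserver    # clean shutdown merges WAL/SHM back in
    tar -C "$DB_DIR" -czf "/var/backups/plexdb-$(date +%F).tar.gz" .
    systemctl start plexmediaserver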

Plex isn’t HA, so if my home loses power or my server crashes, I will have data loss/corruption. This has already happened many times (granted, most of the loss can be rebuilt or ignored, but it still happens).

I’ve had this particular server instance and its data since 2017, through a lot of hardware crashes and database rebuilds. In the past 2-3 years I’ve managed to stabilize the hardware, so I don’t get many failures.

I initially kept the database on a remote LVM store that I snapshotted regularly. I then moved the database (along with all metadata) to a local SSD.

Now, due to the growing database size and the growing number of media items - I’m a hoarder, I don’t really delete stuff - I get concurrency issues.

Since I can’t scale horizontally, I’m scaling vertically, and since moving from a SATA SSD to a PCIe 3.0 x4 SSD didn’t do much for performance, I’m moving to a RAM disk now.

After applying @ChuckPa’s script from here, I’ve noticed real improvements!
It seems rebuilding the database is needed every once in a while, and it actually makes a difference.
I think Plex should do this automatically during its scheduled maintenance!
