Plex Media Server: Ongoing Network Issue

Erik,

You do not have a corrupt database … you have an extremely SLOW database. PMS cannot operate this way.

Nov 16, 2018 15:34:49.638 [0x7f9fdc7ff700] WARN - SLOW QUERY: It took 250.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.638 [0x7f9fec3ff700] WARN - SLOW QUERY: It took 350.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.640 [0x7fa0067ff700] WARN - SLOW QUERY: It took 280.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.640 [0x7f9fe4bff700] WARN - SLOW QUERY: It took 320.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.642 [0x7f9ffb3fd700] WARN - SLOW QUERY: It took 330.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.644 [0x7f9fde7ff700] WARN - SLOW QUERY: It took 300.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.644 [0x7f9fe2bff700] WARN - SLOW QUERY: It took 340.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.644 [0x7fa005ffe700] WARN - SLOW QUERY: It took 340.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.650 [0x7fa0007ff700] WARN - SLOW QUERY: It took 410.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.650 [0x7f9fe5bff700] WARN - SLOW QUERY: It took 350.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.652 [0x7f9fe07ff700] WARN - SLOW QUERY: It took 330.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.653 [0x7f9fff7fd700] WARN - SLOW QUERY: It took 400.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.653 [0x7f9ffa3ff700] WARN - SLOW QUERY: It took 310.000000 ms to retrieve 1 items.
Nov 16, 2018 15:34:49.668 [0x7f9feebff700] WARN - SLOW QUERY: It took 330.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.669 [0x7f9fe7fff700] WARN - SLOW QUERY: It took 470.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.680 [0x7f9fc8ff6700] WARN - SLOW QUERY: It took 360.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.692 [0x7f9fd57ff700] WARN - SLOW QUERY: It took 220.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.695 [0x7fa0077ff700] WARN - SLOW QUERY: It took 290.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.708 [0x7f9fd67ff700] WARN - SLOW QUERY: It took 390.000000 ms to retrieve 14 items.

Almost the entire log is filled with these messages.

Do you have command line access to the server?

I do have command line access to the server. Is there something you want me to run?

I would like you to manually optimize the server database.

Here are the instructions to run once you are signed on.

We did this not too long ago… should it be a frequent task? If so, should I script something to run every day or every 12 hours to optimize while PMS runs?

Are you adding a lot of media?

On a weekly basis, a couple hundred GB.

That is the problem. You need to optimize much more frequently.

Do not add more than a few hundred items (1,000 maximum) at a time without optimizing.

How often would you recommend a scripted process to optimize? Would optimizing more often than needed cause me issues? (Every hour or two?)

If your media additions are automated, can you set a counter to trigger it?

The database becomes fragmented based on items-added count. The size of the files added does not matter.

Music albums are the hardest because each album has multiple tracks. Each track is indexed.

I’m not dealing with Music (when I do, I manually locate/add so I can handle that optimization manually).

I can see about setting a counter but I’m not that familiar with Linux. I’ll dig in tonight and see what I can come up with.

Do you know of any good tools to help me with this in Ubuntu 18.04? The media location is mounted in Ubuntu using ext4.

There are no real tools per se.

You will need a shell script which does the following:

  1. Counts all the files in your media directories
  2. Subtracts the previous file count for those same directories (saved in a file) from that total
  3. When the difference is greater than 500 (for example), invoke manual optimization
  4. Update the counter saved in the file for the next scan
  5. Sleep until the next scan.
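For reference, here is a minimal bash sketch of that loop. The media path, threshold, scan interval, and the call to the optimization routine (a placeholder optimize_db.sh — e.g. the curl method or the dump/import script further down) are all assumptions you would adapt to your setup:

#!/bin/bash
# Sketch of a counter-based optimize trigger (paths, threshold, and interval are assumptions)
MEDIA_DIR="/mnt/media"                    # your media mount point
COUNT_FILE="/var/tmp/plex_media_count"    # where the previous file count is saved
THRESHOLD=500                             # items added before an optimize is triggered
INTERVAL=3600                             # seconds to sleep between scans

while true; do
    # 1. Count all files in the media directories
    current=$(find "$MEDIA_DIR" -type f | wc -l)

    # 2. Read the previous count, seeding the file on the first run
    [ -f "$COUNT_FILE" ] || echo "$current" > "$COUNT_FILE"
    previous=$(cat "$COUNT_FILE")

    # 3. If more than THRESHOLD items have been added, invoke the manual optimization
    if [ $((current - previous)) -gt "$THRESHOLD" ]; then
        /usr/local/bin/optimize_db.sh     # placeholder for your optimize routine
        # 4. Update the saved counter for the next scan
        echo "$current" > "$COUNT_FILE"
    fi

    # 5. Sleep until the next scan
    sleep "$INTERVAL"
done

Updating the saved count only after an optimize lets smaller additions accumulate across scans until they reach the threshold.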

Working through it now…

Out of curiosity, what’s the point of this line:

rm com.plexapp.plugins.library.db-???

It doesn’t work. Should it maybe be “*” instead of “???”?

Thanks for all your help!

Erik,
Sorry for the delay… (dinner here).

Please use the curl method only until I have the SQL method edited.

The code below works:

# Stop Service for Work to be Performed
service plexmediaserver stop

# Change to Databases Directory
cd /var/lib/plexmediaserver/Library/Application\ Support/Plex\ Media\ Server/Plug-in\ Support/Databases

# Backup old db and remove any previous backups by PMS App
mv com.plexapp.plugins.library.db com.plexapp.plugins.library.prev.db
rm com.plexapp.plugins.library.db-*

# Extract data from old db to dbdump
sqlite3 com.plexapp.plugins.library.prev.db .dump > dbdump.sql

# Import db dump to new db
sqlite3 com.plexapp.plugins.library.db < dbdump.sql

# Change ownership to match plexmediaserver.service
chown plex:plex com.plexapp.plugins.library.db

# Start Service
service plexmediaserver start
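Not part of the script above, but one optional sanity check before restarting the service is SQLite’s built-in integrity check on the rebuilt database:

# Should print "ok" if the rebuilt database is healthy
sqlite3 com.plexapp.plugins.library.db "PRAGMA integrity_check;"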

How big is your library database?

That’s your problem.

If the built-in scheduled maintenance isn’t enough, what you really need to do is increase the IOPS on your Plex drive and/or run Plex on bare metal.

https://app.plex.tv/desktop# > settings > scheduled tasks

If you don’t already have one, SSDs work well. If your Plex is not on an SSD, your drives probably need some heavy-duty defragging.
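For what it’s worth, ext4 can report its own fragmentation without changing anything; a quick read-only check, assuming the Plex data lives in the default location:

# Report (but do not fix) fragmentation of the Plex data directory; e4defrag ships with e2fsprogs
sudo e4defrag -c "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server"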

If you’re already on an SSD, then you need to consider a larger and/or better-performing SSD, or a PCIe-based SSD.

Something to reduce IO latency.

Keep the transcode folder off the SSD (I transcode to RAM via tmpfs).
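A minimal sketch of the tmpfs approach, assuming 2 GB is enough headroom for your concurrent transcodes and that /transcode is the directory you then point Plex at (Settings > Transcoder > Transcoder temporary directory):

# /etc/fstab entry for a RAM-backed transcode directory (size is an assumption; tune to your usage)
tmpfs   /transcode   tmpfs   defaults,size=2G,mode=1777   0   0

# Create and mount it once, then select /transcode in the Plex transcoder settings
sudo mkdir -p /transcode
sudo mount /transcode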

You should also ‘Empty Trash’ and ‘Clean Bundles’ on a regular basis along with the DB optimization.
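If you want those in the same script as the optimize, they can all be triggered over the server’s HTTP API. A sketch only — the endpoints below are my best understanding of what Plex Web calls (verify against your own server), and PLEX_TOKEN_HERE plus the section id 1 are placeholders:

PMS="http://127.0.0.1:32400"
TOKEN="X-Plex-Token=PLEX_TOKEN_HERE"                            # placeholder: your own token

curl -s -X PUT "$PMS/library/sections/1/emptyTrash?$TOKEN"      # empty trash for section id 1
curl -s -X PUT "$PMS/library/clean/bundles?$TOKEN"              # clean bundles
curl -s -X PUT "$PMS/library/optimize?async=1&$TOKEN"           # optimize database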

In any case, and especially for large and rapidly expanding libraries, you need IO IO IO.

I can script the optimization to run more frequently (what I’m working on currently). The script ChuckPA gave me (and also above) fixes the issue immediately. I think I would have this issue even on bare metal because a large amount of data is being added regardless of the subsystem it’s on.

Granted, bare metal would eliminate any latency to the drive for read/write, but I have a 4-gigabit (approx. 500 MB/s) connection to my drive dedicated to Plex. An SSD would be, in the best circumstance, 712 MB/s read speed. So the current setup would definitely be sufficient, and I know many people using virtual. The issue is the size of the library (I delete nothing) and the amount of data being added.

I have the trash emptied each time my library scans (every 30 minutes), and I need to figure out cleaning the bundles. (@ChuckPa, is there a way to do that via script?)

At this stage, I’ll set up a good maintenance plan with scripts and see what happens! Ideally I solve this, since someday someone else may run into the same issue, and replacing the hardware the OS is hosted on can be fairly costly!

Thanks!

The raw throughput or bandwidth to the drive is much less important than the latency.

Another good thread: theoretically speaking, how large of a library or libraries too large for plex to handle?

@erik.zeitz Above you indicate you’re running Ubuntu. Open an htop window on the VM that Plex runs on and monitor the threads.

Watch for red D’s in the status (S) column; that means the VM does not have enough IO. An occasional few short-lived D’s are not a big deal, but if there are a lot, or they show D for long periods of time, then you are definitely seeing insufficient IO.
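If you want a quick check outside of htop, a one-liner with stock ps/awk (nothing Plex-specific) lists whatever is currently stuck in uninterruptible (D) sleep:

# Show processes currently in uninterruptible sleep, i.e. blocked on IO
ps -eo state,pid,comm | awk '$1 ~ /^D/'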

I’ll definitely monitor, but I’ve used htop for weeks and never seen a red D, lol.

OK, that’s good if true.

And to correct myself, it is the ‘state’ column (not ‘status’).

Not sure if it’s visible by default or not (it probably is).

[screenshot: htop with the state column visible]

Another indicator of high IO load is the load average. If the load average is high, especially while CPU is low, then you are most likely waiting on disk IO.
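A couple of stock commands show the same thing from the shell if you want to log it over time:

uptime        # 1/5/15-minute load averages
vmstat 1 5    # watch the 'wa' column: percentage of time spent waiting on IO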

Here is my view:

[screenshot: htop output from the VM]

A bunch of green R’s and a whole bunch of S’s, but no red D.

Average load is steady at around 15. CPUs aren’t being taxed. I assumed load average is bad if you’re upwards of 80 (or approaching 100), but never really looked into it.

I also ran a test using ATTO Disk Benchmark and had minimal IO loss over my iSCSI network. So far so good!