You do not have a corrupt database … You have an extremely SLOW database. PMS cannot operate this way.
Nov 16, 2018 15:34:49.638 [0x7f9fdc7ff700] WARN - SLOW QUERY: It took 250.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.638 [0x7f9fec3ff700] WARN - SLOW QUERY: It took 350.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.640 [0x7fa0067ff700] WARN - SLOW QUERY: It took 280.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.640 [0x7f9fe4bff700] WARN - SLOW QUERY: It took 320.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.642 [0x7f9ffb3fd700] WARN - SLOW QUERY: It took 330.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.644 [0x7f9fde7ff700] WARN - SLOW QUERY: It took 300.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.644 [0x7f9fe2bff700] WARN - SLOW QUERY: It took 340.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.644 [0x7fa005ffe700] WARN - SLOW QUERY: It took 340.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.650 [0x7fa0007ff700] WARN - SLOW QUERY: It took 410.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.650 [0x7f9fe5bff700] WARN - SLOW QUERY: It took 350.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.652 [0x7f9fe07ff700] WARN - SLOW QUERY: It took 330.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.653 [0x7f9fff7fd700] WARN - SLOW QUERY: It took 400.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.653 [0x7f9ffa3ff700] WARN - SLOW QUERY: It took 310.000000 ms to retrieve 1 items.
Nov 16, 2018 15:34:49.668 [0x7f9feebff700] WARN - SLOW QUERY: It took 330.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.669 [0x7f9fe7fff700] WARN - SLOW QUERY: It took 470.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.680 [0x7f9fc8ff6700] WARN - SLOW QUERY: It took 360.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.692 [0x7f9fd57ff700] WARN - SLOW QUERY: It took 220.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.695 [0x7fa0077ff700] WARN - SLOW QUERY: It took 290.000000 ms to retrieve 14 items.
Nov 16, 2018 15:34:49.708 [0x7f9fd67ff700] WARN - SLOW QUERY: It took 390.000000 ms to retrieve 14 items.
Almost the entire log is filled with these messages.
We did this not too long ago… should it be a frequent task? If so, should I script something to run every day or every 12 hours to optimize while PMS runs?
# Stop Service for Work to be Performed
service plexmediaserver stop
# Change to Databases Directory
cd /var/lib/plexmediaserver/Library/Application\ Support/Plex\ Media\ Server/Plug-in\ Support/Databases
# Rename old db as a backup and remove the WAL/SHM files plus any previous backups left by the PMS app
mv com.plexapp.plugins.library.db com.plexapp.plugins.library.prev.db
rm com.plexapp.plugins.library.db-*
# Extract data from old db to dbdump
sqlite3 com.plexapp.plugins.library.prev.db .dump > dbdump.sql
# Import db dump to new db
sqlite3 com.plexapp.plugins.library.db < dbdump.sql
# Change ownership to match plexmediaserver.service
chown plex:plex com.plexapp.plugins.library.db
# Start Service
service plexmediaserver start
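If you do end up scheduling it, one simple way is a cron entry. This is only a sketch: the wrapper path and cadence below are assumptions, and keep in mind the script stops the server for the duration of the rebuild.
# /etc/cron.d/plex-db-rebuild (hypothetical file) - run the commands above,
# saved as /usr/local/bin/plex-db-rebuild.sh, every 12 hours as root
0 */12 * * * root /usr/local/bin/plex-db-rebuild.sh >> /var/log/plex-db-rebuild.log 2>&1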
I can script the optimization to run more frequently (that's what I'm working on currently). The script ChuckPA gave me (also shown above) fixes the issue immediately. I think I would have this issue even on bare metal, because a large amount of data is being added regardless of the subsystem it's on.
Granted, bare metal would eliminate any latency to the drive for reads/writes, but I have a 4-gigabit (approximately 500 MB/s) connection to the drive dedicated to Plex, and an SSD would at best manage about 712 MB/s read speed. So the current setup should definitely be sufficient, and I know many people run Plex virtualized. The issue is the size of the library (I delete nothing) and the amount of data being added.
I have the trash emptied each time my library scans (every 30 minutes), and I need to figure out cleaning the bundles. (@ChuckPa, is there a way to do that via script?)
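From what I can find in the unofficial API notes, both can apparently be triggered over the local HTTP API; I haven't verified these endpoints myself, so treat this as an assumption (the token and section ID are placeholders):
# Hypothetical token and section ID - replace with your own values
PLEX_TOKEN="XXXXXXXX"
SECTION_ID="1"
# Empty trash for one library section
curl -s -X PUT "http://127.0.0.1:32400/library/sections/${SECTION_ID}/emptyTrash?X-Plex-Token=${PLEX_TOKEN}"
# Clean bundles
curl -s -X PUT "http://127.0.0.1:32400/library/clean/bundles?X-Plex-Token=${PLEX_TOKEN}"
# Optimize the database
curl -s -X PUT "http://127.0.0.1:32400/library/optimize?X-Plex-Token=${PLEX_TOKEN}"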
At this stage, I’ll setup a good maintenance plan with scripts and see what happens! Ideally, I solve this as someday, someone else may run into the same issue and replacing what the OS is hosted on can be fairly costly!
@erik.zeitz, above you indicate you're running Ubuntu. Open an htop window on the VM that Plex runs on and monitor the threads.
Watch for red D's in the status (S) column; that means the VM does not have enough IO. An occasional short-lived D is not a big deal, but if there are a lot of them, or threads stay in D for long periods of time, then you are definitely seeing insufficient IO.
And to correct myself, it is the 'state' column (not status).
Not sure if it's visible by default or not (it probably is).
Another indicator of high IO load is the load average. If the load average is high, especially while CPU usage is low, then you are most likely waiting on disk IO.
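If you'd rather check from a plain shell than htop, something like this (standard procps tools, nothing Plex-specific) shows the load averages and any tasks currently stuck in uninterruptible sleep:
# Show uptime plus the 1/5/15-minute load averages
uptime
# List any processes currently in state D (uninterruptible sleep),
# which almost always means they are blocked waiting on IO
ps -eo state,pid,user,comm | awk '$1 == "D"'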
A bunch of green R's and a whole bunch of S's, but no red D's.
Average load is steady at around 15. The CPUs aren't being taxed. I assumed load average is bad if you're upwards of 80 (or approaching 100), but I never really looked into it.
I also ran a test using ATTO Disk Benchmark and had minimal IO loss over my iSCSI network. So far so good!