I’m hoping it’s OK to post this… I wanted to start a dialog with other users who have a massive Plex library, to find out what people have done to keep Plex running and optimized and to prevent crashes, database errors, and other malfunctions.
I run my Plex server on a pretty beefy Hetzner machine: AMD Ryzen 9 5950X 16-core, 128 GB RAM, running Ubuntu 20.04. The server has 2 x 3.84 TB NVMe SSDs configured in RAID 0.
Even with what I think is a really fast disk and CPU, I am seeing a lot of Plex database issues. For one, I see almost constant “Sqlite3: Sleeping for 200ms to retry busy DB.” errors. A quick search indicates the typical solution is to make sure your database is housed on a fast SSD, but as indicated above, I’m already doing that.
Based on information learned from this thread: What are some housekeeping things I can do to speed up my Plex - #32 by hthighway, I have increased the page size on the Plex database to 65536, which I believe is the maximum. I certainly have enough memory on this beast of a machine. I am also using the built-in Linux ramdisk for the transcoder by setting the transcoder temporary directory to /dev/shm.
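For anyone wanting to do the same, it’s roughly this (a sketch rather than exact commands; it assumes Plex Media Server is stopped, the default Linux database path, and a backup of the file before touching it):

cd '/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases'
cp com.plexapp.plugins.library.db com.plexapp.plugins.library.db.bak
sqlite3 com.plexapp.plugins.library.db <<'EOF'
PRAGMA journal_mode=DELETE;   -- page_size cannot be changed while the DB is in WAL mode
PRAGMA page_size=65536;       -- takes effect when the file is rebuilt
VACUUM;                       -- rebuilds the file at the new page size
PRAGMA journal_mode=WAL;      -- switch back to WAL when done
EOF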
Unfortunately, it seems default_cache_size is deprecated in SQLite, so that is no longer an option. Has anyone found any other database tweaks or optimization tricks to help huge databases perform better?
My current database file is 2.5GB, and the blobs database is 6.2GB. I don’t know if that’s just insanity, but like I said, this is also housing a really massive Plex library.
I ask this because there was a bug in PMS which impacted some users.
The result was a massive database which, in spite of all efforts, could not be reduced in size manually.
There is a test to see if the database is compromised but first it’s important to know how many media items are indexed.
Thanks for your reply, @ChuckPa. I was actually following that thread (if it’s the one I’m thinking of) and it didn’t apply to me. I don’t have any items that have a large number of extras attached to them.
I just added up the total number of movies and episodes in my Plex library, and I came up with 352,990 currently.
I’m happy to run the test, if you think it might apply to me. Thanks!
If any of the first 50 items with the highest counts has more than 100 extras, then you have the indications of the DB bloating problem.
Edit the database directory variable if needed.
[chuck@lizum bin.2045]$ cat PlexDBscan
#!/bin/bash

# Verify sqlite3 is installed
if [ "$(command -v sqlite3)" = "" ]; then
    echo "ERROR: 'sqlite3' not found."
    exit 1
fi

# EDIT this
DBDirectory='/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases'

# DO NOT EDIT BELOW THIS POINT
DB="$DBDirectory/com.plexapp.plugins.library.db"

# Verify we can read the DB
if [ ! -r "$DB" ]; then
    echo "ERROR: Cannot read database at \"$DB\""
    exit 2
fi

# List the 50 items with the most extras as "count|metadata_item_id" lines; "x" marks end of list
for i in $( echo 'select count(metadata_item_id),metadata_item_id from metadata_relations group by metadata_item_id order by count(metadata_item_id) desc limit 50;' | sqlite3 "$DB" ) x
do
    if [ "$i" != "x" ]; then
        # if not EOL, parse (Count|Item)
        Count=$(echo "$i" | awk -F'|' '{print $1}')
        Item=$(echo "$i" | awk -F'|' '{print $2}')
        if [ "$Count" -gt 100 ]; then
            # Now get title from DB and print output
            NameString="$(echo "select id, title, guid, refreshed_at from metadata_items where id = '$Item';" | sqlite3 "$DB")"
            Title="$(echo "$NameString" | awk -F'|' '{print $2}')"
            Updated="$(echo "$NameString" | awk -F'|' '{print $4}')"
            # Make pretty output
            echo "Item: $Item | Count: $Count | Title: \"$Title\" | Updated: $Updated"
        fi
    fi
done
[chuck@lizum bin.2046]$
Ahh… I edited the script and changed the count from 100 to 50 (I know it said not to edit below that point, but I felt naughty lol) … and now it gives me an output.
That’s correct; the most extras you have for any indexed item is 77.
As you saw in the script, I set the trigger at 100.
You don’t have the database bug which plagued those users. You just have alotta stuff
What’s the dd I/O performance for the NVMe SSD volume? 3+ GB/sec?
I have a lowly Gen 3 card with 2x Samsung 970 EVO PRO 1TB NVMe on it.
This is typical performance with mdadm RAID 0.
[chuck@glockner Ubuntu 20.04.2007]$ dd if=Ubuntu\ 20.04.vmdk of=/dev/null bs=4M
6830+1 records in
6830+1 records out
28650897408 bytes (29 GB, 27 GiB) copied, 10.4956 s, 2.7 GB/s
[chuck@glockner Ubuntu 20.04.2008]$
It’s not much faster in raw read than my main array, but it has zero seek time, so that’s a big boost for database and VM work (which is its purpose for me).
I wonder if there would be any net gain by performing a manual .dump from the DB and then recreating it by importing the .sql ASCII file.
In the past, it’s helped me more than Optimize. It might be something to consider here.
We are most likely fighting against the number of DB records which can be retrieved within the 200 ms.
From your logs, what is the DB speed? Are you seeing “SLOW QUERY” ?
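A quick way to get a feel for that is to count the occurrences in the current log (a rough sketch; the path assumes a default Linux install):

LOGS='/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Logs'
grep -c 'retry busy DB' "$LOGS/Plex Media Server.log"   # busy-database retries
grep -c 'SLOW QUERY' "$LOGS/Plex Media Server.log"      # queries PMS itself flagged as slow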
I’m seeing slow database access myself: tons of “waiting for busy database” in the log file. I’m on a SATA-based SSD, so not quite the performance of your NVMe. I’m also seeing tons of other errors in the log. I’m not sure if they’re related, but it causes Plex to go unresponsive at times.
My last scan of my database (between Movies/TV/Music) was sitting at just above 700k files. Once I get this next batch of my DVDs scanned and loaded, it could be close to 800k. Most of our watching is local, with occasional streaming via the mobile app; I find some lag when subtitles are burned in, either from a file or from an internet subtitle fetch. I’m on a thinned-out Linux Mint 20 setup (no office suite, no bling, nothing that could use resources). When streaming, the CPU is at 29-30% and memory is at a mere 12-14% usage.
Not sure setting the page size to 64K was the best choice. 4K or 8K to match disk I/O is usually better, and most of the rows in the database are pretty small, so you are reading a lot of data for no reason.
The dump and load will do more to “defrag” the database if that is the root issue, but it could be something else causing contention.
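If you want to check what a database is actually using, sqlite3 can report the page size and how many free pages the file is carrying (a sketch; point DB at your com.plexapp.plugins.library.db, ideally with PMS stopped):

DB='/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db'
sqlite3 "$DB" 'PRAGMA page_size;'        # bytes per page currently in use
sqlite3 "$DB" 'PRAGMA page_count;'       # total pages in the file
sqlite3 "$DB" 'PRAGMA freelist_count;'   # unused pages a VACUUM (or dump/reload) would reclaim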
I suggest considering a manual defrag because SQLite is a single-file database.
As such, records from all the tables are interwoven in the file (when reading it from line 1 to the last line like a text file).
Being interwoven introduces linking blocks in the file. It takes longer to (actively) read 100 records with 99 links than it does to read 100 sequential records because sqlite has to do an internal ‘seek’ to get to the next record of the particular table as it completes the query.
I would very much like to see some log files in order to quantify what is being seen as well as know how big these DB files actually are.
Hi Chuck, I tried PM’ing you my logs, but it said you are not accepting private messages. Is it safe to upload logs here? Or is there a way for me to PM them to you?
I also routinely do the manual process of dumping and rebuilding the database. This is what I typically do:
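Roughly along these lines (an outline rather than exact commands; stop PMS first, keep a backup, and adjust the path to your install):

cd '/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases'
cp com.plexapp.plugins.library.db com.plexapp.plugins.library.db.bak
sqlite3 com.plexapp.plugins.library.db '.dump' > library.sql      # dump everything to plain SQL
mv com.plexapp.plugins.library.db com.plexapp.plugins.library.db.old
rm -f com.plexapp.plugins.library.db-wal com.plexapp.plugins.library.db-shm
sqlite3 com.plexapp.plugins.library.db < library.sql              # rebuild a fresh, compact file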
You need a bigger file than that. It should run for several seconds at least.
Your results look like memory buffering is producing an artificial result.
Find the md device and read raw from it:
[chuck@glockner ~.1998]$ sudo dd if=/dev/md2 of=/dev/null bs=4M count=10000
10000+0 records in
10000+0 records out
41943040000 bytes (42 GB, 39 GiB) copied, 23.058 s, 1.8 GB/s
[chuck@glockner ~.1999]$
I have about half as many items as @Departed69 (188221 movies + episodes), and my hardware is significantly lower end as well (Xeon E3-1270 v6 @ 3.80GHz, 32GB RAM, 2x450GB Intel P3520 NVMe, Windows Server 2016). My database sizes:
My understanding is (and I’m sure @ChuckPa can confirm or deny) the blobs database holds things like intro detection and thumbnails. I don’t have thumbnails turned on, but I do have intro detection turned on. What about you, @VBB ? Are you using either of these features?
Other than that, since you’re on Windows, in some respects this is apples and oranges. But here are the current sizes of my database files:
-rw-r--r-- 1 plex plex 6.2G Feb 11 18:26 com.plexapp.plugins.library.blobs.db
-rw-r--r-- 1 plex plex 32K Feb 11 19:36 com.plexapp.plugins.library.blobs.db-shm
-rw-r--r-- 1 plex plex 4.1M Feb 11 19:36 com.plexapp.plugins.library.blobs.db-wal
-rw-r--r-- 1 plex plex 2.4G Feb 11 21:23 com.plexapp.plugins.library.db
-rw-r--r-- 1 plex plex 0 Jan 10 06:04 com.plexapp.plugins.library.db-2022-01-10-tmp
-rw-r--r-- 1 plex plex 512 Jan 10 06:04 com.plexapp.plugins.library.db-2022-01-10-tmp-journal
-rw-r--r-- 1 plex plex 64K Feb 11 21:23 com.plexapp.plugins.library.db-shm
-rw-r--r-- 1 plex plex 45M Feb 11 21:23 com.plexapp.plugins.library.db-wal
No thumbnails, no intro detection, and no chapter detection. Let’s see what Chuck has to say about the blobs DB size difference. Your normal DB size seems very reasonable, considering the number of items.
If that’s the case, it would make sense, since you must have an insane number of episodes.
Thumbnails are stored in the filesystem; otherwise the DB would be even more of a problem. (Each thumbnail can be up to 1 MB, with multiple chapters per file.)
Intro marker information is stored somewhere. It might be in the blobs database but I’m not sure where it is.