Library.db size more than doubled in latest version

My apologies for the extreme delay in following up!

To answer your question:

  • Docker vDisk Size: 200GB
  • Space in use: 4.38GB (3%)
  • My Docker vDisk lives on a 1TB SSD Cache drive

I did clear out 3 of the 4 .db and .blob.db backup files prior to using the ‘Optimize Database’ function in WebTools-NG, to create as much space as possible, since my .db and all of its backups were 33.7GB each.

2 Likes

I’m on version 1.41.8.9834 and have been updating regularly, and my DB file was still 20.3 GB, with backups of the same size going back to June 5. I don’t know if there was something wrong with the scheduled tasks (I didn’t touch anything there, so they would have been at default settings), but I expected the DB size to be reduced by now with the fix implemented in 1.41.8.

Luckily I was able to run DELETE FROM statistics_bandwidth WHERE account_id IS NULL; and then VACUUM;, and that got the size back down to 160MB. I let it run overnight, so I wasn’t too worried about the running time, but it’s good to know that things should be back to “normal” now.
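
For reference, this is essentially all that was needed, run against com.plexapp.plugins.library.db with the server stopped and a backup on hand (how you invoke the bundled Plex SQLite binary depends on your install):

DELETE FROM statistics_bandwidth WHERE account_id IS NULL;   -- remove the bloat rows
VACUUM;                                                      -- reclaim the freed space on disk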

1 Like

Unfortunately, it’s even slower on a MUCH more powerful PC. I don’t understand, and this will keep me off the Beta release path forever. Plex has been unusable for a week and counting.

Bottom line: is this now fixed in the latest release, where I do not have to do anything for it to be resolved, or should I keep waiting?

x.8 has this resolved and even includes an automatic script that fixes the database if it was previously damaged.
Avoid x.7

Short Answer:

  1. Run PMS v1.41.8.9834.
  2. Enable Optimize Database Every Week in Settings → Scheduled Tasks.
  3. Database bloat will be removed the next time the optimization process runs.

Long Answer:

The problem causing database bloat existed in 1.41.7 beta releases.

The 1.41.7.9799 and 1.41.7.9823 public releases do not have the problem that causes database growth (release notes). Neither will remove the bloat from an affected database.

Plex Media Server v1.41.8.9834, the current public release, will remove the bloat from an affected database.

The “de-bloating” occurs when the database is optimized as a scheduled task. Optimization happens once per week. Optimize database every week must be enabled in Settings → Scheduled Tasks.

If desired, you can initiate the database optimization manually using WebTools-NG. See this post for additional info.

You can also initiate the process via a POST or curl command. See this post for additional information.
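
Purely as an illustration (the exact endpoint, HTTP verb, and token handling are covered in the linked post; the /library/optimize butler endpoint and X-Plex-Token parameter shown here are assumptions based on common usage):

# Assumed endpoint; substitute your server address and a valid X-Plex-Token
curl -X PUT "http://127.0.0.1:32400/library/optimize?async=1&X-Plex-Token=YOUR_TOKEN"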

2 Likes

Unless I’m mistaken, both things have been true: migrations necessary for operation have occurred at startup, and (I can’t remember exactly what) I swear there have been various optimizations and fixups that have only been run during the scheduled Optimize database every week task.

I wish there was a button to trigger them in the UI.

It’s been a week and the -wal file is about 1/3 the way to the 200GB total database size. Somehow, it’s even slower when moved onto a much more powerful machine.

I guess it’s “scrap it and start over” on the database. I don’t have 3 weeks to devote to downtime because the database repair is hideously slow.

What a mess.

1 Like

Which method are you using?

You can try combining both the create/drop table and journal mode methods.

PRAGMA journal_mode=OFF;       -- turn off WAL/journaling while rebuilding (faster bulk copy)
PRAGMA synchronous=OFF;        -- skip fsync calls for the duration of the rebuild
-- copy only the real rows into a new table, leaving the bloat behind
CREATE TABLE statistics_bandwidth_new AS SELECT * FROM statistics_bandwidth WHERE account_id IS NOT NULL;
DROP TABLE statistics_bandwidth;
ALTER TABLE statistics_bandwidth_new RENAME TO statistics_bandwidth;
-- recreate the two indexes the original table carried
CREATE INDEX index_statistics_bandwidth_on_at ON statistics_bandwidth ('at');
CREATE INDEX 'index_statistics_bandwidth_on_account_id_and_timespan_and_at' ON 'statistics_bandwidth' ('account_id', 'timespan', 'at');
VACUUM;                        -- reclaim the freed space on disk
PRAGMA journal_mode=WAL;       -- restore WAL mode for PMS
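
With PMS stopped, those statements can be fed straight into the bundled Plex SQLite binary. A minimal sketch, assuming a standard Linux install and that the statements above are saved as fix_bandwidth.sql (a file name chosen here for illustration):

cd "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
"/usr/lib/plexmediaserver/Plex SQLite" com.plexapp.plugins.library.db < fix_bandwidth.sql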
1 Like

Working out a final form of this.

  1. 33GB DB
  2. i9 CPU
  3. NVMe SSD
total 32518992
drwxr-xr-x 2 chuck chuck        4096 Jun 18 16:00 ./
drwxr-xr-x 3 chuck chuck          66 May 30 10:24 ../
-rw-r--r-- 1 chuck chuck   178556928 May 17 16:03 com.plexapp.plugins.library.blobs.db
-rw-r--r-- 1 chuck chuck 33120714752 Jun 18 16:00 com.plexapp.plugins.library.db
-rw-r--r-- 1 chuck chuck        8288 May 30 19:40 DBRepair.log
-rwxr-xr-x 1 chuck chuck       72863 May 30 16:08 DBRepair.sh*
-rwxr-xr-x 1 chuck chuck       72942 May 30 15:08 dbr.sh*
-rwxr-xr-x 1 chuck chuck         656 Jun  2 01:00 deflate*
-rwxr-xr-x 1 chuck chuck         572 Jun 18 15:59 deflate2*
-rwxr-xr-x 1 chuck chuck         543 Jun  2 12:12 do-deflate*
-rw-r--r-- 1 chuck chuck           0 Jun 18 15:59 Tommy.db
[chuck@lizum databases.2025.05.17.2067]$ vi deflate2
[chuck@lizum databases.2025.05.17.2068]$ time ./deflate2

real	0m0.830s
user	0m0.609s
sys	0m0.147s
[chuck@lizum databases.2025.05.17.2069]$

The resulting, correct DB is:

-rw-r--r-- 1 chuck chuck   216965120 Jun 18 16:38 Tommy.db

I can write a script to make the new DB active. Is it wanted?


EDIT:

[chuck@lizum databases.2025.05.17.2082]$ ./NewDeflate 
==== Current listing
total 32G
drwxr-xr-x 2 chuck chuck 4.0K Jun 18 17:00 .
drwxr-xr-x 3 chuck chuck   66 May 30 10:24 ..
-rw-r--r-- 1 chuck chuck 171M May 17 16:03 com.plexapp.plugins.library.blobs.db
-rw-r--r-- 1 chuck chuck  31G Jun 18 16:59 com.plexapp.plugins.library.db
-rw-r--r-- 1 chuck chuck 8.1K May 30 19:40 DBRepair.log
-rwxr-xr-x 1 chuck chuck  72K May 30 16:08 DBRepair.sh
-rwxr-xr-x 1 chuck chuck  72K May 30 15:08 dbr.sh
-rwxr-xr-x 1 chuck chuck  656 Jun  2 01:00 deflate
-rwxr-xr-x 1 chuck chuck  581 Jun 18 16:38 deflate2
-rwxr-xr-x 1 chuck chuck  543 Jun  2 12:12 do-deflate
-rwxr-xr-x 1 chuck chuck  821 Jun 18 17:00 NewDeflate
==== Vacuuming into new DB

real	0m0.784s
user	0m0.638s
sys	0m0.123s
==== Deflated DB listing
-rw-r--r-- 1 chuck chuck 207M Jun 18 17:01 new.com.plexapp.plugins.library.db
==== Remove -wal and -shm before renaming 'new.com.plexapp.plugins.library.db' -> 'com.plexapp.plugins.library.db'
[chuck@lizum databases.2025.05.17.2083]$ 
2 Likes

I suspect at least @DemandredG might be interested here?

That would be amazing. My second machine is on Windows, but it’s far faster (9950X3D, NVMe SSD, etc.) than my server (Intel Atom, 5400rpm HD array) at anything except this task.

I had been using the Optimize Database tool within PMS, butlered over through WebTools.

I don’t believe the butler task disables WAL mode. Running the SQL commands manually with WAL disabled will speed it up a lot.

1 Like

Apologies for delay – Internet outage due to storm.

I ran the same batch script, which is the unification of all we’ve learned so far, on my DS418j (Synology, DSM 7, ARMv8, 1GB RAM, 2x HDD volumes).

This is the console output.

chuck@ds418j:/volume2/chuck/Deflate/databases.2025.05.17$ time ./Deflate 
==== Current listing
total 32G
drwx------ 2 chuck users 4.0K Jun 18 18:29 .
drwxrwxrwx 3 admin users 4.0K Jun 18 18:26 ..
-rw------- 1 chuck users 171M May 17 16:03 com.plexapp.plugins.library.blobs.db
-rw------- 1 chuck users  31G May 17 16:10 com.plexapp.plugins.library.db
-rwx------ 1 chuck users  736 Jun 18 18:29 Deflate
==== Vacuuming into new DB

real	32m36.251s
user	3m46.370s
sys	2m59.900s
==== Deflated DB listing
-rw------- 1 chuck users 207M Jun 18 19:02 new.com.plexapp.plugins.library.db

real	32m36.314s
user	3m46.400s
sys	2m59.900s
chuck@ds418j:/volume2/chuck/Deflate/databases.2025.05.17$

This is how it was done.

chuck@ds418j:/volume2/chuck/Deflate/databases.2025.05.17$ cat Deflate 
#!/bin/bash

PSQL="/var/packages/PlexMediaServer/target/Plex SQLite"

echo ==== Current listing
ls -lah

echo ==== Vacuuming into new DB
time "$PSQL" com.plexapp.plugins.library.db << EOT
  CREATE TABLE temp_bandwidth as select * from statistics_bandwidth where account_id not null;
  DROP TABLE statistics_bandwidth;
  ALTER TABLE temp_bandwidth RENAME to statistics_bandwidth;
  CREATE INDEX 'index_statistics_bandwidth_on_at' ON statistics_bandwidth ('at');
  CREATE INDEX 'index_statistics_bandwidth_on_account_id_and_timespan_and_at' ON 'statistics_bandwidth' ('account_id', 'timespan', 'at');
  VACUUM main into './new.com.plexapp.plugins.library.db';
EOT

echo ==== Deflated DB listing
ls -lah new.com.plexapp.plugins.library.db
chuck@ds418j:/volume2/chuck/Deflate/databases.2025.05.17$

How it works:

  1. Table creation for the non-NULL records is performed
  2. The bloated table is dropped
  3. The new table is renamed
  4. New indexes are created
  5. Vacuum into a new DB
  • Table creation is where SQLite must slog through every record of the existing table (the slowest step)
  • Shuffling the tables around and creating the new indexes is trivial
  • Vacuuming into a separate, new DB saves on disk thrashing as it shrinks the DB

The only remaining task is to remove the old com.plexapp.plugins.library.db trio and then rename the “new” to take the place of the original.
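
As a minimal sketch of that final step (assuming PMS is stopped and a verified backup of the originals exists; file names as in the listings above):

# remove the old database and its -wal/-shm companions, then promote the deflated copy
rm -f com.plexapp.plugins.library.db com.plexapp.plugins.library.db-wal com.plexapp.plugins.library.db-shm
mv new.com.plexapp.plugins.library.db com.plexapp.plugins.library.db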

2 Likes

You don’t need to set this back. PRAGMA synchronous is set per connection and is not persisted in the DB file. Plex appears to set PRAGMA synchronous=NORMAL when it opens the DB file.
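
A quick way to confirm the per-connection behavior in any SQLite shell (a sketch; the values shown assume the library default of FULL):

PRAGMA synchronous=OFF;   -- affects only the current connection
PRAGMA synchronous;       -- returns 0 (OFF) here
-- ...open a brand-new connection to the same file...
PRAGMA synchronous;       -- back to the default, 2 (FULL); nothing was persisted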

1 Like

Thank-you!

Since the latest version (1.41.8.9834) I have had the problem that when the message “Sqlite3: Sleeping for 200ms to retry busy DB.” appears, Plex no longer responds, the entire Docker container freezes, and Docker can no longer terminate the process cleanly. There were no problems with the previous version, and none of this kind before that. Nothing has changed in the server setup.

I have already done the DB rebuild and repair…

“PMS ScannerPipe:2110193 blocked for more than 1228 seconds.”

@dennis_de

Please create a fresh thread, include your server’s DEBUG logs ZIP file, and ping me.

1 Like

(via Google Translate)…

I’ve been following the thread, but my English is very limited, and I have a question… I’m on Windows 1.41.6.9685. Can I update now to 1.41.8.9834 without being affected by the problem mentioned here?

Thanks!!