Library.db size more than doubled in latest version

I don’t understand why some of you guys don’t have backups. It’s enabled by default on PMS, is it not?

If anyone has issues deleting the null-ID records from the statistics_bandwidth table using the simple “delete from statistics_bandwidth where account_id is NULL” SQL query, you can try deleting in batches.
My database had grown from around 400 MB to 58 GB. The single delete query ran overnight (around 6–7 hours). Then, with the help of ChatGPT, I made a bash script which deletes 1M records at a time (you can adjust the batch size according to how long each batch takes to complete). It took around an hour.

Here is the bash script I used, if anyone is interested.

#!/bin/bash
# Deletes NULL-account_id rows from statistics_bandwidth in batches,
# so each transaction stays small and progress is visible.

SQLITE_BIN="/usr/lib/plexmediaserver/Plex SQLite"
DB_PATH="/debugdb/com.plexapp.plugins.library.db"
TABLE="statistics_bandwidth"
BATCH_SIZE=1000000

echo "Starting batch deletion from $TABLE in $DB_PATH..."

while true; do
  # Delete one batch and capture the number of rows affected.
  ROWS_DELETED=$("$SQLITE_BIN" "$DB_PATH" <<EOF
DELETE FROM $TABLE
WHERE rowid IN (
  SELECT rowid FROM $TABLE
  WHERE account_id IS NULL
  LIMIT $BATCH_SIZE
);
SELECT changes();
EOF
)

  echo "Deleted $ROWS_DELETED rows..."

  # A partial batch means no full batch remained, so we are done.
  if [ "$ROWS_DELETED" -lt "$BATCH_SIZE" ]; then
    echo "Deletion complete."
    break
  fi
done

Warning: Make sure to have an original copy of the database for backup purpose.

2 Likes

I made a change to the .BAT file. (I’m not a Windows guy, but that should work.)

1 Like

I had the automatic backups, but the size increase had been going on silently in the background for over a week, so the only unaffected backup was about 10 days old. I preferred not to lose the database changes from those days if possible (I know it wasn’t a big deal), so I chose to first try removing just the affected rows and leaving everything else intact, knowing that if that didn’t work, I could always restore the backup.

1 Like

I’m impacted by the DB Null Bloat Bug, but this has also made me pose another question:

Are 4 backups of the DB truly needed? Is there a way to reduce this number?

It’s not that people don’t have backups. It’s just usually better to avoid data loss than downtime.

I ran this fix after upgrading to 9799 and my db shrank to 1 GB. It’s back up to 23 GB. Does this bug still exist in the latest version? Running the SQLite command above no longer reduces its size either. I even tried making a copy of it and deleting all stats bandwidth entries, not just those where the user is null. It didn’t reduce the size at all.

The issue I’m finding is that any of these VACUUM or db operations seem to require about as much free disk space as the DB itself takes up. Unfortunately, my DB has ballooned to 151.6 GB and as such has filled the 256 GB drive of the Mac Mini it’s running on (content is connected via a network volume).

Hoping the DB migration that needs to happen in future versions takes this into account and doesn’t just keep the drive full because it decided to take up space with billions of rows of rubbish.
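For what it’s worth, SQLite 3.27 and later (the Plex build in the transcripts below reports 3.39.4) supports VACUUM INTO, which writes the compacted copy to a path you choose, so the free space can live on a different drive. A sketch using the generic sqlite3 CLI on a throwaway database (on a real server you would substitute the bundled “Plex SQLite” binary and your actual db path, stop PMS first, and keep a backup):

```shell
# Build a tiny throwaway database to demonstrate on (stand-in for the Plex db).
sqlite3 demo.db "CREATE TABLE t(a); INSERT INTO t VALUES(1),(2),(3);"

# Write a compacted copy to another location (e.g. an external volume)
# instead of rewriting the file in place.
sqlite3 demo.db "VACUUM INTO './compacted.db';"

# Verify the copy before swapping it in over the original.
sqlite3 compacted.db "SELECT count(*) FROM t;"   # prints 3
```

After verifying, you would swap the compacted file into place over the original.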

1 Like

This might be able to prevent that: Library.db size more than doubled in latest version - #144 by toxicbubble
But keep a copy of the databases folder first, because these commands temporarily remove safety nets that would otherwise prevent data loss in the event of unforeseen circumstances or bugs.

1 Like

This worked really well, thanks! I had to copy the spare to an external drive to get it done, but once there it was smooth.

So I finally got my db to shrink. It was at 30 GB, and deleting the stats and vacuuming wouldn’t reduce the size. I ended up running the SQLite dump command, then recreating the db from the dump file, and it went back to 1 GB.
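For reference, the dump-and-reload approach can be sketched like this with the generic sqlite3 CLI (on a Plex server you would substitute the bundled “Plex SQLite” binary and the real database path, with PMS stopped and a backup kept; the filenames here are just examples):

```shell
# Stand-in database for demonstration (your real db would be the Plex library file).
sqlite3 old.db "CREATE TABLE t(x); INSERT INTO t VALUES(42);"

# Dump the whole database to a plain-SQL text file...
sqlite3 old.db ".dump" > dump.sql

# ...then rebuild a fresh, compact database from that dump.
sqlite3 new.db < dump.sql

# After verifying new.db, swap it into place over the original.
```

Because the rebuilt file contains only live rows, this sidesteps the free-page bloat entirely.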

1 Like

All my backups were affected. After trying multiple fixes, dbrepair did not work; after 20 hours it finished with a failure. Trying this command now:

delete from statistics_bandwidth where account_id is NULL;

It has been running for 6 hours on a 30 GB database. Thanks to everyone trying to fix this. I hope Plex will get it sorted.
Update: it worked. The db went from 30 GB to 500 MB; it ran for almost 12 hours on an older Xeon server.

1 Like

The dbrepair tool is super nifty! Unfortunately, I don’t think I caught the growth issue quickly enough: my db grew from 688 MB to over 155 GB. I tried running the tool, let it go for 16+ hours on a very beefy machine, and finally just killed it. I rolled back to a backup from 12 days ago that was under 1 GB, ran the dbrepair tool, and all is back to normal now.

1 Like

The procedure where you create a new table seems to work a lot faster. I’m not sure which procedure is in Chuck’s DBRepair, if that’s what you used.

1 Like

My db file grew to 31 GB. Attempting to follow the suggestions here, 3+ hours later there was still no measurable progress.

Here is what actually worked for me (~15 min to finish): changing the journal_mode from wal to delete, running the delete, vacuuming, then changing it back.

"C:\Program Files\Plex\Plex Media Server\Plex SQLite.exe" "%LOCALAPPDATA%\Plex Media Server\Plug-in Support\Databases\com.plexapp.plugins.library.db"

pragma journal_mode=delete;
DELETE FROM statistics_bandwidth WHERE account_id is NULL;
VACUUM;
pragma journal_mode=wal;
.quit
3 Likes

I ran some tests comparing methods of deleting the statistics data from a bloated database.

This is definitely not scientific, but hopefully provides an idea of the speed of each option.

Faster storage will help any method. Faster CPU cores will help when using the “delete from statistics_bandwidth…” command, as it is single-threaded.

Setup:

  • db: ~215 MB db, bloated to ~33 GB (a backup copy from my server).
    statistics_bandwidth row count = 571,483,322
  • PC: Win 10 (4790K) with m.2 PCIe 3.0 x4 SSD (PMS server in use, so used my desktop instead).
  • PMS: 1.41.7.9799 (although I doubt it matters).

Results

  1. Create/Drop Table as outlined by @hlve (here)
    Four threads running at ~20% ea., SSD at 90 - 100% utilization.
    Time: 957 sec (15:57)

  2. Change journal_mode as outlined by @twig123 (here)
    One thread at ~85%, SSD at 70-80% utilization.
    Time: 2254 sec (37:35)

  3. Delete from statistics_bandwidth without changing journal mode
    One thread at ~85%, SSD at ~40% utilization.
    Time: 3627 sec (1:00:28)

statistics_bandwidth row count = 24,513 after all three methods.


Additional Details

Below are the details for each test.

.timer ON shows the time for each SQLite command.
real = elapsed time. See this SQLite Forum post for more info.

#1 Create/Drop Table
c:\>dir "%LOCALAPPDATA%\Plex Media Server\Plug-in Support\Databases\*.db"
05/17/2025  15:10    33,120,415,744 com.plexapp.plugins.library.db

c:\>"C:\Program Files\Plex\Plex Media Server\Plex SQLite.exe" "%LOCALAPPDATA%\Plex Media Server\Plug-in Support\Databases\com.plexapp.plugins.library.db"
SQLite version 3.39.4 2022-09-29 15:55:41
Enter ".help" for usage hints.
sqlite> .timer ON
sqlite> CREATE TABLE statistics_bandwidth_new AS SELECT * FROM statistics_bandwidth WHERE account_id IS NOT NULL;
Run Time: real 250.699 user 29.546875 sys 60.203125
sqlite> DROP TABLE statistics_bandwidth;
Run Time: real 706.498 user 34.828125 sys 156.484375
sqlite> ALTER TABLE statistics_bandwidth_new RENAME TO statistics_bandwidth;
Run Time: real 0.011 user 0.015625 sys 0.000000
sqlite> CREATE INDEX index_statistics_bandwidth_on_at ON statistics_bandwidth ('at');
Run Time: real 0.008 user 0.000000 sys 0.000000
sqlite> CREATE INDEX 'index_statistics_bandwidth_on_account_id_and_timespan_and_at' ON 'statistics_bandwidth' ('account_id', 'timespan', 'at');
Run Time: real 0.013 user 0.015625 sys 0.000000
sqlite> vacuum;
Run Time: real 3.475 user 0.562500 sys 2.312500
sqlite> .quit

c:\>dir "%LOCALAPPDATA%\Plex Media Server\Plug-in Support\Databases\*.db"
05/26/2025  18:57       216,965,120 com.plexapp.plugins.library.db
c:\>
#2 Change Journal Mode
c:\>dir "%LOCALAPPDATA%\Plex Media Server\Plug-in Support\Databases\*.db"
05/17/2025  15:10    33,120,415,744 com.plexapp.plugins.library.db

c:\>"C:\Program Files\Plex\Plex Media Server\Plex SQLite.exe" "%LOCALAPPDATA%\Plex Media Server\Plug-in Support\Databases\com.plexapp.plugins.library.db"
SQLite version 3.39.4 2022-09-29 15:55:41
Enter ".help" for usage hints.
sqlite> .timer ON
sqlite> pragma journal_mode=delete;
delete
Run Time: real 0.027 user 0.000000 sys 0.000000
sqlite> DELETE FROM statistics_bandwidth WHERE account_id is NULL;
Run Time: real 2254.959 user 740.109375 sys 406.968750
sqlite> VACUUM;
Run Time: real 3.548 user 0.703125 sys 2.250000
sqlite> pragma journal_mode=wal;
wal
sqlite> .quit

c:\>dir "%LOCALAPPDATA%\Plex Media Server\Plug-in Support\Databases\*.db"
05/26/2025  22:22       216,948,736 com.plexapp.plugins.library.db
#3 DELETE FROM statistics_bandwidth
c:\>dir "%LOCALAPPDATA%\Plex Media Server\Plug-in Support\Databases\*.db"
05/17/2025  15:10    33,120,415,744 com.plexapp.plugins.library.db

c:\>"C:\Program Files\Plex\Plex Media Server\Plex SQLite.exe" "%LOCALAPPDATA%\Plex Media Server\Plug-in Support\Databases\com.plexapp.plugins.library.db"
SQLite version 3.39.4 2022-09-29 15:55:41
Enter ".help" for usage hints.
sqlite> .timer ON
sqlite> delete from statistics_bandwidth where account_id is NULL;
Run Time: real 3627.571 user 2037.015625 sys 576.750000
sqlite> vacuum;
Run Time: real 3.727 user 0.703125 sys 1.875000
sqlite> .quit

c:\>dir "%LOCALAPPDATA%\Plex Media Server\Plug-in Support\Databases\*.db"
05/26/2025  20:21       216,948,736 com.plexapp.plugins.library.db
c:\>
10 Likes

Is your PMS data directory on spinning disks?

No, it’s on an NVMe SSD, hence part of my surprise.


1 Like

This…