@ChuckPa
Just for clarification, when you say be patient, what do you recommend I do now: wait for a new Windows version, or start the auto again tonight? Do I leave the big WAL file or delete it?
I request the following:
- If you have the disk space, just in case it does finish, let it continue
- Wait until the Windows Powershell dev can look at what I sent him
(It's very straightforward and should be easy to port to PowerShell.)
Once I hear from him, we'll know what time frame we're looking at and then take action based on that.
Keep in mind that when journal_mode=off, crash safety isn't guaranteed. If the process is killed you can't be sure the DB is in a healthy state. This is unusual for SQLite; it's crash-safe by default.
When there's no journal, transaction rollback behavior is undefined and likely impossible. The docs make it sound scary.
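(For reference, the journal mode can be inspected from the Plex SQLite shell with a single PRAGMA. A minimal sketch, assuming the standard Linux binary path and that PMS is stopped:)
"/usr/lib/plexmediaserver/Plex SQLite" com.plexapp.plugins.library.db << EOT
-- Report the current journal mode (normally "wal" for PMS databases).
PRAGMA journal_mode;
-- "PRAGMA journal_mode = OFF;" is the risky setting discussed above: fast, but no rollback.
EOT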
Did you see this alternative method? It should only require as much additional space as the "good" data.
Library.db size more than doubled in latest version - #175 by raba1der
Library.db size more than doubled in latest version - #188 by hlve
Remember, DEFLATE operates on only ONE table in the DB.
"Plex SQLite" has more internal options compiled into it than the default SQLite3.
That said, even in a worst-case scenario, the ONLY data which is potentially lost is from statistics_bandwidth where account_id is NULL.
Whether or not they get rolled back does not matter. We are going to delete them anyway.
Regarding the "good" data, how do you recover a DB when the filesystem is 100% full? (Yes, there are several of those.)
I could change this again, but the amount of "good" data is still an unknown, true?
I could add logic to solve "Delete or create new?" (which are two count(*) operations).
Is that the better choice? Take more time upfront?
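(For illustration, the two counts in question would look something like this; a hypothetical sketch, not part of DBRepair:)
"/usr/lib/plexmediaserver/Plex SQLite" com.plexapp.plugins.library.db << EOT
-- rows that would be deleted (the bloat)
SELECT count(*) FROM statistics_bandwidth WHERE account_id IS NULL;
-- rows that would be kept (the "good" data)
SELECT count(*) FROM statistics_bandwidth WHERE account_id IS NOT NULL;
EOT
If the NULL rows vastly outnumber the non-NULL rows, copying the good rows into a new table is the cheaper path; otherwise an in-place DELETE may be.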
My understanding is that when journal_mode=off the worst-case scenario is a broken database file, not just data loss in a single table. Pages containing deleted data are rewritten and the B-tree is updated, so a crash or killed task could affect other tables.
I'm not really worried about a crash, except in the out-of-space scenario. I'm more worried about impatient people killing the process.
I agree, I don't think there's a free lunch. You can't modify it in-place without modifying it in-place. There's no way to have safety+rollback and also no-additional-space.
I think statistics_bandwidth is usually size-constrained because it keeps more resolution for more recent data, and then consolidates older data. But… yeah, if this is an emergency solution for people with NO space, just tell 'em to make a backup.
Please stick around after you reply. I have more questions.
-
PMS 1.41.8 is supposed to fix this. Given 1.41.8 is now in beta, should this effort be abandoned?
-
If this is pursued, I can easily disable Control+C interrupt so the user is committed to the ride
Thoughts?
Agreed, I bet most people who are going to bump their heads against this have already done so. Nobody is going to seek out the previous builds that generated the extra rows.
Some people probably have big databases, but nothing broke, so 1.41.8 will clean it up.
I just wanted to warn about journal_mode=off, both for safety and for its incompatibility with transaction rollback, so nobody had misapprehensions.
You can trap SIGINT but you can't prevent people from killing processes in another window or task manager. Dunno if it's worth the complexity!
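(For what it's worth, ignoring Ctrl-C in a shell script is a one-liner, which is exactly why it's easy but of limited value. A minimal bash sketch; do_deflate_work is a placeholder name:)
#!/bin/bash
do_deflate_work() { ./do-deflate; }   # placeholder for whatever long-running step must not be interrupted
trap '' INT                           # ignore Ctrl-C (SIGINT) for the critical section
do_deflate_work
trap - INT                            # restore default Ctrl-C handling
# SIGKILL and Task Manager "End task" cannot be trapped or ignored.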
I suspect that this is best left in Engineering's hands.
I will cancel the Windows request and pull this from my code.
If folks really want it, the code is out there in the forums for both distros.
Thanks for taking the time and being a sounding board.
So… just use auto for another 12 hours, or let PMS Optimize it?
Choice is yours. I'd let it run at this point. I know it's a PITA but it is the safest.
Whether it happens here or while PMS is running won't make much difference.
Sorry, but I'm not entirely clear on something I read earlier, as my English isn't the best. I thought I saw a few hours ago that we'd need to wait until the person responsible for updating dbrepair for Windows has it ready. Is this still the case, or has this changed?
I'm one of those users who doesn't have enough disk space to run the auto process. If deflate ultimately won't be available for the Windows version, would you see migrating my server to another drive on my PC (where I have more space) as a viable solution? I've been trying for days to let PMS optimize the database without success.
Thanks!
See: DBRepair development - #726 by ChuckPa (PMS 1.41.8 is supposed to fix this. Given 1.41.8 is now in beta,)
and: DBRepair development - #728 by ChuckPa (I will cancel the Windows request and pull this from my code. If folks really want it, the code is out there in the forums for both distros.)
ALL:
Trying to help with the 'statistics_bandwidth' table bloat problem has been fraught with difficulties.
This is within Engineering's wheelhouse to resolve, which is why I decided to remove it from my code.
Iāve done a lot of looking around and trial-error on this problem.
For those of you who have the skills and want to be proactive, I present this.
It preserves all the journaling as-is.
I recommend you make a backup anyway and then run normal DBRepair 'auto' afterwards.
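(A manual backup can be as simple as copying the file while PMS is stopped. A sketch, assuming you run it from the Databases directory; /path/with/space is a placeholder for wherever you have room:)
# With PMS stopped, copy the live database somewhere with enough free space.
cp -p com.plexapp.plugins.library.db /path/with/space/com.plexapp.plugins.library.db.bak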
[chuck@lizum databases.2025.05.17.2032]$ cat do-deflate
"/usr/lib/plexmediaserver/Plex SQLite" com.plexapp.plugins.library.db << EOT
CREATE TABLE temp_bandwidth as select * from statistics_bandwidth where account_id not null;
DROP TABLE statistics_bandwidth;
ALTER TABLE temp_bandwidth RENAME to statistics_bandwidth;
CREATE INDEX 'index_statistics_bandwidth_on_at' ON statistics_bandwidth ('at');
CREATE INDEX 'index_statistics_bandwidth_on_account_id_and_timespan_and_at' ON 'statistics_bandwidth' ('account_id', 'timespan', 'at');
VACUUM;
EOT
[chuck@lizum databases.2025.05.17.2033]$
This is what it looks like here when it runs.
[chuck@lizum databases.2025.05.17.2029]$ ll
total 32518692
drwxr-xr-x 2 chuck chuck 174 Jun 2 12:14 ./
drwxr-xr-x 3 chuck chuck 66 May 30 10:24 ../
-rw-r--r-- 1 chuck chuck 178556928 May 17 16:03 com.plexapp.plugins.library.blobs.db
-rw-r--r-- 1 chuck chuck 33120415744 May 17 16:10 com.plexapp.plugins.library.db
-rw-r--r-- 1 chuck chuck 8288 May 30 19:40 DBRepair.log
-rwxr-xr-x 1 chuck chuck 72863 May 30 16:08 DBRepair.sh*
-rwxr-xr-x 1 chuck chuck 72942 May 30 15:08 dbr.sh*
-rwxr-xr-x 1 chuck chuck 656 Jun 2 01:00 deflate*
-rwxr-xr-x 1 chuck chuck 543 Jun 2 12:12 do-deflate*
[chuck@lizum databases.2025.05.17.2030]$ time ./do-deflate
real 0m31.906s
user 0m17.659s
sys 0m9.910s
[chuck@lizum databases.2025.05.17.2031]$ ls -la
total 386416
drwxr-xr-x 2 chuck chuck 174 Jun 2 12:15 ./
drwxr-xr-x 3 chuck chuck 66 May 30 10:24 ../
-rw-r--r-- 1 chuck chuck 178556928 May 17 16:03 com.plexapp.plugins.library.blobs.db
-rw-r--r-- 1 chuck chuck 216965120 Jun 2 12:15 com.plexapp.plugins.library.db
-rw-r--r-- 1 chuck chuck 8288 May 30 19:40 DBRepair.log
-rwxr-xr-x 1 chuck chuck 72863 May 30 16:08 DBRepair.sh*
-rwxr-xr-x 1 chuck chuck 72942 May 30 15:08 dbr.sh*
-rwxr-xr-x 1 chuck chuck 656 Jun 2 01:00 deflate*
-rwxr-xr-x 1 chuck chuck 543 Jun 2 12:12 do-deflate*
[chuck@lizum databases.2025.05.17.2032]$
As previously stated, with PMS 1.41.8 now in beta, this is the end of this issue for me.
Just want to confirm that there is nothing for me to be concerned about after running 911 on Ubuntu (if I understand correctly, the issue was with Windows).
Ok to begin deflating the databases? (Y/N) ? Y
Removing - block 1
Removed 0 records.
Vacuuming DB to reclaim working space.
Initial Database size = 178 MB
Final Database size = 177 MB
I also ran Auto after 911:
.....
2025-05-30 20.38.52 - Auto - COMPLETED
2025-05-30 20.39.08 - Exit - Retain temp files.
2025-05-30 20.39.08 - Session end. Fri May 30 20:39:08 EDT 2025
2025-05-30 20.39.08 - ============================================================
2025-05-30 22.22.13 - ============================================================
2025-05-30 22.22.13 - Session start: Host is Ubuntu 24.04.2 LTS
2025-05-30 22.22.21 - Deflate - Report.
2025-05-30 22.22.21 - Deflate - Initial Database size = 178 MB
2025-05-30 22.22.21 - Deflate - Final Database size = 177 MB
2025-05-30 22.22.21 - Deflate - Records Removed = 0
2025-05-30 22.22.21 - Deflate - PASS.
2025-05-30 22.24.59 - Auto - START
2025-05-30 22.25.00 - Check - Check com.plexapp.plugins.library.db - PASS
2025-05-30 22.25.00 - Check - Check com.plexapp.plugins.library.blobs.db - PASS
2025-05-30 22.25.00 - Check - PASS
2025-05-30 22.25.39 - Repair - Export databases - PASS
2025-05-30 22.25.47 - Repair - Import - PASS
2025-05-30 22.25.48 - Repair - Verify main database - PASS (Size: 169MB/169MB).
2025-05-30 22.25.48 - Repair - Verify blobs database - PASS (Size: 254MB/254MB).
2025-05-30 22.25.48 - Making repaired databases active
2025-05-30 22.25.48 - Repair - Move files - PASS
2025-05-30 22.25.48 - Repair - PASS
2025-05-30 22.25.48 - Repair - PASS
2025-05-30 22.25.48 - Reindex - MakeBackup com.plexapp.plugins.library.db - PASS
2025-05-30 22.25.49 - Reindex - MakeBackup com.plexapp.plugins.library.blobs.db - PASS
2025-05-30 22.25.49 - Reindex - MakeBackup - PASS
2025-05-30 22.25.50 - Reindex - Reindex: com.plexapp.plugins.library.db - PASS
2025-05-30 22.25.50 - Reindex - Reindex: com.plexapp.plugins.library.blobs.db - PASS
2025-05-30 22.25.50 - Reindex - PASS
2025-05-30 22.25.50 - Reindex - PASS
2025-05-30 22.25.50 - Auto - COMPLETED
2025-05-30 22.26.17 - Exit - Retain temp files
Correct, nothing to be concerned about. You're already in the best position.
You no longer need any 'deflate'.
Just wanted to say THANKS! to @ChuckPa and compatriots for being so accessible and helpful with this tool, and in particular for the recent DB hiccup. Much appreciated.
How big do these WAL/SHM db files get? Been running PMS Optimize since last night. Have no idea if I'm getting close, since the loading circle in Plex has been at 7 o'clock the entire time.
The WAL (Write-Ahead Log) file will get to be about the size of its corresponding DB file (presuming it didn't previously exist).
The SHM is the shared-memory buffer area for the PMS processes.
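(If you want to watch progress on Linux, the -wal file sits right next to the DB, so you can simply keep an eye on its size. A minimal sketch, run from the Plug-in Support/Databases directory:)
# List the DB and its -wal/-shm siblings every minute while the optimize runs.
watch -n 60 'ls -lh com.plexapp.plugins.library.db*'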
What are you currently doing? PMS 'Optimize DB' or something else?
Right now, PMS 'Optimize database'.
The file is 51 GB now; the database is 56 GB.
Update: it got to just shy of the DB size and then disk activity kicked up a bunch; by 2 am it was finally done, after about 24 hours of running the optimization. Glad that's over.