DBRepair development

@ChuckPa
Just for clarification, when you say be patient, what do you recommend I do now: wait for a new Windows version, or start the auto again tonight? Do I leave the big WAL file or delete it?

My request to you:

  1. If you have the disk space, let it continue, just in case it does finish.
  2. Wait until the Windows PowerShell dev can look at what I sent him.
    (It’s very straightforward and should be easy to port to PowerShell.)

Once I hear from him, we’ll know what time frame we’re looking at and then take action based on that.

1 Like

Keep in mind that with journal_mode=off, crash safety isn’t guaranteed. If the process is killed, you can’t be sure the DB is in a healthy state. This is unusual for SQLite; it’s crash-safe by default.

When there’s no journal, transaction rollback behavior is undefined and likely impossible. The docs make it sound scary.
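As a minimal sketch of the setting under discussion (using Python’s standard sqlite3 module rather than “Plex SQLite”, and a throwaway temp database): once journal_mode is off there is no rollback journal at all, so an interrupted write loses SQLite’s usual atomic-commit guarantee.

```python
import os
import sqlite3
import tempfile

# Sketch only: demonstrate switching off the rollback journal on a scratch DB.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit, so PRAGMAs apply directly

default_mode = conn.execute("PRAGMA journal_mode").fetchone()[0]  # "delete" for a new file DB
off_mode = conn.execute("PRAGMA journal_mode=off").fetchone()[0]  # "off": no journal at all

# With journal_mode=off, ROLLBACK no longer works and a crash mid-write can
# leave the file itself inconsistent, not just one table.
print(default_mode, off_mode)  # → delete off
conn.close()
```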

Did you see this alternative method? It should only require as much additional space as the “good” data.

Library.db size more than doubled in latest version - #175 by raba1der

Library.db size more than doubled in latest version - #188 by hlve

@Volts

Remember, DEFLATE operates on only ONE table in the DB.

“Plex SQLite” has more internal options compiled into it than the default SQLite3.

That said, even in a worst-case scenario, the ONLY data which is potentially lost is from statistics_bandwidth where account_id is NULL.

Whether or not they get rolled back does not matter. We are going to delete them anyway.

Regarding the ‘good’ data: how do you recover a DB when the filesystem is 100% full? (Yes, there are several of those.)

I could change this again, but the amount of ‘good’ data is still an unknown, true?

I could add logic to solve “Delete or create new?” (which is two count(*) operations).

Is that the better choice? Take more time upfront?
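The “two count(*) operations” decision above can be sketched as follows. This is a hedged illustration, not the DBRepair code: the table layout and sample rows are invented, and only the table and column names (statistics_bandwidth, account_id) come from the thread.

```python
import sqlite3

# Build a toy statistics_bandwidth table with one good row and two orphans.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE statistics_bandwidth (account_id INTEGER, timespan INTEGER, at INTEGER)")
conn.executemany("INSERT INTO statistics_bandwidth VALUES (?, ?, ?)",
                 [(1, 6, 100), (None, 6, 101), (None, 6, 102)])

good = conn.execute("SELECT count(*) FROM statistics_bandwidth "
                    "WHERE account_id IS NOT NULL").fetchone()[0]
bad = conn.execute("SELECT count(*) FROM statistics_bandwidth "
                   "WHERE account_id IS NULL").fetchone()[0]

# Mostly bad rows: copying the few good rows to a new table is cheaper.
# Mostly good rows: a plain DELETE of the bad rows is the smaller operation.
strategy = "create new" if bad > good else "delete"
print(good, bad, strategy)  # → 1 2 create new
```

The upfront cost is two full index-assisted scans, which is usually small next to either rewrite strategy.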

1 Like

My understanding is that when journal_mode=off the worst-case scenario is a broken database file, not just data loss in a single table. Pages containing deleted data are rewritten and the B-tree is updated, so a crash or killed task could affect other tables.

I’m not really worried about a crash, except in the out-of-space scenario. I’m more worried about impatient people killing the process.

I agree, I don’t think there’s a free lunch. You can’t modify it in-place without modifying it in-place. There’s no way to have safety+rollback and also no-additional-space.

I think statistics_bandwidth is usually size-constrained because it keeps more resolution for more recent data, and then consolidates older data. But … yeah, if this is an emergency solution for people with NO space, just tell 'em to make a backup.

Please stick around after you reply. I have more questions :slight_smile:

  1. PMS 1.41.8 is supposed to fix this. Given 1.41.8 is now in beta, should this effort be abandoned?

  2. If this is pursued, I can easily disable Control+C interrupt so the user is committed to the ride

Thoughts?

1 Like

Agreed, I bet most people who are going to bump their heads against this have already done so. Nobody is going to seek out the previous builds that generated the extra rows.

Some people probably have big databases, but nothing broke, so 1.41.8 will clean it up.

I just wanted to warn about journal_mode=off for safety and incompatibility with transaction rollback, so nobody had misapprehensions.

You can trap SIGINT but you can’t prevent people from killing processes in another window or task manager. Dunno if it’s worth the complexity!
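For what trapping the interrupt would look like, here is a sketch (Python’s signal module as a stand-in for the shell script’s trap; critical_database_work is a hypothetical placeholder): SIGINT is ignored for the critical section and restored afterwards, but as noted, a kill from another window or Task Manager still gets through.

```python
import signal

def critical_database_work():
    pass  # hypothetical stand-in for the long-running rebuild/VACUUM

previous = signal.signal(signal.SIGINT, signal.SIG_IGN)  # Ctrl+C now ignored
try:
    critical_database_work()
finally:
    restored = signal.signal(signal.SIGINT, previous)    # normal Ctrl+C behavior back
```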

I suspect that this is best left in Engineering’s hands.

I will cancel the Windows request and pull this from my code.

If folks really want it, the code is out there in the forums for both distros.

2 Likes

Thanks for taking the time and being a sounding board.

2 Likes

So… just use auto for another 12 hours, or let PMS Optimize it?

@mat_b

Choice is yours. I’d let it run at this point. I know it’s a PITA but it is the safest.

Whether it happens here or while PMS is running won’t make much difference.

1 Like

Sorry, but I’m not entirely clear on something I read earlier, as my English isn’t the best. I thought I saw a few hours ago that we’d need to wait for a Windows update until the person responsible for updating dbrepair for this OS has it ready? Is this still the case, or has this changed?

I’m one of those users who doesn’t have enough disk space to run the auto process. If deflate ultimately won’t be available for the Windows version, would you see migrating my server to another drive on my PC (where I have more space) as a viable solution? I’ve been trying for days to let PMS optimize the database without success.

Thanks!

2 Likes

See: DBRepair development - #726 by ChuckPa (PMS 1.41.8 is supposed to fix this. Given 1.41.8 is now in beta,)
and: DBRepair development - #728 by ChuckPa (I will cancel the Windows request and pull this from my code. If folks really want it, the code is out there in the forums for both distros.)

1 Like

ALL:

Trying to help with the ‘statistics_bandwidth’ table bloat problem has been fraught with difficulties.

This is within Engineering’s wheelhouse to resolve, which is why I decided to remove it from my code.

I’ve done a lot of looking around and trial-error on this problem.

For those of you who have the skills and want to be proactive, I present this.
It preserves all the journaling as-is.

I recommend you make a backup anyway and then run normal DBRepair “auto” afterwards.

[chuck@lizum databases.2025.05.17.2032]$ cat do-deflate

"/usr/lib/plexmediaserver/Plex SQLite" com.plexapp.plugins.library.db << EOT
  CREATE TABLE temp_bandwidth as select * from statistics_bandwidth where account_id not null;
  DROP TABLE statistics_bandwidth;
  ALTER TABLE temp_bandwidth RENAME to statistics_bandwidth;
  CREATE INDEX 'index_statistics_bandwidth_on_at' ON statistics_bandwidth ('at');
  CREATE INDEX 'index_statistics_bandwidth_on_account_id_and_timespan_and_at' ON 'statistics_bandwidth' ('account_id', 'timespan', 'at');
  VACUUM;
EOT

[chuck@lizum databases.2025.05.17.2033]$

This is what it looks like here when it runs.

[chuck@lizum databases.2025.05.17.2029]$ ll
total 32518692
drwxr-xr-x 2 chuck chuck         174 Jun  2 12:14 ./
drwxr-xr-x 3 chuck chuck          66 May 30 10:24 ../
-rw-r--r-- 1 chuck chuck   178556928 May 17 16:03 com.plexapp.plugins.library.blobs.db
-rw-r--r-- 1 chuck chuck 33120415744 May 17 16:10 com.plexapp.plugins.library.db
-rw-r--r-- 1 chuck chuck        8288 May 30 19:40 DBRepair.log
-rwxr-xr-x 1 chuck chuck       72863 May 30 16:08 DBRepair.sh*
-rwxr-xr-x 1 chuck chuck       72942 May 30 15:08 dbr.sh*
-rwxr-xr-x 1 chuck chuck         656 Jun  2 01:00 deflate*
-rwxr-xr-x 1 chuck chuck         543 Jun  2 12:12 do-deflate*
[chuck@lizum databases.2025.05.17.2030]$ time ./do-deflate 

real	0m31.906s
user	0m17.659s
sys	0m9.910s
[chuck@lizum databases.2025.05.17.2031]$ ls -la
total 386416
drwxr-xr-x 2 chuck chuck       174 Jun  2 12:15 ./
drwxr-xr-x 3 chuck chuck        66 May 30 10:24 ../
-rw-r--r-- 1 chuck chuck 178556928 May 17 16:03 com.plexapp.plugins.library.blobs.db
-rw-r--r-- 1 chuck chuck 216965120 Jun  2 12:15 com.plexapp.plugins.library.db
-rw-r--r-- 1 chuck chuck      8288 May 30 19:40 DBRepair.log
-rwxr-xr-x 1 chuck chuck     72863 May 30 16:08 DBRepair.sh*
-rwxr-xr-x 1 chuck chuck     72942 May 30 15:08 dbr.sh*
-rwxr-xr-x 1 chuck chuck       656 Jun  2 01:00 deflate*
-rwxr-xr-x 1 chuck chuck       543 Jun  2 12:12 do-deflate*
[chuck@lizum databases.2025.05.17.2032]$
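For readers more comfortable reading it outside a heredoc, the same rebuild can be sketched with Python’s standard sqlite3 module against a throwaway in-memory database (the real do-deflate drives “Plex SQLite” against com.plexapp.plugins.library.db; the sample rows here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit
conn.executescript("""
    CREATE TABLE statistics_bandwidth (account_id INTEGER, timespan INTEGER, at INTEGER);
    INSERT INTO statistics_bandwidth VALUES (1, 6, 100), (NULL, 6, 101), (NULL, 6, 102);

    -- Copy only the rows worth keeping, then swap the new table into place.
    CREATE TABLE temp_bandwidth AS
        SELECT * FROM statistics_bandwidth WHERE account_id IS NOT NULL;
    DROP TABLE statistics_bandwidth;
    ALTER TABLE temp_bandwidth RENAME TO statistics_bandwidth;

    -- Recreate the indexes the original table carried.
    CREATE INDEX 'index_statistics_bandwidth_on_at' ON statistics_bandwidth ('at');
    CREATE INDEX 'index_statistics_bandwidth_on_account_id_and_timespan_and_at'
        ON statistics_bandwidth ('account_id', 'timespan', 'at');
""")
conn.execute("VACUUM")  # reclaim freed pages; this is what shrinks the file on disk
remaining = conn.execute("SELECT count(*) FROM statistics_bandwidth").fetchone()[0]
print(remaining)  # → 1
```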

As previously stated, with PMS 1.41.8 now in beta, this is the end of this issue for me.

2 Likes

Just want to confirm there is nothing for me to be concerned about when I ran 911 on Ubuntu. (If I understand correctly, the issue was with Windows.)

Ok to begin deflating the databases? (Y/N) ? Y
Removing - block 1
Removed 0 records.
Vacuuming DB to reclaim working space.
Initial Database size = 178 MB
Final Database size   = 177 MB

I also ran Auto after 911:


.....
2025-05-30 20.38.52 - Auto    - COMPLETED
2025-05-30 20.39.08 - Exit    - Retain temp files.
2025-05-30 20.39.08 - Session end. Fri May 30 20:39:08 EDT 2025
2025-05-30 20.39.08 - ============================================================
2025-05-30 22.22.13 - ============================================================
2025-05-30 22.22.13 - Session start: Host is Ubuntu 24.04.2 LTS
2025-05-30 22.22.21 - Deflate - Report.
2025-05-30 22.22.21 - Deflate - Initial Database size = 178 MB
2025-05-30 22.22.21 - Deflate - Final Database size   = 177 MB
2025-05-30 22.22.21 - Deflate - Records Removed       = 0
2025-05-30 22.22.21 - Deflate - PASS.
2025-05-30 22.24.59 - Auto    - START
2025-05-30 22.25.00 - Check   - Check com.plexapp.plugins.library.db - PASS
2025-05-30 22.25.00 - Check   - Check com.plexapp.plugins.library.blobs.db - PASS
2025-05-30 22.25.00 - Check   - PASS
2025-05-30 22.25.39 - Repair  - Export databases - PASS
2025-05-30 22.25.47 - Repair  - Import - PASS
2025-05-30 22.25.48 - Repair  - Verify main database - PASS (Size: 169MB/169MB).
2025-05-30 22.25.48 - Repair  - Verify blobs database - PASS (Size: 254MB/254MB).
2025-05-30 22.25.48 - Making repaired databases active
2025-05-30 22.25.48 - Repair  - Move files - PASS
2025-05-30 22.25.48 - Repair  - PASS
2025-05-30 22.25.48 - Repair  - PASS
2025-05-30 22.25.48 - Reindex - MakeBackup com.plexapp.plugins.library.db - PASS
2025-05-30 22.25.49 - Reindex - MakeBackup com.plexapp.plugins.library.blobs.db - PASS
2025-05-30 22.25.49 - Reindex - MakeBackup - PASS
2025-05-30 22.25.50 - Reindex - Reindex: com.plexapp.plugins.library.db - PASS
2025-05-30 22.25.50 - Reindex - Reindex: com.plexapp.plugins.library.blobs.db - PASS
2025-05-30 22.25.50 - Reindex - PASS
2025-05-30 22.25.50 - Reindex - PASS
2025-05-30 22.25.50 - Auto    - COMPLETED
2025-05-30 22.26.17 - Exit    - Retain temp files

@Tony_T

Correct, nothing to be concerned about. You’re already in the best position.

You no longer need any “deflate”.

1 Like

Just wanted to say THANKS! to @ChuckPa and compatriots for being so accessible and helpful with this tool, and in particular for the recent DB hiccup. Much appreciated. :slight_smile:

1 Like

How big do these WAL/SHM db files get? I’ve been running PMS Optimize since last night and have no idea if I’m getting close, since the loading circle in Plex has been stuck at 7 o’clock the entire time.

@mat_b

The WAL (Write-Ahead Log) file can grow to about the size of its corresponding DB file (presuming it didn’t previously exist).

The SHM file is the shared-memory buffer area for the PMS processes.
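A quick sketch of where those sidecar files come from (standard Python sqlite3 on a scratch database, not PMS itself): in WAL mode SQLite writes changes to a -wal file, with a -shm shared-memory index beside it, and during a long rewrite the -wal can grow toward the size of the DB.

```python
import os
import sqlite3
import tempfile

d = tempfile.mkdtemp()
conn = sqlite3.connect(os.path.join(d, "demo.db"), isolation_level=None)
conn.execute("PRAGMA journal_mode=wal")   # switch to write-ahead logging
conn.execute("CREATE TABLE t (x)")
conn.execute("INSERT INTO t VALUES (1)")  # first write materializes the sidecars

files = sorted(os.listdir(d))
print(files)  # → ['demo.db', 'demo.db-shm', 'demo.db-wal']
conn.close()
```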

What are you currently doing? PMS “Optimize DB” or something else?

Right now, PMS Optimize Database.

The WAL file is 51 GB now; the database is 56 GB :sleeping_face:

Update: it got to just shy of the DB size, then disk activity kicked up a bunch. By 2 AM it was finally done, after roughly 24 hours of running the optimization. Glad that’s over.

1 Like