Library.db size more than doubled in latest version

v1.41.8.9834-071366d65 has been released. Is it supposed to fix the bloating issue as promised? If so, how? By optimizing the database?
There is no mention of the procedure in the patch notes.


So I noticed this today... I ran the ChuckPa tool. The DB was well below a gig the last time I had looked, but it had grown to over 10GB.

I am now back to a reasonable size of 557MB after running auto and then vacuum.

Thanks @ChuckPa

Currently running 1.41.8.9834, so I hope it doesn’t grow out of control again.

@mrjohnpoz

“auto” does far more than what “vacuum” does. It does a full export + reload.
Vacuum after Auto is redundant (you’re compacting an already-compacted DB).

Auto does a full export (in perfectly sorted order). It then writes a new, fully sorted and compacted DB.

Vacuum only removes the deleted records from the DB without physically reordering records.

That’s why “Auto” is slower than Vacuum.

For future reference, Auto will be gaining functionality soon.
I’m working on making it smarter and faster.
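To make the difference concrete, here is a minimal sketch of both operations using the stock sqlite3 CLI. This is an illustration, not DBRepair’s actual code; the library file name is the standard Plex one, but the temp file names are made up:

    # Illustration only -- not DBRepair's actual code.
    DB="com.plexapp.plugins.library.db"

    # "Auto"-style repair: full ASCII export, then rebuild a fresh DB
    # from that export (rows come back fully sorted and compacted).
    sqlite3 "$DB" .dump > dump.sql
    sqlite3 "$DB.new" < dump.sql
    mv "$DB" "$DB.original"        # keep the original as a fallback
    mv "$DB.new" "$DB"

    # "Vacuum"-style compaction: rebuild in place, reclaiming the pages
    # held by deleted records.
    sqlite3 "$DB" "VACUUM;"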


Yes.

It will clean up the database during the scheduled tasks.


Yeah, I know. I ran auto first, but the DB was still showing 10GB in size. I then ran vacuum after the auto had completed, and it shrunk the DB down to 500MB.

2025-05-28 22.16.04 - Repair - Verify main database - PASS (Size: 10592MB/557MB).

2025-05-28 22.18.38 - Vacuum - Vacuum main database - PASS (Size: 557MB/557MB)

I was looking at the dir where the DB resides after it completed, and it still showed a 10GB file. And I know I refreshed (at least I’m pretty sure I did). After vacuum, it showed 500MB. I too thought vacuum didn’t need to be run after auto, but in the dir I was still seeing the 10GB size after the tool completed.

Thanks @ChuckPa! DBRepair got me back in order. 42GB down to 1.4GB. 🙂

Unfortunately, auto wouldn’t work for me in DBRepair, as it said I didn’t have enough space, even with 311GB free. I manually set it vacuuming and it’s still going. It looks like my DB is over 100GB, and no other apps are crashing due to low space. I just hope the process completes.


The sizes here are: (Size: Before MB / After MB)

That shows Repair (part of Auto) did the reduction.
Started with 10592 MB and completed with 557 MB.

Disk space won’t be returned until all the temp/in-process files in the dbtmp directory are removed. This happens when you EXIT and reply YES to cleaning up.

If you QUIT, then the temp files / in-process backups are left behind in dbtmp.

This is how I ensure being able to recover from a power failure / other interruption at any point during the process.
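If you did QUIT and want to reclaim that space by hand, something like the following works. That dbtmp sits alongside the databases is my reading of the description above; set DBDIR to wherever your Plug-in Support/Databases directory actually lives:

    # Assumes dbtmp sits in the Databases directory, per the description above.
    DBDIR="/path/to/Plex Media Server/Plug-in Support/Databases"

    du -sh "$DBDIR/dbtmp"    # how much space the leftover files are holding
    # Only remove it once you are certain no repair is still running:
    rm -rf "$DBDIR/dbtmp"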

I am going to be changing the menu slightly.

Currently 99 = quit, and exit requires that the command be typed.
I’ll be changing that to 98 = quit and 99 = exit (adding a numerical exit option).

I see a number of cases where folks still use the numbers and forget that ‘quit’ doesn’t clean up after itself (deliberately).

Hopefully, the upcoming tweak to the menu will help avoid future ambiguities.
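As a rough sketch of what the described change could look like (hypothetical code, not the actual script; both function names are invented for illustration):

    # Hypothetical sketch of the planned 98/99 menu, not DBRepair's source.
    quit_leaving_dbtmp() { echo "Quit: dbtmp kept for crash recovery."; exit 0; }
    exit_with_cleanup()  { rm -rf dbtmp && echo "Exit: dbtmp cleaned up."; exit 0; }

    read -r -p "Enter command # -> " choice
    case "$choice" in
      98) quit_leaving_dbtmp ;;
      99) exit_with_cleanup ;;
      *)  echo "Unknown command: $choice" ;;
    esac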


Unfortunately, that’s correct behavior. If the base database is 100 GB
(the script doesn’t know what reduction, if any, it will get), then it needs:

  1. 150 GB for the ascii (exported) version of the DB
  2. 100 GB for the incoming new DB
  3. 100 GB intermediate during Reindex
  4. 100 GB for the saved original

That’s a potential 450 GB space requirement.

If this is on a NAS or other large disk, it’ll be fine.

Please understand, the script is just that – a shell script.
It was never intended to deal with such outrageous files in such confined working environments as 311 GB free.
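For anyone who wants a pre-flight check before running Auto, here is a rough sketch based on the worst-case 4.5x figure above. GNU stat/df are assumed, and it should be run from the Databases directory:

    # Rough pre-flight space check using the 4.5x worst case described above.
    DB="com.plexapp.plugins.library.db"
    db_mb=$(( $(stat -c %s "$DB") / 1024 / 1024 ))
    need_mb=$(( db_mb * 45 / 10 ))    # 1.5x export + 1x new DB + 1x reindex + 1x backup
    free_mb=$(df -m --output=avail . | tail -1 | tr -d ' ')

    echo "DB: ${db_mb} MB, worst-case need: ${need_mb} MB, free: ${free_mb} MB"
    [ "$free_mb" -ge "$need_mb" ] || echo "WARNING: likely not enough free space for Auto."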

Hundreds of free GBs is not exactly a small amount to allocate for a DB that would probably take decades to reach that size, so I wouldn’t point fingers at users or anything users did. This is an almost inexcusable mistake on the developers’ side.

Personally, I started dbrepair earlier today, and 5-6 hours later it’s still at “Performing DB cleanup tasks.” with less than 50GB left on the drive.

I’m just fuming now and at this point Plex is just a couple of mistakes away from me walking away from this service.


Hi drzoidberg33, thanks for the reply.
My DB ballooned to 155GB; how much free space do I need to perform the maintenance?

I am not employed in any way by Plex. This is just my 2 cents.

  1. This was a Beta update, not a stable release. If you aren’t willing to test potentially unsafe releases, use the stable updates.
  2. The DB-Repair tool is NOT an official tool from Plex. It is a 3rd-party tool written by a Plex team member.
  3. If you are having issues with a shortage of space, you can try a few other fixes that people have used; see the links higher up in this thread. Or you can roll back to an earlier backup.

changing the journal_mode

dropping the table

I only used the method that ChuckPA used in his script, but others have had success with the other methods; a generic sketch of the journal_mode change is shown below. Good luck.
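For reference, the journal_mode change is a one-liner with the sqlite3 CLI. This is a generic illustration; the linked post describes which mode to set and when, and you should always work on a backed-up copy first:

    # Generic illustration -- back up the DB before changing anything.
    DB="com.plexapp.plugins.library.db"

    sqlite3 "$DB" "PRAGMA journal_mode;"           # show the current mode
    sqlite3 "$DB" "PRAGMA journal_mode=DELETE;"    # e.g. switch out of WAL mode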


I fixed the problem by going back to a DB backup, but the argument that this was a beta update and the user is responsible for it is somewhat irritating. I’m on the public update channel and got notified that there was an update. I redeployed the container to update and voilà, there was the bugged update, with a really serious bug that can mess up an installation real bad. I don’t have the Plex app data folder on my media drives, but on a fast but smaller SSD.

Sure, in a perfect world every user would read through every update’s patch notes; you’re right. But here in reality that’s not the case, and most of us paid the dev team precisely so we don’t have to think about these details and can easily deploy a solution for our media at home.


I downgraded to 1.41.6.9685 because of various issues, but mostly because of the database growth issue.
But now I see that it has happened again. The latest database is now 39GB (from approximately 300 MB), and it is making Plex unable to do anything.

I am not tech-savvy enough to use the repair tool, but I thought downgrading and just deleting the previous huge database was going to fix this. How is this still happening?

  1. I’ve been on the beta channel for almost a decade now. There have been some issues here and there over the years, and I’ve never had a problem with that. However, considering its impact (and how easy it is to notice and reproduce), this kind of issue goes far beyond the average bug that can slip into a beta.
  2. Of course. I’m aware.
  3. I hope it doesn’t get to that. Thank you very much for those resources, nonetheless!

I was blissfully unaware of this issue until today, when I could no longer access PMS. I finally discovered I had run out of disk space and my database files have now reached 3.5 TB. Insanity. I haven’t added a thing to my server in over a month. VERY frustrating.

Can someone confirm the latest version that does not exhibit this issue, so I can either downgrade or upgrade?

I’m running the binhex container on unRaid, which only uses stable-branch releases; my DB is up to 21GB. Does anyone know if this could be related, or is it something else? My server hangs when adding stuff, stating the DB is locked up. I don’t recall what size it was in the past few weeks.

Running DBRepair right now, will report results.

Ah, that explains it. Plex is installed on the 1TB system SSD, as opposed to the 72TB RAID array. Eventually it said vacuuming finished for the main database, but then it failed with an error on the blobs one and left the main DB at its original size. Not sure how to tackle this now.

2025-05-29 14.19.25 - Vacuum  - Vacuum main database - PASS (Size: 96902MB/96902MB).
2025-05-29 14.19.25 - Vacuum  - Vacuum blobs database - FAIL (14)
2025-05-29 14.19.25 - Vacuum  - FAIL
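One thing that may be worth trying (an assumption on my part, not something confirmed from the log): SQLite error 14 is SQLITE_CANTOPEN (“unable to open database file”), which a VACUUM can hit when it fails to create its temporary copy. Pointing SQLite’s temp directory at the large array before vacuuming the blobs DB might get around the 1TB SSD limit:

    # Untested suggestion: put SQLite's vacuum temp files on the big array.
    export SQLITE_TMPDIR="/mnt/raid/sqlite-tmp"    # any directory with ample space
    mkdir -p "$SQLITE_TMPDIR"
    sqlite3 com.plexapp.plugins.library.blobs.db "VACUUM;"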

Thanks for the clarification. While I thought it was done, most likely it was back at the prompt, etc. But I probably just didn’t refresh the directory, or did it too soon.

But running vacuum sure wouldn’t have caused any harm when there was nothing to vacuum, it seems.

Thanks again for such a useful and robust tool.