Plex was acting up and not playing anything, and when I checked it with procmon, it was spending huge amounts of CPU and disk I/O reading and writing com.plexapp.plugins.library.db-wal (which is now 8 GB). When that finished (it took about 10 minutes), I noticed the com.plexapp.plugins.library.db file was 28 GB, which is pretty large! Same for all the backups.
My Plex library isn't that large: I only have about 800 songs and 80 shows. I think Plex stopped working (it didn't respond as a server at all) when it started making the latest backup (the times roughly line up). Is this database size normal, or is there a way to fix it?
Have a look at the statistics_media table in your database and see if there are many relatively recent writes with a null account_id column. If so, you may be able to mitigate the issue by deleting those entries. See this thread for (much) more detail:
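A minimal sketch of that check, using Python's built-in sqlite3 module (the table and column names are the ones mentioned above; the function name and database path are my own). Stop the server and run this against a copy of com.plexapp.plugins.library.db rather than the live file:

```python
import sqlite3

def count_null_account_rows(db_path: str) -> int:
    """Count statistics_media rows whose account_id is NULL."""
    conn = sqlite3.connect(db_path)
    try:
        (n,) = conn.execute(
            "SELECT COUNT(*) FROM statistics_media WHERE account_id IS NULL"
        ).fetchone()
        return n
    finally:
        conn.close()
```

If the count is huge relative to your library (millions of rows on a multi-gigabyte database), those rows are likely the culprit.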
This sounds right: a COUNT(*) on the table stalls completely. Could you give me the SQL you used to delete the rogue entries? I assume it's something like:
DELETE FROM statistics_media WHERE account_id IS NULL
But I’d rather be sure since I cannot actually check the DB (I have backups, but I estimate this operation will take a long time, so I’d rather not have to redo it).
That SQL statement will work, it’s what the user in the other thread I linked used (see the OP’s last post). When I had the issue I used DB Browser to sort the data by account_id and deleted all rows where it was NULL. My database hadn’t grown nearly as large as yours has, so I don’t know if that’s feasible in your case.
I’d recommend following a procedure similar to what the user in the thread I linked used. It’s basically the database repair procedure with the DELETE statement inserted before the integrity check.
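To illustrate the ordering described above (delete, then integrity check), here is a rough sketch in Python's sqlite3 module; this is an assumed sequence of steps, not Plex's official repair script, and the function name is mine. The VACUUM at the end is what actually shrinks the file, and it needs roughly the database's size in free disk space. As always, stop the server and work on a copy first:

```python
import sqlite3

def purge_and_compact(db_path: str) -> str:
    """Delete NULL-account statistics rows, verify the DB, then compact it."""
    conn = sqlite3.connect(db_path)
    try:
        conn.execute("DELETE FROM statistics_media WHERE account_id IS NULL")
        conn.commit()
        # PRAGMA integrity_check returns a single row ('ok',) when the
        # database is clean; anything else lists the problems found.
        (status,) = conn.execute("PRAGMA integrity_check").fetchone()
        if status == "ok":
            # VACUUM rewrites the whole file, reclaiming the freed pages.
            conn.execute("VACUUM")
        return status
    finally:
        conn.close()
```

On a database the size of yours, expect the VACUUM step alone to take a long time.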
According to a developer, a relevant bug was fixed months ago. So let's hope these databases have been growing for quite a while and the affected users are only noticing it now, after their server versions already picked up the fix.
There was another related issue for users of the third-party Python Plex API; as far as I remember, it needed to be updated as well.
Yes, I probably should have expanded on that a bit more, but it was discussed in the thread I linked. When this occurred on my system, the issue didn't come back after I restarted the server and signed out all of my known client devices. I also cleaned up my authorized devices list in the process.
Great success! That took many hours, as I estimated (around 70% of the time was spent writing to the WAL; I could probably have done without that), but it worked. A real slimming…
I'm the user from the other linked thread who posted about something similar. I was lucky because something else entirely caused me to take a look at my database folder: I was wondering why the optimize took so long when it was usually so fast. I thought it was maybe my 50K photos, but I went and checked anyway. Glad to know I wasn't alone in this.
Also here’s to hoping the problem was fixed. I have a nightly backup cron that syncs my library to a mirror drive. I added a step to that script that logs my database file size, so I can now go look at the size of it over time in the log file.
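For anyone wanting to do the same, a tiny version of that logging step might look like this (the function name and both paths are examples, not from my actual script); it appends a timestamped size to a log file so growth over time is easy to spot:

```python
import os
import time

def log_db_size(db_path: str, log_path: str) -> int:
    """Append 'timestamp<TAB>bytes' for db_path to log_path; return the size."""
    size = os.path.getsize(db_path)
    with open(log_path, "a") as log:
        log.write(f"{time.strftime('%Y-%m-%d %H:%M:%S')}\t{size}\n")
    return size
```

Calling it once per backup run is enough to see whether the file is creeping upward again.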
It looks like my DB may have added 22 MB in the last 3 days, so I think it might have started up again. I’ll look at it more closely this weekend and continue my thread if I’m seeing more of the same.