Plexmediaserver on Linux just constantly corrupts its own database

Server Version#: 1.19.5.3112-b23ab3896 (installed via apt repo)

I’m running what I think is the latest Plex package, but for the past few months Plex has been corrupting its own database roughly once a week, and I have to go through the recovery steps listed in the support wiki. Once, the corruption was so bad I had to rescan about half my media library, and it lost all my show watched history.

I see Plex is using SQLite3 internally for these DBs, which isn’t really a great choice, but it should at least be data-safe. It seems pretty obvious that Plex is doing something unsafe with SQLite. Any chance the devs could replace it with something decent like PostgreSQL, or at least go through the upstream guide on how to avoid DB corruption? (https://www.sqlite.org/howtocorrupt.html)
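For reference, when this happens I confirm the damage with SQLite’s own integrity check before bothering with the recovery steps. This is just a rough sketch, nothing official from Plex; the database path is the usual default for the apt package and may differ on your install (stop Plex before touching the file):

```python
# Hypothetical sketch: run SQLite's built-in integrity check against the Plex
# library database. The path below is the typical default for the Linux apt
# package and is an assumption - adjust it for your own install.
import sqlite3

DB_PATH = ("/var/lib/plexmediaserver/Library/Application Support/"
           "Plex Media Server/Plug-in Support/Databases/"
           "com.plexapp.plugins.library.db")

conn = sqlite3.connect(DB_PATH)
try:
    # PRAGMA integrity_check walks the whole file; it returns a single row
    # saying "ok" when no corruption is found, otherwise a list of problems.
    rows = conn.execute("PRAGMA integrity_check;").fetchall()
    for (result,) in rows:
        print(result)
finally:
    conn.close()
```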

There’s an existing feature suggestion for this; based on the number of votes it doesn’t seem to be a very popular one. You may want to vote over there though.


How many items do you have?

I’ve run Plex on Linux for years, and yes, once or twice a year I get some corruption, but mainly when tons of items are added at the same time.

If I may augment here?


Ahhh chuck/nokdim, I’m getting close to that number for sure. Just did a quick check: 2574 video files, 5017 music files. It definitely feels like a SQLite3 issue; it’s just not appropriate for this kind of workload.

Note that this is hosted on a ZFS RAID10 pool, and it has never once noticed any actual corruption of the files from Linux’s point of view, so this is definitely something that SQLite3/Plex is messing up. :frowning:

I know it may seem like a SQLite3 issue, but it’s not. (see referenced link).

Thanks for sharing that you’re using ZFS. I now know what the problem is.

I presume you have it set to compress (default). This is the root problem.

PMS is in and out of the database with very fast, small transactions.
Simple fact: ZFS can’t keep up with that transactional load in most cases.

Test: Move it to ext4 or XFS, or turn compression OFF. You’ll instantly see a difference.

Why is this so? ZFS is supposed to know when to give up attempting to compress the data, but it doesn’t adapt quickly enough, and by the time it does, it’s too late.
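If you want to see the difference for yourself rather than take my word for it, here’s a rough sketch of the comparison I mean. It’s not a Plex tool, and the dataset paths are placeholders you’d point at a compressed and an uncompressed mount:

```python
# Rough micro-benchmark sketch (not anything Plex ships): time a burst of
# tiny SQLite transactions on two different mount points to see whether the
# filesystem's sync path is the bottleneck. Paths below are placeholders.
import sqlite3
import time

def time_small_transactions(db_path, n=500):
    conn = sqlite3.connect(db_path)
    conn.execute("PRAGMA synchronous=FULL;")
    conn.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, v TEXT)")
    conn.commit()
    start = time.time()
    for _ in range(n):
        # Each loop is its own transaction, so every insert forces a sync -
        # roughly the "in and out very fast" pattern described above.
        conn.execute("INSERT INTO t (v) VALUES (?)", ("x" * 64,))
        conn.commit()
    elapsed = time.time() - start
    conn.close()
    return elapsed

for label, path in [("zfs-compressed", "/tank/compressed/bench.db"),
                    ("zfs-uncompressed", "/tank/nocompress/bench.db")]:
    print(label, round(time_small_transactions(path), 2), "seconds")
```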

Hmmm, those are very odd blanket statements, given I’ve got a 4-drive RAID10 made of SATA SSDs with an NVMe SSD in front as a ZIL and L2ARC. I’ve certainly had nothing but fantastic I/O performance with everything else on my system, including other databases.

If there were I/O speed issues due to compression being enabled, the database should still just queue up writes and finish them as fast as it can. It should never lose data or corrupt itself unless things are being done very, very wrong. I’d just expect user sessions to get a little slower in that case, not to have the database destroy itself.

In any case, I’ll make a ZFS mount with compression turned off for the Plex DBs, but regardless, I’d suggest you have some serious bugs to fix.
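To be concrete about what I mean by “done very very wrong”: with ordinary durability settings (and this is purely an assumption on my part, I have no idea which pragmas Plex actually sets), slow storage should only mean slow commits, never a corrupted file after a crash. A minimal sketch of those settings:

```python
# Illustrative only - I don't know what Plex actually configures. These are
# the SQLite durability settings I'd expect any application to rely on.
import sqlite3

conn = sqlite3.connect("/tmp/example.db")

# WAL keeps readers and writers out of each other's way, and synchronous=FULL
# makes SQLite fsync on every commit; with these, slow storage makes commits
# slower, but a crash should never leave the file corrupted.
conn.execute("PRAGMA journal_mode=WAL;")
conn.execute("PRAGMA synchronous=FULL;")

conn.execute("CREATE TABLE IF NOT EXISTS watched (id INTEGER PRIMARY KEY, title TEXT)")
with conn:  # the context manager wraps this in a transaction and commits it
    conn.execute("INSERT INTO watched (title) VALUES (?)", ("example",))
conn.close()
```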

Could you clarify whether the Plex directory is local to the host? If you are running it on a network share, it will still get corrupted.

Yep, it’s a local filesystem. And while I’ve been running ZFS for years, this problem has only started occurring in the last few months, so it appears to be a relatively recent regression.
