Cannot begin transaction. database is locked

Plex uses SQLite, which is a file-based database.

SQLite can only process database writes sequentially, one at a time.

So transaction throughput is limited by how quickly the CPU/IO can process and commit one transaction before moving on to the next.

Unfortunately, the larger the library, the longer some database transactions take to complete, and that's when we start seeing these notices about waiting and failing to begin transactions: the function trying to update the database can't, because another thread, process, or function is already in the middle of a db update.
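That contention is easy to reproduce in a tiny sketch with Python's built-in sqlite3 module (the table, file path, and data here are made up for illustration, not Plex's actual schema):

```python
import os
import sqlite3
import tempfile

# Throwaway database file standing in for the Plex library db.
path = os.path.join(tempfile.mkdtemp(), "library.db")

# isolation_level=None gives us manual transaction control.
writer1 = sqlite3.connect(path, isolation_level=None)
writer1.execute("CREATE TABLE media (id INTEGER PRIMARY KEY, title TEXT)")

# Writer 1 opens a write transaction and holds it open.
writer1.execute("BEGIN IMMEDIATE")
writer1.execute("INSERT INTO media (title) VALUES ('movie')")

# Writer 2 uses timeout=0 so it errors out instead of waiting on the lock.
writer2 = sqlite3.connect(path, timeout=0, isolation_level=None)
locked_error = ""
try:
    writer2.execute("INSERT INTO media (title) VALUES ('show')")
except sqlite3.OperationalError as exc:
    locked_error = str(exc)
print(locked_error)  # database is locked

# Once writer 1 commits, writer 2's update goes through fine.
writer1.execute("COMMIT")
writer2.execute("INSERT INTO media (title) VALUES ('show')")
```

Plex retries instead of giving up immediately (hence the "waiting" notices), but the longer each transaction takes on a big db, the more often a second writer hits that lock.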

The fact that you're on an SSD (which can help dramatically) and that optimizing the database smooths things out for a time tells me you have a huge library database, and it simply takes time to process updates.

Unfortunately, other than throwing more IOPS and/or CPU at it (depending on where the ultimate bottleneck is), there isn't really a good solution.

If there are any write optimizations you can do for your ZFS dataset, they may help some. Note that ZFS can't actually disable copy-on-write (it's fundamental to how ZFS works), but properties like recordsize, atime, and sync are tunable. Be aware that if you enable optimizations that can affect data integrity, you will want adequate backups of your Plex db in case of any filesystem corruption.
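As a sketch of the kind of tuning I mean (the dataset name is hypothetical, and sync=disabled is exactly the integrity-affecting kind of option I'm warning about, so back up first):

```shell
# Hypothetical dataset holding the Plex db.
# Smaller records better match SQLite's page-sized random writes.
zfs set recordsize=16K tank/plexdb
# Skip access-time updates on every read.
zfs set atime=off tank/plexdb
# DANGEROUS: disables synchronous writes; a crash or power loss can
# corrupt the db. Only do this with good backups in place.
zfs set sync=disabled tank/plexdb
```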

For reference, my Plex db is on a 40+ TB, 8-drive RAID-6 array with btrfs.

My Plex db is ~1.1 GB.

Other than disabling COW (which I have done), there's not much else I can do to optimize db IO.
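For anyone else on btrfs wanting to do the same: `chattr +C` (no copy-on-write) only takes effect on new, empty files, so set the flag on a directory first and copy the db in afterwards. Paths here are made up; Plex's actual db location varies by install:

```shell
# +C only applies to files created after the flag is set,
# so flag the directory, then copy the database file into it.
mkdir -p /mnt/array/plexdb
chattr +C /mnt/array/plexdb
cp --reflink=never com.plexapp.plugins.library.db /mnt/array/plexdb/
lsattr -d /mnt/array/plexdb    # the 'C' flag should be listed
```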

I do have an external SSD attached for the transcoding temp folder, but I don't want to move the Plex db externally.

In the future I want a NAS with some kind of internal SSD or M.2 support, to keep the Plex db and transcoding on.