Cannot begin transaction. database is locked

Server Version#: 1.13.8.5395
Platform: Docker (plexinc/pms-docker:plexpass)
Player Version#: N/A
Logs: https://hastebin.com/agecepadix.sql

I’m seeing a TON of log entries like the one below:
WARN - Unable to load episode file ["seasons/1/episodes/7.xml"]

With a smattering of:
WARN - Waited one whole second for a busy database.
ERROR - Failed to begin transaction (…/Statistics/StatisticsManager.h:191) (tries=1): Cannot begin transaction. database is locked

Running a DB optimize always seems to fix it for a short time.

I’ve had DB errors for years now, on both PlexPass and public releases, and no one has been able to tell me why. Can someone help me identify why this is happening? My Plex metadata folder lives on a mirrored-SSD ZFS pool, so IOPS should not be an issue whatsoever (hdparm and dd tests confirm this). Optimizing seems to fix it for a short time, but the errors always return.

Sometimes I restart the Docker container = fixed
Sometimes I optimize the DB = fixed
Sometimes I do nothing and it doesn’t impact the server at all.
Sometimes PMS ends up crashing out entirely and requires a full restart.

I’ve also seen this topic in the forums and Reddit frequently, with no one from Plex ever able to provide a solid solution.

Plex uses SQLite, which is a file-based database.

SQLite can only process DB writes sequentially, one at a time.

So transaction speed is limited by how quickly the CPU/IO can process and commit one write before moving on to the next.

Unfortunately, the larger the library, the longer some database transactions take to complete, and that’s when we start seeing these notices about waiting and failing to begin transactions: the function trying to update the database can’t, because another thread, process, or function is already in the middle of a DB update.
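
You can watch that single-writer behavior with a few lines of Python against a throwaway SQLite file. This is a generic illustration of SQLite’s locking, not anything Plex-specific; the table name and the `timeout=0` setting are just for the demo:

```python
import os
import sqlite3
import tempfile

# Two connections to one SQLite file, standing in for two PMS threads.
# timeout=0 means "fail immediately" instead of waiting on a busy database.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path, timeout=0, isolation_level=None)
b = sqlite3.connect(path, timeout=0, isolation_level=None)

a.execute("CREATE TABLE media (id INTEGER PRIMARY KEY, title TEXT)")

a.execute("BEGIN IMMEDIATE")  # writer A takes the write lock
a.execute("INSERT INTO media (title) VALUES ('episode 7')")

try:
    b.execute("BEGIN IMMEDIATE")  # writer B can't start until A finishes
except sqlite3.OperationalError as err:
    print(err)  # -> database is locked

a.execute("COMMIT")           # A commits, releasing the lock...
b.execute("BEGIN IMMEDIATE")  # ...and now B gets its turn
b.execute("INSERT INTO media (title) VALUES ('episode 8')")
b.execute("COMMIT")
```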

The fact that you are on an SSD, which can help dramatically, and that optimization smooths it out for a time, suggests to me that you have a huge library database and it simply takes time to process updates.
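
As far as I know Plex doesn’t document exactly what Optimize Database runs, but the standard SQLite maintenance it corresponds to looks something like this (a sketch to run against a copy of the DB, never the live file; the filename is made up):

```python
import sqlite3

# Work on a copy of the library DB, never the one PMS has open.
conn = sqlite3.connect("copy-of-com.plexapp.plugins.library.db")
conn.execute("VACUUM")   # rewrite the file compactly, dropping dead pages
conn.execute("ANALYZE")  # refresh the statistics the query planner uses
conn.close()
```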

Unfortunately, other than throwing more IOPS and/or CPU at it (depending on where the ultimate bottleneck is), there isn’t really a good solution.
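
For what it’s worth, the two log lines in your post are what SQLite’s busy handler looks like from the application side: wait up to a limit, then give up with “database is locked”. Here’s a generic sketch of the same pattern; the 1000 ms value just mirrors the “one whole second” in your log, and isn’t something I know about Plex’s internals:

```python
import os
import sqlite3
import tempfile
import threading
import time

path = os.path.join(tempfile.mkdtemp(), "busy.db")
init = sqlite3.connect(path)
init.execute("CREATE TABLE t (x INTEGER)")
init.commit()
init.close()

def long_writer():
    # Hold the write lock for two seconds, like a slow library update.
    c = sqlite3.connect(path, isolation_level=None)
    c.execute("BEGIN IMMEDIATE")
    time.sleep(2)
    c.execute("COMMIT")
    c.close()

threading.Thread(target=long_writer, daemon=True).start()
time.sleep(0.1)  # let the slow writer grab the lock first

conn = sqlite3.connect(path, isolation_level=None)
conn.execute("PRAGMA busy_timeout = 1000")  # wait up to one whole second
try:
    conn.execute("BEGIN IMMEDIATE")         # blocks ~1s, then gives up
except sqlite3.OperationalError as err:
    print("failed to begin transaction:", err)
```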

If there are any write optimizations you can make for your ZFS dataset, they may help some (disabling copy-on-write where the filesystem supports it, atime updates, and the like). Be aware that if you enable optimizations that can affect data integrity, you will want adequate backups of your Plex DB in case of any file system corruption.
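
On the backup point: SQLite’s online backup API can snapshot the library DB page by page even while a writer is active, so you don’t have to stop the container first. A minimal sketch, assuming the usual DB path inside the pms-docker container (verify yours) and a made-up destination:

```python
import sqlite3

SRC = ("/config/Library/Application Support/Plex Media Server/"
       "Plug-in Support/Databases/com.plexapp.plugins.library.db")
DST = "/backups/com.plexapp.plugins.library.db.bak"

src = sqlite3.connect(SRC)
dst = sqlite3.connect(DST)
src.backup(dst)  # online, page-by-page copy of the whole database
dst.close()
src.close()
```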

For reference, my Plex DB is on a 40+ TB, 8-drive RAID-6 array with btrfs.

My Plex DB is ~1.1 GB.

Other than disabling COW (which I have done), there’s not much else I can do to optimize DB IO.

I do have an external SSD attached for the transcoding temp folder, but I don’t want to move Plex itself onto an external drive.

In the future I want a NAS with some kind of internal SSD or M.2 support to keep the Plex DB and transcoding on.
