Sqlite3: Sleeping for 200ms to retry busy DB on QNAP NAS

Server Version#: 1.32.6.7557 (in a Docker container)
Player Version#: 4.116.1 (Plex Web)

Running Plex server on a QNAP TVS-h1288X with QuTS hero h5.1.1.2488. Plex is installed via the Docker CLI and resides on a ZFS striped pool made up of 2 x 4TB Crucial NVMe SSDs. (Plex is the ONLY application installed on a dataset in this pool.)

Was running into similar issues when Plex was first installed on this system 4+ months ago, but those issues subsided when I moved Plex Server onto its own dataset on its own ZFS pool on NVMe SSDs. The issue has returned within the last few weeks and requires the Docker container to be restarted every few days. The container’s logs always contain the message “Sqlite3: Sleeping for 200ms to retry busy DB.” The Web Player and Android app show the libraries shared by this server as unavailable, and my only option is to restart the server.

I have repaired the SQL database multiple times with the DBRepair.sh script (GitHub - ChuckPa/PlexDBRepair: Database repair utility for Plex Media Server databases), even though the script does not report any database errors while running. I have also recently disabled the Plex Meta Manager script, but the errors persist.
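For a quick manual spot-check, roughly the same integrity check can be run with Plex’s bundled SQLite shell. A minimal sketch, assuming the official Docker image’s /config layout and a container named plex (Plex’s database uses custom collations, so prefer the bundled “Plex SQLite” over a stock sqlite3, and ideally run this while the server is idle):

# Run SQLite's integrity check against the main library database
docker exec plex "/usr/lib/plexmediaserver/Plex SQLite" \
  "/config/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db" \
  "PRAGMA integrity_check;"

A healthy database prints a single “ok”.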

Plex server logs have been uploaded.

Plex Media Server Logs_2023-10-24_14-02-04.zip (3.2 MB)

ZFS is a VERY demanding file system.

Do you have SSD trim enabled?

Without it, you’ll lose about half of your SSD performance, which is critical on ZFS RAID sets.

The zpool utility on QNAP h5.1.1.2488 doesn’t offer a “trim” subcommand, none of the created pools appear to have an autotrim property (sudo zpool get all | grep trim), and the GUI doesn’t explicitly offer an option for enabling trim (though there is a scrubbing option, which I’ve now enabled).

Unclear what ZFS version is used:

$ sudo zpool get all | grep version
zpool1 version - default
zpool2 version - default
zpool3 version - default
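For reference, these are the trim controls a stock OpenZFS build (0.8+) exposes; QNAP’s fork evidently hides them, and the pool name zpool1 below is just an example:

# Check whether automatic trim is enabled on the pool
sudo zpool get autotrim zpool1
# Enable it, or kick off a one-shot manual trim instead
sudo zpool set autotrim=on zpool1
sudo zpool trim zpool1
# Watch per-vdev trim state/progress
sudo zpool status -t zpool1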

All good on trim, I confirmed. Trim is automatic in ZFS, as it now is on QTS.


Back to the 200 ms busy DB.

Oct 24, 2023 03:47:23.096 [140518347201336] ERROR - Waited over 10 seconds for a busy database; giving up.
Oct 24, 2023 03:47:23.297 [140518347201336] ERROR - Saving activity history aborted with soci exception: sqlite3_statement_backend::loadOne: database is locked
Oct 24, 2023 03:47:23.297 [140518347201336] DEBUG - Activity: registered new sub-activity b1c6c8b6-f1f1-447f-b0a1-ce9485b49b1d - "ButlerTaskGenerateCreditsMarkers" parent: 66da20f5-6f8d-4d0d-a305-afcbba83357e overall progress: 99.7% (338/339)
Oct 24, 2023 03:47:23.297 [140518347201336] DEBUG - Activity: updated activity 66da20f5-6f8d-4d0d-a305-afcbba83357e - completed 99.7% - Butler tasks
Oct 24, 2023 03:47:23.297 [140518347201336] DEBUG - Butler: Scheduling credits marker creation for: 268350
Oct 24, 2023 03:47:23.608 [140518347201336] DEBUG - [CreditsDetectionManager] Running credits detection for item 268350
Oct 24, 2023 03:47:33.620 [140518347201336] ERROR - [CreditsDetectionManager] Waited over 10 seconds for a busy database; giving up.
Oct 24, 2023 03:47:43.833 [140518347201336] ERROR - [CreditsDetectionManager] Waited over 10 seconds for a busy database; giving up.
Oct 24, 2023 03:47:54.045 [140518347201336] ERROR - [CreditsDetectionManager] Waited over 10 seconds for a busy database; giving up.
Oct 24, 2023 03:48:04.258 [140518347201336] ERROR - [CreditsDetectionManager] Waited over 10 seconds for a busy database; giving up.
Oct 24, 2023 03:48:14.470 [140518347201336] ERROR - [CreditsDetectionManager] Waited over 10 seconds for a busy database; giving up.
Oct 24, 2023 03:48:24.682 [140518347201336] ERROR - [CreditsDetectionManager] Waited over 10 seconds for a busy database; giving up.
Oct 24, 2023 03:48:34.894 [140518347201336] ERROR - [CreditsDetectionManager] Waited over 10 seconds for a busy database; giving up.
Oct 24, 2023 03:48:45.106 [140518347201336] ERROR - [CreditsDetectionManager] Waited over 10 seconds for a busy database; giving up.
Oct 24, 2023 03:48:55.319 [140518347201336] ERROR - [CreditsDetectionManager] Waited over 10 seconds for a busy database; giving up.
Oct 24, 2023 03:49:05.531 [140518347201336] ERROR - [CreditsDetectionManager] Waited over 10 seconds for a busy database; giving up.
Oct 24, 2023 03:49:15.744 [140518347201336] ERROR - [CreditsDetectionManager] Waited over 10 seconds for a busy database; giving up.
Oct 24, 2023 03:49:25.956 [140518347201336] ERROR - [CreditsDetectionManager] Waited over 10 seconds for a busy database; giving up.
Oct 24, 2023 03:49:36.088 [140518352182072] DEBUG - Request: [216.245.221.83:46222 (WAN)] HEAD /web/index.html (3 live) #33ac5b TLS Signed-in
Oct 24, 2023 03:49:36.088 [140518352182072] DEBUG - [Req#33ac5b] Final path: "/usr/lib/plexmediaserver/Resources/Plug-ins-1cf77d501/WebClient.bundle/Contents/Resources/index.html"
Oct 24, 2023 03:49:36.088 [140518352182072] DEBUG - Content-Length of /usr/lib/plexmediaserver/Resources/Plug-ins-1cf77d501/WebClient.bundle/Contents/Resources/index.html is 12383 (of total: 12383).
Oct 24, 2023 03:49:36.088 [140518352182072] DEBUG - Completed: [216.245.221.83:46222] 200 HEAD /web/index.html (3 live) #33ac5b TLS 0ms 0 bytes
Oct 24, 2023 03:49:36.168 [140518347201336] ERROR - [CreditsDetectionManager] Waited over 10 seconds for a busy database; giving up.
Oct 24, 2023 03:49:46.380 [140518347201336] ERROR - [CreditsDetectionManager] Waited over 10 seconds for a busy database; giving up.

What else is happening on the box? This is telling me your machine’s DB is getting HAMMERED.

Do you have other plug-ins / programs like Plex Meta Manager?

As mentioned in the original post, Plex Meta Manager is disabled NOW (as of Mon, Oct 23 PDT) but was previously enabled for 3+ months. If the timestamps are in UTC, then these errors likely predate when PMM was disabled. To be sure, I’ve deleted all log files and restarted the Plex server, keeping the PMM service disabled. I will update with new logs if the system fails again.


Turned off PMM last week and haven’t observed any instances of:

Waited over 10 seconds for a busy database …

since. Re-enabled PMM this morning and observed the error within a few hours:

…
Plex Media Server.log:Oct 30, 2023 10:32:46.402 [139697393707832] ERROR - [Req#13def2/ViewStateSync] Waited over 10 seconds for a busy database; giving up.
Plex Media Server.log:Oct 30, 2023 10:32:56.615 [139697393707832] ERROR - [Req#13def2/ViewStateSync] Waited over 10 seconds for a busy database; giving up.
…

My Plex client was unable to access libraries hosted on this server and I needed to restart the server’s container to recover.

So your suspicion appears to be correct. Anything I can do on the Plex server side of things? I’ll look into throttling PMM at the app or Docker level.
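One Docker-level option is to cap PMM’s CPU allocation so its metadata bursts can’t starve Plex’s database writes. A sketch, assuming the PMM container is named pmm; the numbers are starting points to tune, not recommendations:

# Hard-cap a running PMM container at one CPU and give it a low
# scheduling weight relative to other containers (default is 1024)
docker update --cpus 1 --cpu-shares 256 pmm

The same --cpus / --cpu-shares flags work on docker run / docker create if you’d rather bake the limits into the container.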

Any way to throttle PMM? I’m having the same issues here.

I’m having the same issue after renaming all folders in my TV and Movies libraries to follow the TRaSH guide. Everything looks good and plays fine, but every now and then I’ll wake up to Plex being unresponsive, with the logs showing several “Sqlite3: Sleeping for 200ms to retry busy DB.” messages.

I suspect one of the scheduled tasks is causing this, since it usually happens overnight, and it did happen a few times while I was restoring all metadata during the migration. My suspicion is the “refresh local metadata every 3 days” task, since the failures don’t happen every day. Something about the new folder structure may be causing the parser to update the DB several times. For now, I’m disabling that option and will report back if it still happens.
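To check whether the failures line up with the nightly maintenance window, the busy-DB retries can be bucketed by hour straight from the server logs. A sketch, run from wherever your “Plex Media Server/Logs” directory lives on the host (the exact path depends on your volume mount):

# Count busy-DB retries per hour; a cluster in the 02:00-05:00 range
# points at scheduled maintenance tasks
grep -h "retry busy DB" "Plex Media Server"*.log \
  | awk '{print $1, $2, $3, substr($4, 1, 2)}' \
  | sort | uniq -c | sort -rn | head

Each output line is a count followed by “Mon DD, YYYY HH”.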
