We can’t tell anything from that except that your database is extremely fragmented and in need of optimization.
Jun 12, 2023 06:01:12.945 [23442127862584] WARN - [Req#13b9] SLOW QUERY: It took 220.000000 ms to retrieve 6 items.
Jun 12, 2023 06:01:12.946 [23442218154808] WARN - [Req#13a9] SLOW QUERY: It took 250.000000 ms to retrieve 9 items.
Jun 12, 2023 06:01:12.947 [23442125753144] WARN - [Req#13ca] SLOW QUERY: It took 230.000000 ms to retrieve 8 items.
Jun 12, 2023 06:01:12.948 [23442163481400] WARN - [Req#13ce] SLOW QUERY: It took 220.000000 ms to retrieve 8 items.
Jun 12, 2023 06:01:12.951 [23442129972024] WARN - [Req#13e4] SLOW QUERY: It took 260.000000 ms to retrieve 7 items.
Jun 12, 2023 06:01:12.953 [23442206546744] WARN - [Req#13d3] SLOW QUERY: It took 260.000000 ms to retrieve 8 items.
Jun 12, 2023 06:01:12.954 [23442138409784] WARN - [Req#13d0] SLOW QUERY: It took 270.000000 ms to retrieve 8 items.
Jun 12, 2023 06:01:12.956 [23442136300344] WARN - [Req#13e7] SLOW QUERY: It took 210.000000 ms to retrieve 7 items.
Jun 12, 2023 06:01:12.959 [23442098330424] WARN - [Req#13e6] SLOW QUERY: It took 330.000000 ms to retrieve 12 items.
Jun 12, 2023 06:01:12.960 [23442074602296] WARN - [Req#13dd] SLOW QUERY: It took 310.000000 ms to retrieve 10 items.
Jun 12, 2023 06:01:12.960 [23442152934200] WARN - [Req#13dc] SLOW QUERY: It took 320.000000 ms to retrieve 9 items.
Jun 12, 2023 06:01:12.964 [23442213935928] WARN - [Req#13a3] SLOW QUERY: It took 440.000000 ms to retrieve 18 items.
Jun 12, 2023 06:01:12.966 [23442113096504] WARN - [Req#139f] SLOW QUERY: It took 240.000000 ms to retrieve 10 items.
Jun 12, 2023 06:01:12.971 [23442159262520] WARN - [Req#13ed] SLOW QUERY: It took 230.000000 ms to retrieve 10 items.
Jun 12, 2023 06:01:12.972 [23442117315384] WARN - [Req#13e8] SLOW QUERY: It took 270.000000 ms to retrieve 10 items.
Jun 12, 2023 06:01:13.427 [23442202327864] WARN - [Req#1516] SLOW QUERY: It took 230.000000 ms to retrieve 13 items.
Jun 12, 2023 06:01:13.438 [23442115205944] WARN - [Req#1526] SLOW QUERY: It took 240.000000 ms to retrieve 22 items.
Jun 12, 2023 06:01:13.442 [23442121534264] WARN - [Req#1525] SLOW QUERY: It took 260.000000 ms to retrieve 30 items.
Jun 12, 2023 06:01:17.992 [23442078821176] WARN - [Req#172f] SLOW QUERY: It took 220.000000 ms to retrieve 10 items.
Jun 12, 2023 06:01:18.002 [23442123643704] WARN - [Req#174f] SLOW QUERY: It took 260.000000 ms to retrieve 11 items.
Jun 12, 2023 06:01:18.005 [23442147367736] WARN - [Req#1748] SLOW QUERY: It took 270.000000 ms to retrieve 10 items.
Jun 12, 2023 06:01:18.006 [23442076711736] WARN - [Req#174e] SLOW QUERY: It took 290.000000 ms to retrieve 10 items.
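If you want to quantify how many of these warnings are piling up and how slow they are on average, a quick pass over the main server log can summarize them. This is only a sketch; the log path assumes a typical Docker install under /config and may differ on your system.
# Count the SLOW QUERY warnings and compute their average duration.
# The log path is an assumption for a Docker install; point it at your Logs folder.
LOG="/config/Library/Application Support/Plex Media Server/Logs/Plex Media Server.log"
grep 'SLOW QUERY' "$LOG" | awk '{ for (i = 1; i <= NF; i++) if ($i == "took") { sum += $(i + 1); n++ } } END { if (n) printf "%d slow queries, avg %.0f ms\n", n, sum / n }'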
Because of the severity, I recommend doing it with PMS stopped.
Give this a run from INSIDE the container.
You’ll want to run it as ‘root’ and use the “auto” option.
It’ll put your DBs back into the best physical order / state of optimization they can be in.
(this is a level of optimization PMS can’t do while it’s running)
Details and examples are in the README file. Please take a minute to read it.
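For reference, a typical Docker run looks roughly like this. The container name, the download location, and the assumption that your DBRepair version accepts “auto” on the command line are all mine; the README is the authority for your platform.
# Open a root shell inside the container ('plex' as the container name is an assumption).
docker exec -it plex /bin/bash
# Inside the container, with the Plex Media Server process stopped as advised above:
cd /path/where/you/downloaded/DBRepair    # hypothetical download location
chmod +x DBRepair.sh
./DBRepair.sh auto                        # check, repair/optimize, and reindex the databases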
Thanks. I downloaded it to the container, stopped PMS, and ran the auto option. I’ve turned debug logging on (I thought it was encouraged to keep that off) and will monitor whether the issue resurfaces.
@ChuckPa would you mind taking a look at the latest logs? After running DBRepair, I got 3 days of stable running, but experienced a semi-crash yesterday and another today. Plex Media Server Logs_2023-06-19_19-32-39.zip (4.9 MB)
One of the crashes I was able to induce myself. I was scrolling through the server’s library, loading a ton of artwork. While scrolling, the artwork suddenly stopped appearing. I have a ping logger that tells me when the server is unresponsive, and it notified me at the same moment.
Here’s what the container logs show for the latest crash (note that the “sleeping for 200ms” message was repeated many, many times; I only pasted it once):
Sqlite3: Sleeping for 200ms to retry busy DB (REPEATED MANY TIMES)
Sqlite3: Sleeping for 200ms to retry Dolby, Dolby Digital, Dolby Digital Plus, Dolby TrueHD and the double D symbol are trademarks of Dolby Laboratories.
The commandline used was:
"/usr/lib/plexmediaserver/Plex Commercial Skipper"
--ini=/usr/lib/plexmediaserver/Resources/comskip.ini
--output=/mnt/nas/tv_shows/.grab/1f69c00ff68f9f6125d5101250ff5f2c82d524a8-de1db42864e9566b4f5b53b04c5577adf19f2f00
-t
--quiet
"/mnt/nas/tv_shows/.grab/1f69c00ff68f9f6125d5101250ff5f2c82d524a8-de1db42864e9566b4f5b53b04c5577adf19f2f00/NBC Nightly News With Lester Holt (2015) - S2023E168 - NBC Nightly News With Lester Holt.ts"
Setting ini file to /usr/lib/plexmediaserver/Resources/comskip.ini as per commandline
Using /usr/lib/plexmediaserver/Resources/comskip.ini for initiation values.
Auto selecting the PID.
Commercials were found.
I only record one show, which records at the same time every day. I’ve experienced the crash issue at different times, so I have not suspected it to be DVR-related. This latest crash, though, seems to have happened during the post-processing of a recorded show.
I suspect that the Plex process is in a somewhat unresponsive state, but perhaps not formally crashed. Is there something I could do the next time it gets into that state to help diagnose it?
No, no post processing script. Just confirmed it’s blank in the DVR Settings.
EDIT: Overall, nothing has really changed on my server in quite some time other than the size of my library growing. The server is stable and on modern hardware. The Plex container is running on fast mirrored NVMe drives.
@ChuckPa Any suggestions on how I could assist with the debugging in the moment of the server being unresponsive? I’m in one of those circumstances now, the server has stopped responding with “Sqlite3: Sleeping for 200ms to retry busy DB.” messages in the docker log. When I log into the console for the container, I see that the Plex Media Server process is running, and is not defunct.
I don’t see a lot of CPU usage via the console. I cannot attach debug logs because I cannot access the Plex Web interface to retrieve them.
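In case it helps, this is roughly what I plan to capture from a shell inside the container the next time it hangs. The paths are my guess at the standard Docker layout under /config, so they may need adjusting.
# Snapshot of process and database state while PMS is unresponsive.
PLEX_DIR="/config/Library/Application Support/Plex Media Server"
ps -eLf | grep -i plex                          # thread-level view of the Plex processes
ls -lh "$PLEX_DIR/Plug-in Support/Databases/"   # look for large -wal / -shm files next to the DBs
cp -v "$PLEX_DIR/Logs/"*.log /tmp/              # grab the logs without needing the Web UI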
When the DB gets big relative to the per-thread speed of the processor AND the speed of the storage it lives on -or- when the I/O can’t keep up, there will be delays getting in and out of the DB.
Seeing a few of these is very normal now for HDD systems.
Depending on your implementation, I can probably help you resolve it in a few different ways.
FWIW: I have my test server running on HDD (1.3 GB/sec RAID 6 array) with about 350,000 indexed items and only see this when:
Adding a lot of new media to the database
It’s performing analysis at night (again, heavy DB activity)
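If you want a rough feel for whether the storage or the database itself is the bottleneck, you can time a simple read with the SQLite shell that ships with PMS. Treat this as a sketch only: run it with PMS stopped, and adjust the paths to match your container.
# Time a quick consistency check plus a full-table count to gauge raw DB read speed.
DB="/config/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db"
time "/usr/lib/plexmediaserver/Plex SQLite" "$DB" "PRAGMA quick_check; SELECT count(*) FROM metadata_items;"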
Update on this: I have not experienced a crash in several days, so that’s good news. I understand your comments above, but I want to provide feedback that I do think this was a failure: PMS was not responding despite the log messages about briefly sleeping to wait for the DB. I think the DB-busy logs were an indicator that the problem occurred while PMS was performing a heavy operation, but not the root cause. I gave PMS time to recover and it never came back without a hard restart of the container.
I attribute the lack of crashes to one of the following:
An update to PMS, as there have been a few since the problem started.
I use Plex Meta Manager to update posters. I removed the custom posters and had the default posters reapplied.