Issues with my Plex server

Server Version#: 1.31.3.6868
Player Version#: 4.104.2 (Chrome)
Server Log: Plex Media Server Logs_2023-04-09_16-59-04 (Google Drive)

Hey,

I hope I am posting in the right section…

I have recurring problems with my Plex Media Server, hosted on a Whatbox.ca seedbox.

For the past few days I have been getting server crashes. Let me explain:
My impression is that the server crashes after a scan, but my UptimeRobot page still shows the server as up, even though it is unreachable from Plex.
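(Side note for anyone monitoring the same way: I suspect UptimeRobot only pings the host, so it stays green even when PMS itself hangs. Below is a minimal Python sketch, assuming the default port 32400 and that it runs on the seedbox itself, which probes Plex’s /identity endpoint instead and so catches the “up but unresponsive” state:)

```python
# Minimal health probe: PMS answers /identity locally without a token, so a
# timeout or a non-200 response means the process is up but not responding.
import urllib.request

PMS_URL = "http://127.0.0.1:32400/identity"  # assumes the default port

def plex_is_responsive(timeout=10):
    try:
        with urllib.request.urlopen(PMS_URL, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False

if __name__ == "__main__":
    print("PMS responsive:", plex_is_responsive())
```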

Another problem: I can no longer invite anyone to my server. As soon as I enter a friend’s email or ID to add to the server, it tells me “something went wrong while sharing” and the invitation does not go through.
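(If it helps to rule out a web-UI problem, the invite can also be sent through the API. Here is a rough sketch using the third-party python-plexapi library; the token, server name, and email address are placeholders, not values from this thread:)

```python
# Sketch: send a server share invite via python-plexapi (pip install plexapi).
# YOUR_PLEX_TOKEN, "MyServerName", and the email address are placeholders.
from plexapi.myplex import MyPlexAccount

account = MyPlexAccount(token="YOUR_PLEX_TOKEN")
server = account.resource("MyServerName").connect()  # name as shown in app.plex.tv

# sections=None shares all libraries; pass a list of section titles to restrict.
account.inviteFriend("friend@example.com", server, sections=None)
```

If the call fails with the same error, the problem is on the plex.tv side rather than in the browser.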

I uploaded the server logs here hoping someone can help me.

Thank you guys for helping me!

Today I have already had another server crash. I can’t browse my media on it without errors and timeouts.

The log says:
Apr 18, 2023 10:49:01.493 [0x7f269111bb38] Error — Waited over 10 seconds for a busy database; giving up.
Apr 18, 2023 10:49:01.536 [0x7f26928acb38] Error — [EventSourceClient/mediaserver/185-203-56-28.8c4a4a4e835b4ff79c1219800be71d10.plex.direct:15026] Waited over 10 seconds for a busy database; giving up.
Apr 18, 2023 10:49:01.702 [0x7f269258fb38] Error — Waited over 10 seconds for a busy database; giving up.
Apr 18, 2023 10:49:02.036 [0x7f268e7a8b38] Error — [ChildProcessMonitor] Plex Tuner Service exited 3 times in less than 60 seconds; giving up.
Apr 18, 2023 10:49:11.706 [0x7f269111bb38] Error — Waited over 10 seconds for a busy database; giving up.
Apr 18, 2023 10:49:11.751 [0x7f26928acb38] Error — [EventSourceClient/mediaserver/185-203-56-28.8c4a4a4e835b4ff79c1219800be71d10.plex.direct:15026] Waited over 10 seconds for a busy database; giving up.
Apr 18, 2023 10:49:11.917 [0x7f269258fb38] Error — Waited over 10 seconds for a busy database; giving up.
Apr 18, 2023 10:49:20.789 [0x7f269258fb38] Error — Unknown metadata type:
Apr 18, 2023 10:49:20.936 [0x7f268ff00b38] Error — Unknown metadata type:
Apr 18, 2023 10:49:21.964 [0x7f26928acb38] Error — [EventSourceClient/mediaserver/185-203-56-28.8c4a4a4e835b4ff79c1219800be71d10.plex.direct:15026] Waited over 10 seconds for a busy database; giving up.
Apr 18, 2023 10:49:30.816 [0x7f2690f18b38] Error — [EventSourceClient/mediaserver/86-213-69-196.c4838cfd85e8494e94b08ff40199e9cc.plex.direct:32403] Waited over 10 seconds for a busy database; giving up.
Apr 18, 2023 10:49:30.951 [0x7f268ff00b38] Error — Waited over 10 seconds for a busy database; giving up.
Apr 18, 2023 10:49:32.177 [0x7f26928acb38] Error — [EventSourceClient/mediaserver/185-203-56-28.8c4a4a4e835b4ff79c1219800be71d10.plex.direct:15026] Waited over 10 seconds for a busy database; giving up.
Apr 18, 2023 10:49:41.030 [0x7f2690f18b38] Error — [EventSourceClient/mediaserver/86-213-69-196.c4838cfd85e8494e94b08ff40199e9cc.plex.direct:32403] Waited over 10 seconds for a busy database; giving up.
Apr 18, 2023 10:49:41.165 [0x7f268ff00b38] Error — Waited over 10 seconds for a busy database; giving up.
Apr 18, 2023 10:49:42.390 [0x7f26928acb38] Error — [EventSourceClient/mediaserver/185-203-56-28.8c4a4a4e835b4ff79c1219800be71d10.plex.direct:15026] Waited over 10 seconds for a busy database; giving up.
Apr 18, 2023 10:49:52.602 [0x7f26928acb38] Error — [EventSourceClient/mediaserver/185-203-56-28.8c4a4a4e835b4ff79c1219800be71d10.plex.direct:15026] Waited over 10 seconds for a busy database; giving up.
Apr 18, 2023 10:50:02.814 [0x7f26928acb38] Error — [EventSourceClient/mediaserver/185-203-56-28.8c4a4a4e835b4ff79c1219800be71d10.plex.direct:15026] Waited over 10 seconds for a busy database; giving up.
Apr 18, 2023 10:50:13.028 [0x7f26928acb38] Error — [EventSourceClient/mediaserver/185-203-56-28.8c4a4a4e835b4ff79c1219800be71d10.plex.direct:15026] Waited over 10 seconds for a busy database; giving up.

Does anyone know what I can do to fix that?

Please post your entire server log zip.

Sure, thank you. Here are the log folders:
Plex Media Server Logs_2023-04-18_11-11-43.zip (3.3 MB)

So I did some more testing and it looks like I had two instances of PMM (Plex Meta Manager) running at the same time on my server…
I just removed both of them to see whether that was what was causing some kind of I/O issue on the SSD…

Actually, the problem was not related to that. I still get these errors in the console, and my impression is that the server crashes every time it scans a series.

My impression is that it crashes while scanning Dragon Ball Super.
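(To narrow this down, one thing to try is kicking off a scan of just that one library section from the API while watching the logs. A small Python sketch; the port is the default, and the token and section key are placeholders:)

```python
# Sketch: list the library sections, then kick off a scan of one section only,
# to see whether that specific library reproduces the crash.
import urllib.request

BASE = "http://127.0.0.1:32400"   # assumes the default port
TOKEN = "YOUR_PLEX_TOKEN"         # placeholder

def get(path):
    req = urllib.request.Request(BASE + path, headers={"X-Plex-Token": TOKEN})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.read().decode()

# /library/sections returns XML listing every library with its numeric key.
print(get("/library/sections"))

# Scan only the suspect library (replace 5 with the key found above).
get("/library/sections/5/refresh")
```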

Here are the logs after the manual reboot:

Plex Media Server Logs_2023-04-18_14-50-46.zip (3.8 MB)

@mikelima I am looking through the logs; they cover Apr 18, 2023 14:11:05.349 to Apr 18, 2023 14:50:45.453. Did it become unreachable during that period? If so, about when?

@dbirch I think the crash was around 14:14

Last night and this morning there were no issues with the server. This is really random…

I am still having this problem. Plex crashes randomly and I don’t know why.
Tonight, Plex crashed at 21:33.
My friends can see my library in Plex (on the left side), but the home screen loads forever, and some of them get a “something went wrong…” error message and can’t load the main Plex home screen.

Here is the log file.

Every time, I need to restart it from my seedbox admin panel.

Plex Media Server Logs_2023-05-09_21-33-59.zip (2.8 MB)

@anon18523487 Any ideas?

Looking at the logs I am seeing:
May 09, 2023 21:04:48.216 [140298183961400] DEBUG -
May 09, 2023 21:33:48.230 [139786351315768] INFO -

The activity around the event looks normal; from what is described, it sounds like the application is OOMing (running out of memory) or deadlocking somewhere.
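If you cannot get system logs to confirm an OOM kill, watching the server’s resident memory over time is the next best thing. A quick Linux-only sketch; matching on the first 15 characters of the process name is an assumption based on how the kernel truncates comm:

```python
# Sketch: log the resident memory (VmRSS) of the Plex Media Server process
# every 10 seconds; a steady climb before each crash points at an OOM kill.
import time
from pathlib import Path

def pms_pids():
    for proc in Path("/proc").iterdir():
        if not proc.name.isdigit():
            continue
        try:
            # /proc/<pid>/comm is truncated to 15 chars: "Plex Media Serv"
            if (proc / "comm").read_text().strip() == "Plex Media Serv":
                yield int(proc.name)
        except OSError:
            pass  # process exited while we were scanning

def rss_kib(pid):
    for line in (Path("/proc") / str(pid) / "status").read_text().splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1])  # reported in kB
    return None

while True:
    for pid in pms_pids():
        print(time.strftime("%H:%M:%S"), "pid", pid, rss_kib(pid), "kB")
    time.sleep(10)
```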

I have this same issue while the Media Scanner is running. The scan “locks up” in subfolders as it scans through TV shows and takes forever. I keep getting database lockups and busy/held-transaction messages throughout the logs.

Navigating in the web or Android TV client takes a while and often ends up with a “Something went wrong” and “An unexpected error occurred.” message.

If I stop the scan it returns back to normal. If I restart the service, it may or may not return back to normal (depending on what it does on startup).

This started happening a few months back or so, and nothing I’ve done (reinstall, DB repairs, setting various cache sizes, moving the DB to memory, etc.) helps the issue.
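(For reference, this is the kind of quick integrity check worth running before and after repair attempts. Stop Plex Media Server first; the database path below is a common default and is an assumption, since seedbox and package installs keep it in different places:)

```python
# Sketch: run SQLite's built-in integrity check on the main Plex library DB.
# Stop Plex Media Server before running this. The path is an assumption:
# seedboxes usually keep it under $HOME, distro packages under /var/lib.
import os
import sqlite3

DB = os.path.expanduser(
    "~/Library/Application Support/Plex Media Server/"
    "Plug-in Support/Databases/com.plexapp.plugins.library.db"
)

con = sqlite3.connect(DB)
try:
    for (result,) in con.execute("PRAGMA integrity_check;"):
        print(result)  # a single row reading "ok" means the file checks out
finally:
    con.close()
```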

May 11, 2023 10:09:04.499 [140522153052984] ERROR - [Req#62f95] Waited over 10 seconds for a busy database; giving up.
May 11, 2023 10:09:14.713 [140522153052984] ERROR - [Req#62f95] Waited over 10 seconds for a busy database; giving up.
May 11, 2023 10:09:24.929 [140522153052984] ERROR - [Req#62f95] Waited over 10 seconds for a busy database; giving up.
May 11, 2023 10:09:35.142 [140522153052984] ERROR - [Req#62f95] Waited over 10 seconds for a busy database; giving up.
May 11, 2023 10:09:45.357 [140522153052984] ERROR - [Req#62f95] Waited over 10 seconds for a busy database; giving up.
May 11, 2023 10:09:55.572 [140522153052984] ERROR - [Req#62f95] Waited over 10 seconds for a busy database; giving up.
May 11, 2023 10:09:58.737 [140522178800440] WARN - Held transaction for too long (/data/jenkins/server/3526496464/Library/MetadataCollection.cpp:481): 83.570000 seconds
May 11, 2023 10:09:58.738 [140522115926840] WARN - [Grabber] Took too long (56.600000 seconds) to start a transaction on /data/jenkins/server/3526496464/MediaProviders/Grabbers/MediaGrabber.cpp:132
May 11, 2023 10:09:58.738 [140522115926840] WARN - [Grabber] Transaction that was running was started on thread 140522178800440 at /data/jenkins/server/3526496464/Library/MetadataCollection.cpp:306
May 11, 2023 10:09:58.739 [140522143472440] WARN - Took too long (56.550000 seconds) to start a transaction on /data/jenkins/server/3526496464/Statistics/StatisticsManager.cpp:301
May 11, 2023 10:09:58.739 [140522143472440] WARN - Transaction that was running was started on thread 140522115926840 at /data/jenkins/server/3526496464/MediaProviders/Grabbers/MediaGrabber.cpp:132
May 11, 2023 10:10:29.358 [140522115926840] ERROR - [Req#185ae/ViewStateSync] Waited over 10 seconds for a busy database; giving up.
May 11, 2023 10:10:39.579 [140522115926840] ERROR - [Req#185ae/ViewStateSync] Waited over 10 seconds for a busy database; giving up.
May 11, 2023 10:10:44.773 [140522074479416] INFO - Request: [10.1.1.145:54137 (Allowed Network (Subnet))] OPTIONS /:/prefs (6 live) #636bc TLS GZIP Signed-in Token ()
May 11, 2023 10:10:44.773 [140522278873912] INFO - Completed: [10.1.1.145:54137] 200 OPTIONS /:/prefs (6 live) #636bc TLS GZIP 0ms 376 bytes (pipelined: 3)
May 11, 2023 10:10:45.290 [140522153052984] INFO - [Req#636c7] AutoUpdate: no updates available
May 11, 2023 10:10:49.801 [140522115926840] ERROR - [Req#185ae/ViewStateSync] Waited over 10 seconds for a busy database; giving up.
May 11, 2023 10:10:56.165 [140522074479416] ERROR - [Req#636d3] Waited over 10 seconds for a busy database; giving up.
May 11, 2023 10:11:00.016 [140522115926840] ERROR - [Req#185ae/ViewStateSync] Waited over 10 seconds for a busy database; giving up.
May 11, 2023 10:11:06.378 [140522074479416] ERROR - [Req#636d3] Waited over 10 seconds for a busy database; giving up.
May 11, 2023 10:11:10.229 [140522115926840] ERROR - [Req#185ae/ViewStateSync] Waited over 10 seconds for a busy database; giving up.
May 11, 2023 10:11:15.163 [140522120145720] WARN - NAT: PMP, timed out waiting for response.
May 11, 2023 10:11:15.234 [140522120145720] WARN - MPM: ignoring preferred-interface pref due to no matching valid address
May 11, 2023 10:11:16.592 [140522074479416] ERROR - [Req#636d3] Waited over 10 seconds for a busy database; giving up.
May 11, 2023 10:11:20.442 [140522115926840] ERROR - [Req#185ae/ViewStateSync] Waited over 10 seconds for a busy database; giving up.
May 11, 2023 10:11:24.832 [140522178800440] WARN - Held transaction for too long (/data/jenkins/server/3526496464/Library/MetadataCollection.cpp:481): 83.440000 seconds
May 11, 2023 10:11:24.832 [140522143472440] WARN - Took too long (56.330000 seconds) to start a transaction on /data/jenkins/server/3526496464/Statistics/StatisticsManager.cpp:301
May 11, 2023 10:11:24.832 [140522143472440] WARN - Transaction that was running was started on thread 140522178800440 at /data/jenkins/server/3526496464/Library/MetadataCollection.cpp:306
May 11, 2023 10:12:50.966 [140522178800440] WARN - Held transaction for too long (/data/jenkins/server/3526496464/Library/MetadataCollection.cpp:481): 83.240000 seconds
May 11, 2023 10:12:50.966 [140522076588856] WARN - Took too long (56.180000 seconds) to start a transaction on /data/jenkins/server/3526496464/Statistics/StatisticsManager.cpp:301
May 11, 2023 10:12:50.966 [140522076588856] WARN - Transaction that was running was started on thread 140522178800440 at /data/jenkins/server/3526496464/Library/MetadataCollection.cpp:306
May 11, 2023 10:14:17.086 [140522178800440] WARN - Held transaction for too long (/data/jenkins/server/3526496464/Library/MetadataCollection.cpp:481): 83.260000 seconds
May 11, 2023 10:14:17.087 [140522118036280] WARN - [NSB/SSDP/Grabber] Took too long (65.570000 seconds) to start a transaction on /data/jenkins/server/3526496464/MediaProviders/Grabbers/MediaGrabber.cpp:132
May 11, 2023 10:14:17.087 [140522118036280] WARN - [NSB/SSDP/Grabber] Transaction that was running was started on thread 140522178800440 at /data/jenkins/server/3526496464/Library/MetadataCollection.cpp:306
May 11, 2023 10:14:17.087 [140522143472440] WARN - Took too long (56.170000 seconds) to start a transaction on /data/jenkins/server/3526496464/Statistics/StatisticsManager.cpp:301
May 11, 2023 10:14:17.087 [140522143472440] WARN - Transaction that was running was started on thread 140522118036280 at /data/jenkins/server/3526496464/MediaProviders/Grabbers/MediaGrabber.cpp:132
May 11, 2023 10:15:43.184 [140522178800440] WARN - Held transaction for too long (/data/jenkins/server/3526496464/Library/MetadataCollection.cpp:481): 83.270000 seconds
May 11, 2023 10:15:43.184 [140522120145720] WARN - Took too long (56.160000 seconds) to start a transaction on /data/jenkins/server/3526496464/Statistics/StatisticsManager.cpp:301
May 11, 2023 10:15:43.184 [140522120145720] WARN - Transaction that was running was started on thread 140522178800440 at /data/jenkins/server/3526496464/Library/MetadataCollection.cpp:306
May 11, 2023 10:17:09.334 [140522178800440] WARN - Held transaction for too long (/data/jenkins/server/3526496464/Library/MetadataCollection.cpp:481): 83.320000 seconds
May 11, 2023 10:17:09.335 [140522120145720] WARN - Took too long (56.200000 seconds) to start a transaction on /data/jenkins/server/3526496464/Statistics/StatisticsManager.cpp:301
May 11, 2023 10:17:09.335 [140522120145720] WARN - Transaction that was running was started on thread 140522178800440 at /data/jenkins/server/3526496464/Library/MetadataCollection.cpp:306
May 11, 2023 10:18:35.467 [140522178800440] WARN - Held transaction for too long (/data/jenkins/server/3526496464/Library/MetadataCollection.cpp:481): 83.300000 seconds
May 11, 2023 10:18:35.468 [140522124077880] WARN - Took too long (56.190000 seconds) to start a transaction on /data/jenkins/server/3526496464/Statistics/StatisticsManager.cpp:301
May 11, 2023 10:18:35.468 [140522124077880] WARN - Transaction that was running was started on thread 140522178800440 at /data/jenkins/server/3526496464/Library/MetadataCollection.cpp:306
May 11, 2023 10:20:01.559 [140522178800440] WARN - Held transaction for too long (/data/jenkins/server/3526496464/Library/MetadataCollection.cpp:481): 83.260000 seconds
May 11, 2023 10:20:01.559 [140522120145720] WARN - Took too long (56.150000 seconds) to start a transaction on /data/jenkins/server/3526496464/Statistics/StatisticsManager.cpp:301
May 11, 2023 10:20:01.559 [140522120145720] WARN - Transaction that was running was started on thread 140522178800440 at /data/jenkins/server/3526496464/Library/MetadataCollection.cpp:306
May 11, 2023 10:21:27.556 [140522178800440] WARN - Held transaction for too long (/data/jenkins/server/3526496464/Library/MetadataCollection.cpp:481): 83.180000 seconds
May 11, 2023 10:21:27.556 [140522076588856] WARN - Took too long (56.060000 seconds) to start a transaction on /data/jenkins/server/3526496464/Statistics/StatisticsManager.cpp:301
May 11, 2023 10:21:27.556 [140522076588856] WARN - Transaction that was running was started on thread 140522178800440 at /data/jenkins/server/3526496464/Library/MetadataCollection.cpp:306
May 11, 2023 10:22:53.575 [140522178800440] WARN - Held transaction for too long (/data/jenkins/server/3526496464/Library/MetadataCollection.cpp:481): 83.180000 seconds
May 11, 2023 10:22:53.575 [140522143472440] WARN - Took too long (56.060000 seconds) to start a transaction on /data/jenkins/server/3526496464/Statistics/StatisticsManager.cpp:301
May 11, 2023 10:22:53.575 [140522143472440] WARN - Transaction that was running was started on thread 140522178800440 at /data/jenkins/server/3526496464/Library/MetadataCollection.cpp:306
May 11, 2023 10:24:19.646 [140522178800440] WARN - Held transaction for too long (/data/jenkins/server/3526496464/Library/MetadataCollection.cpp:481): 83.240000 seconds
May 11, 2023 10:24:19.647 [140522076588856] WARN - Took too long (56.130000 seconds) to start a transaction on /data/jenkins/server/3526496464/Statistics/StatisticsManager.cpp:301
May 11, 2023 10:24:19.647 [140522076588856] WARN - Transaction that was running was started on thread 140522178800440 at /data/jenkins/server/3526496464/Library/MetadataCollection.cpp:306


@almighty

Can’t begin to diagnose that. Need the DEBUG logs which capture it happening.

Plex Media Server.log (396.8 KB)

DEBUG log attached; I caught it midstream (I navigated in the UI and it timed out with an error).

@ChuckPa

Another crash today (4:36 pm, server local time).

Here are the logs.

I have rebooted the server two times.
I have a lot of file transfers going on (animated series); it is possible that there was a scan at that time.
Plex Media Server Logs_2023-05-11_16-40-22.zip (3.1 MB)

I can’t tell which CPU this is.
I can’t tell the parameters used to start the playback.

All I can see is one case where it’s attempting to select a subtitle.

What can you tell me about the machine?
How many media items are indexed?
What OS is it running on? Is it in a VM?

If you prefer to share full logs ZIP privately, I will open a PM.

For me, the server runs on a Whatbox.ca seedbox. I don’t know exactly what kind of CPU they are using on the main node (it looks like a 256-thread Threadripper, but I’m not sure about that).

My library has this:
Movies: 1296
4K Movies: 1681
Anime TV: 1078
TV: 3022
4K TV: 328

For the OS, here is the output of the lsb_release command:

LSB Version: n/a
Distributor ID: Gentoo
Description: Gentoo Linux
Release: 2.13
Codename: n/a

Thanks for helping us, ChuckPa!

I had another crash this evening, at approximately 20:49.

Here are the logs.

I don’t know why, but this time the server was “really” down, because my UptimeRobot page showed it as down. The previous times, the server was down in Plex but up on my UptimeRobot page.

Plex Media Server Logs_2023-05-11_20-59-16.zip (4.8 MB)

What does the system log show?

PMS logs show nothing other than ‘stopping’.

Unfortunately, I’m not able to get system logs (even after asking Whatbox support)…

A crash this morning:
Plex Media Server Logs_2023-05-12_08-30-20.zip (1.3 MB)

This time I’m pretty sure it was after a scan of the anime shows library.
The crash took place around 8:27 am, and I had to reboot the server manually from the Whatbox admin panel.

Could this have something to do with adding content via a remote rclone?
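(One way to test that theory: pull the timestamped scanner lines out of the PMS log around the crash time and compare them with the rclone transfer window. A rough Python sketch; the log filename and the keywords are assumptions and vary by install:)

```python
# Sketch: print log lines around the crash time that mention scanning, to see
# whether a library scan overlapped the rclone transfers. The filename and the
# keyword pattern are assumptions; adjust for your own logs.
import re

LOG = "Plex Media Server.log"  # from the downloaded logs zip
CRASH_WINDOW = "08:2"          # coarse filter around the 8:27 am crash

with open(LOG, errors="replace") as f:
    for line in f:
        if CRASH_WINDOW in line and re.search(r"scan|scanner|refresh", line, re.I):
            print(line, end="")
```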