PMS Eating All System Memory and Spamming Logs

Server Version#: 1.18.5.2309
Player Version#: N/A (4.20.1)
OS: Debian 10 Buster
Kernel: 5.4.0-0

This morning, while I was asleep, plexmediaserver exploded in memory usage, eating all available system and paging memory. Here’s a netdata screenshot:

Plex kept this memory for approximately an hour before it started releasing it:

Unfortunately, I can’t see what started this since the logs were filled with this line over and over again:

Jan 28, 2020 08:04:40.350 [0x7f6d70ff9700] DEBUG - [TranscodeOutputStream] Changed instance plex-transcode-zaluezt9a9nkb7g4p7g3v725-0de32d55-2862-4018-81cb-1e6a47487bdc which had to -1; was at chunk -1 with offset -1.000000 now at chunk -1

You can see in this screenshot that all of the Plex Media Server.log files are completely filled at that exact same time. It’s just the same line over and over again.
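For what it’s worth, this is roughly how I’d tally the log to confirm it’s dominated by that one line (a Python sketch; the log path assumes the default Debian package install, so adjust it to your setup):

from collections import Counter

# Assumed default log location for a Debian package install of PMS.
LOG = "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Logs/Plex Media Server.log"

counts = Counter()
with open(LOG, errors="replace") as f:
    for line in f:
        # Drop the timestamp/thread prefix so identical messages group together.
        counts[line.split("] ", 1)[-1].strip()] += 1

for message, n in counts.most_common(5):
    print(n, message[:120])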

I don’t believe anyone was watching anything; at least, Tautulli doesn’t show anyone playing anything at that time. I also don’t have any scheduled tasks running then.

Weirdly, the logs start behaving normally again at about 8:05 am, well before the memory begins to be released. I’ll post all my logs here anyway in case anything in them helps:

Plex Media Server Logs_2020-01-28_11-28-50.zip (1.6 MB)

I recently upgraded my kernel to try to fix a previous issue, so it may be related, though I can’t be sure.

Whatever the transcoder was doing, it did NOT like the file it was processing.

From these logs, the file is corrupt. Unfortunately, due to the limited size of the logs, any trace of which file it was is gone.

Jan 28, 2020 08:04:41.249 [0x7f6d70ff9700] DEBUG - [TranscodeOutputStream] Changed instance plex-transcode-zaluezt9a9nkb7g4p7g3v725-0de32d55-2862-4018-81cb-1e6a47487bdc which had to -1; was at chunk -1 with offset -1.000000 now at chunk -1
Jan 28, 2020 08:04:41.249 [0x7f6d70ff9700] DEBUG - [TranscodeOutputStream] Changed instance plex-transcode-zaluezt9a9nkb7g4p7g3v725-0de32d55-2862-4018-81cb-1e6a47487bdc which had to -1; was at chunk -1 with offset -1.000000 now at chunk -1
Jan 28, 2020 08:04:41.249 [0x7f6d70ff9700] DEBUG - [TranscodeOutputStream] Changed instance plex-transcode-zaluezt9a9nkb7g4p7g3v725-0de32d55-2862-4018-81cb-1e6a47487bdc which had to -1; was at chunk -1 with offset -1.000000 now at chunk -1
Jan 28, 2020 08:04:41.249 [0x7f6d70ff9700] DEBUG - [TranscodeOutputStream] Changed instance plex-transcode-zaluezt9a9nkb7g4p7g3v725-0de32d55-2862-4018-81cb-1e6a47487bdc which had to -1; was at chunk -1 with offset -1.000000 now at chunk -1
Jan 28, 2020 08:04:41.249 [0x7f6d70ff9700] DEBUG - [TranscodeOutputStream] Changed instance plex-transcode-zaluezt9a9nkb7g4p7g3v725-0de32d55-2862-4018-81cb-1e6a47487bdc which had to -1; was at chunk -1 with offset -1.000000 now at chunk -1
Jan 28, 2020 08:04:41.249 [0x7f6d70ff9700] DEBUG - [TranscodeOutputStream] Changed instance plex-transcode-zaluezt9a9nkb7g4p7g3v725-0de32d55-2862-4018-81cb-1e6a47487bdc which had to -1; was at chunk -1 with offset -1.000000 now at chunk -1

The transcoder team would be interested in seeing what that is if you can catch it, and if it’s not just related to a normal software update.

As a point of order, and not to deflect in any way, this is the second instance of a problem with Debian 10 and a Ryzen 9 3900X. IMHO, the kernel is suspect. Two different installations with different data, both using the same PMS binaries everyone else uses, does raise implications that need to be examined further.

I should also note that yesterday I installed the NVIDIA drivers for the 790ti that’s in the server and enabled video encoding/decoding in PMS, so it’s entirely possible that could be a contributing factor as well.

That unfortunately puts it beyond what I can help with. External GPU drivers, or anything else not included in PMS itself, does fall to the person doing the install.

I’m not on site, nor do I have any control over the environment. There are too many variables in flight.

Fair enough. If I see it happening again I’ll try to narrow things down.
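Something like this rough Python sketch is what I have in mind for catching it in the act next time (requires psutil; the process name match, the 8 GB threshold, and the log path are all assumptions on my part):

import shutil
import time

import psutil  # third-party: pip install psutil

# Assumed default log location for a Debian package install of PMS.
LOG = "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Logs/Plex Media Server.log"
THRESHOLD = 8 * 1024 ** 3  # 8 GB of resident memory, in bytes

while True:
    for p in psutil.process_iter(["name", "memory_info"]):
        name = p.info["name"] or ""
        mem = p.info["memory_info"]
        if "Plex Media Server" in name and mem and mem.rss > THRESHOLD:
            # Snapshot the live log before it rolls over and the evidence is lost.
            shutil.copy(LOG, "/tmp/pms-log-%d.log" % int(time.time()))
            time.sleep(300)  # back off so it doesn't re-copy every few seconds
    time.sleep(5)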

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.