Skip TV Show Intros - Scanner Memory Leak on PMS?

Server Version#: 1.24.5
Player Version#: irrelevant in this case
QNAP: TVS-672X with i5 8400T and 32GB RAM
Disks: 3xIronwolf 10TB (Movies/Data RAID5) + 2x1TB SATA SSDs for QTS+Apps, also PMS (RAID1)

Hi there,

I’m really interested in this feature, but at the moment I can’t use it. It doesn’t generate any errors inside PMS so far, and it also detects intros as expected. But if I enable it, all other services on my NAS suffer from memory pressure. InfluxDB becomes slow because of cache flushes. VMs become practically unusable because all committed RAM gets swapped out to the HDD swap partition, which is only used once both physical memory and the SSD swap partition are depleted.
The bad thing is, this behavior doesn’t occur only while the scanner is running, but also afterwards. I have to reboot to get my expected system performance back. Quite annoying…
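For anyone who wants to confirm this kind of swap pressure numerically, here is a small sketch (assuming psutil can be installed on the NAS, which may not be the case on stock QTS):

# Hedged sketch: print current RAM and swap pressure with psutil.
# sin/sout are cumulative bytes swapped in/out since boot.
import psutil

vm = psutil.virtual_memory()
sw = psutil.swap_memory()
print(f"RAM used: {vm.percent}%  available: {vm.available / 2**30:.1f} GiB")
print(f"swap used: {sw.used / 2**30:.1f} / {sw.total / 2**30:.1f} GiB "
      f"(swapped in: {sw.sin} B, out: {sw.sout} B cumulative)")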

On my old QNAP (TS-853A with 16GB RAM), my SSD cache became corrupted due to this behavior (maybe the corresponding driver crashed).

Is it possible that the scanner runs with too high a degree of parallelism?

My normal memory usage on QTS is about 40% of the 32GB, so there are roughly 20GB available on demand for Plex. Is that not enough?

Many thanks for your assistance :slight_smile:

Maybe @ChuckPa has some insight into this. It sounds to me, though, like something is clearly corrupted or misconfigured. I run Skip Intro on my QNAP TS-453D with 8GB of RAM with no issue unless I add a huge amount of shows, and even then it only slows down for a moment. Your machine is WAY more capable than mine.

Edit: I just noticed the size of your storage as well, which leads me to believe it’s not a ton of content stored on your NAS. I run 96TB worth of storage in two RAID5 volumes; my TV library alone is around 11-12TB. Really makes me think something has gone wrong in the database or similar.

This is an interesting configuration.

  1. InfluxDB, which is designed for real-time telemetry/sensor data, is being used for what?

  2. The Plex scanner is single-section threaded. It has 3 threads: scan the directory & parse names, match the names at Plex.tv and await results, and update the local database (a minimal sketch of this pattern follows the list).

  3. Intro detection will be single threaded as well because it’s a background task.
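For illustration only, here is a minimal sketch of such a three-stage, one-thread-per-stage pipeline in Python. The stage names follow the description above; nothing here is Plex’s actual implementation, and the point is simply that the parallelism stays bounded at one worker per stage.

# Illustrative sketch only, not Plex code: three stages, one thread each,
# connected by queues, so the overall parallelism is fixed at 3.
import threading
import queue

scan_q, match_q = queue.Queue(), queue.Queue()
DONE = object()  # sentinel telling the next stage to shut down

def scan_stage(paths):
    for p in paths:
        scan_q.put(p)                        # stage 1: scan directory & parse names
    scan_q.put(DONE)

def match_stage():
    while (item := scan_q.get()) is not DONE:
        match_q.put((item, "matched"))       # stage 2: match names, await results
    match_q.put(DONE)

def update_stage():
    while (item := match_q.get()) is not DONE:
        print("DB update:", item)            # stage 3: update the local database

threads = [
    threading.Thread(target=scan_stage, args=(["S01E01.mkv", "S01E02.mkv"],)),
    threading.Thread(target=match_stage),
    threading.Thread(target=update_stage),
]
for t in threads:
    t.start()
for t in threads:
    t.join()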

I am concerned about managing expectations with the N3150/N3160 CPU, which scores about 1100 PassMark compared to the J4125’s 3000 in the TS-453D, and about the contention between

Running Plex
-vs-
whatever InfluxDB is being used for.

Good Morning,

My QNAP is my home server, which hosts lots of microservices for IoT home-automation tasks. Influx is used to log all sorts of things, like metrics from my heating system, photovoltaic management and so on, which I then visualize in Grafana.
All these microservices run in docker-compose containers with resources limited by Docker (see the sketch below for how such a limit can be set). InfluxDB, for example, only uses around 1-2GB of RAM and is limited to 3GB. In total there are 17 Docker containers, and all of them together use no more than 4GB of RAM.
17 containers sounds like a lot, but most of them do next to nothing. All of them together would fit on a Raspberry Pi 4.
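As a sketch of what I mean by a Docker-enforced limit (this is the generic mechanism, not my exact compose file; the image tag and values are placeholders), the same cap can be expressed with the Docker SDK for Python, or as mem_limit: 3g in a docker-compose v2 file:

# Hedged sketch: hard memory cap on a container via the Docker SDK for
# Python (pip install docker). Image tag and names are placeholders.
import docker

client = docker.from_env()
client.containers.run(
    "influxdb:1.8",       # placeholder image tag
    name="influxdb",
    detach=True,
    mem_limit="3g",       # hard cap: the kernel OOM-kills the container past 3GB
    memswap_limit="3g",   # equal to mem_limit, so no extra swap is granted
)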

System load over the day averages about 4% CPU and 12GB of memory.

PMS is running natively on QTS (not in a resource-limited Docker environment).

As @Blkbyrd said: my amount of video content is nothing to speak of.
Around 500 movies and 14 TV shows with ~1400 individual files.

The Intel N3160 was my old NAS. Currently I have an i5-8400T (6 cores) with around 7500 PassMark and a fairly strong UHD 630 iGPU.

I’m not sure about the root cause, but it’s reproducible.

Intro scanner active → massive and persistent system slowdown until reboot.
Intro scanner inactive → everything runs fine :slight_smile:

Are there any further suggestions?

Today (tonight) I also got this QuLog message while the Plex maintenance jobs were running.

And remember, there are 32GB in total (20GB of them available).

Here are the last lines from the PMS log during the maintenance run:

Dec 09, 2021 21:58:32.207 [0x7f2943f74b38] ERROR - SSDP: Error parsing device schema for http://192.168.178.35:9080
Dec 10, 2021 01:02:23.983 [0x7f29451e8b38] ERROR - BaseIndexFrameFileManager: expected 4 images, but found 1
Dec 10, 2021 01:02:24.116 [0x7f29451e8b38] ERROR - BaseIndexFrameFileManager: expected 10 images, but found 0
Dec 10, 2021 01:03:42.928 [0x7f29451e8b38] ERROR - BaseIndexFrameFileManager: expected 1577 images, but found 9
Dec 10, 2021 01:05:06.714 [0x7f29451e8b38] ERROR - BaseIndexFrameFileManager: expected 1687 images, but found 181
Dec 10, 2021 01:06:32.044 [0x7f29451e8b38] ERROR - BaseIndexFrameFileManager: expected 1743 images, but found 0
Dec 10, 2021 01:07:56.950 [0x7f29451e8b38] ERROR - BaseIndexFrameFileManager: expected 1732 images, but found 0
Dec 10, 2021 01:09:15.771 [0x7f29451e8b38] ERROR - BaseIndexFrameFileManager: expected 1597 images, but found 9
Dec 10, 2021 01:10:38.362 [0x7f29451e8b38] ERROR - BaseIndexFrameFileManager: expected 1627 images, but found 8
Dec 10, 2021 01:11:59.133 [0x7f29451e8b38] ERROR - BaseIndexFrameFileManager: expected 1667 images, but found 0
Dec 10, 2021 01:13:19.608 [0x7f29451e8b38] ERROR - BaseIndexFrameFileManager: expected 1716 images, but found 0
Dec 10, 2021 01:14:38.406 [0x7f29451e8b38] ERROR - BaseIndexFrameFileManager: expected 1657 images, but found 9
Dec 10, 2021 01:16:03.644 [0x7f29451e8b38] ERROR - BaseIndexFrameFileManager: expected 1868 images, but found 0
Dec 10, 2021 01:16:46.373 [0x7f29451e8b38] ERROR - BaseIndexFrameFileManager: expected 12021 images, but found 1830
Dec 10, 2021 01:16:46.687 [0x7f29451e8b38] WARN - MDE: unable to find a working transcode profile for video stream
Dec 10, 2021 01:16:46.687 [0x7f29451e8b38] WARN - MDE: unable to find a working transcode profile for video stream
Dec 10, 2021 01:16:54.716 [0x7f29451e8b38] WARN - MDE: unable to find a working transcode profile for video stream
Dec 10, 2021 01:16:54.716 [0x7f29451e8b38] WARN - MDE: unable to find a working transcode profile for video stream
Dec 10, 2021 01:16:54.717 [0x7f29451e8b38] INFO - CodecManager: obtaining decoder 'aac_lc'
Dec 10, 2021 01:17:02.823 [0x7f29451e8b38] WARN - MDE: unable to find a working transcode profile for video stream
Dec 10, 2021 01:17:02.823 [0x7f29451e8b38] WARN - MDE: unable to find a working transcode profile for video stream
Dec 10, 2021 01:17:10.411 [0x7f29451e8b38] WARN - MDE: unable to find a working transcode profile for video stream
Dec 10, 2021 01:17:10.411 [0x7f29451e8b38] WARN - MDE: unable to find a working transcode profile for video stream
Dec 10, 2021 01:17:18.206 [0x7f29451e8b38] WARN - MDE: unable to find a working transcode profile for video stream
Dec 10, 2021 01:17:18.207 [0x7f29451e8b38] WARN - MDE: unable to find a working transcode profile for video stream
Dec 10, 2021 01:17:25.956 [0x7f29451e8b38] WARN - MDE: unable to find a working transcode profile for video stream
Dec 10, 2021 01:17:25.956 [0x7f29451e8b38] WARN - MDE: unable to find a working transcode profile for video stream
Dec 10, 2021 01:17:33.701 [0x7f29451e8b38] WARN - MDE: unable to find a working transcode profile for video stream
Dec 10, 2021 01:17:33.701 [0x7f29451e8b38] WARN - MDE: unable to find a working transcode profile for video stream
Dec 10, 2021 01:17:41.604 [0x7f29451e8b38] WARN - MDE: unable to find a working transcode profile for video stream
Dec 10, 2021 01:17:41.604 [0x7f29451e8b38] WARN - MDE: unable to find a working transcode profile for video stream
Dec 10, 2021 01:17:54.739 [0x7f29451e8b38] WARN - MDE: unable to find a working transcode profile for video stream
Dec 10, 2021 01:17:54.739 [0x7f29451e8b38] WARN - MDE: unable to find a working transcode profile for video stream
Dec 10, 2021 01:42:16.770 [0x7f2943932b38] WARN - SLOW QUERY: It took 300.000000 ms to retrieve 8 items.
Dec 10, 2021 01:42:21.968 [0x7f2943afab38] WARN - Took too long (0.210000 seconds) to start a transaction on ../Statistics/StatisticsManager.cpp:249
Dec 10, 2021 01:42:22.161 [0x7f2943afab38] WARN - Transaction that was running was started on thread 0x7f29439e2b38 at ../Statistics/StatisticsManager.cpp:252
Dec 10, 2021 01:42:58.073 [0x7f2943b30b38] WARN - Took too long (0.110000 seconds) to start a transaction on ../Statistics/StatisticsResources.cpp:18
Dec 10, 2021 01:42:58.191 [0x7f2943b30b38] WARN - Transaction that was running was started on thread 0x7f294399cb38 at ../Statistics/StatisticsResources.cpp:18
Dec 10, 2021 01:43:44.455 [0x7f2943afab38] WARN - Held transaction for too long (../Statistics/StatisticsManager.cpp:249): 1.630000 seconds
Dec 10, 2021 01:43:55.104 [0x7f294516db38] WARN - Failed to set up two way stream, caught exception: write: protocol is shutdown
Dec 10, 2021 01:44:39.261 [0x7f2943afab38] WARN - Held transaction for too long (../Statistics/StatisticsManager.cpp:252): 1.060000 seconds
Dec 10, 2021 01:49:36.705 [0x7f2944b0bb38] ERROR - [EventSourceClient/pubsub] Retrying in 15 seconds.
Dec 10, 2021 01:54:48.674 [0x7f2944b0bb38] ERROR - [EventSourceClient/pubsub] Retrying in 30 seconds.
Dec 10, 2021 01:55:28.264 [0x7f29439e2b38] WARN - Took too long (0.120000 seconds) to start a transaction on ../Statistics/StatisticsManager.cpp:249
Dec 10, 2021 01:55:29.370 [0x7f29439e2b38] WARN - Transaction that was running was started on thread 0x7f29439e2b38 at ../Statistics/StatisticsManager.cpp:252
Dec 10, 2021 01:56:27.950 [0x7f2943b30b38] WARN - SLOW QUERY: It took 39870.000000 ms to retrieve 8 items.
Dec 10, 2021 01:56:32.218 [0x7f2943b30b38] INFO - It's been 21871 seconds, so we're starting scheduled library update for section 3 (Serien)

After much troubleshooting I’ve discovered I’m having the same problem on my TrueNAS Plex install. Have you figured out how to fix this? At least disabling intro detection seems to have stopped the crashes for now.

@ChuckPa @Joshua

Hi Joshua,
Yes and no. I’ve worked around it by moving PMS into a Docker container in Container Station, where I have the ability to hard-limit CPU and memory. Under normal conditions PMS uses less than 500MB of RAM. I limited the container to 4GB, which should be more than enough for my tiny movie and TV-show collection.
Another benefit of containerizing it is that I get much better monitoring options, so I can see exactly WHEN it happens and correlate that with the PMS logs (a minimal monitoring sketch is below).
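As an example of the kind of monitoring I mean (a generic sketch, not my actual tooling; it assumes psutil is installed and that the server process is literally named "Plex Media Server", as it usually is on Linux), one can simply log the resident memory of the PMS processes once a minute and line the timestamps up with the PMS logs:

# Hedged sketch: periodically log the total RSS of all processes whose
# name contains "Plex Media Server".
import time
import psutil

def pms_rss_mb() -> float:
    total = 0
    for p in psutil.process_iter(["name", "memory_info"]):
        name = p.info["name"] or ""
        if "Plex Media Server" in name:
            total += p.info["memory_info"].rss
    return total / (1024 * 1024)

while True:
    print(time.strftime("%Y-%m-%d %H:%M:%S"), f"PMS RSS: {pms_rss_mb():.0f} MB")
    time.sleep(60)  # one sample per minute; pipe the output to a file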

Now Plex crashes, but my home server is still alive.

But here I need more support. I’ve figured out that PMS always seems to crash at the same point in my library.

The last thing I can see in the log is this:

...
Feb 07, 2022 05:46:13.754 [0x7fbf16cf4b38] DEBUG - The butler generated chapter thumbnails for 0 files
Feb 07, 2022 05:46:13.754 [0x7fbf16cf4b38] DEBUG - Activity: registered new sub-activity 8bdcf6f8-0208-452d-b5d2-17ad04238200 - "ButlerTaskGenerateIntroMarkers" parent: 83170843-2b96-4963-8a83-9d204c73c0ad overall progress: 93.8% (15/16)
Feb 07, 2022 05:46:13.755 [0x7fbf16cf4b38] DEBUG - Activity: updated activity 83170843-2b96-4963-8a83-9d204c73c0ad - completed 93.8% - Butler tasks
Feb 07, 2022 05:46:13.755 [0x7fbf16cf4b38] DEBUG - Butler: Scheduling intro marker creation for: 2733
Feb 07, 2022 05:46:13.756 [0x7fbf16cf4b38] DEBUG - IntroDetector: Running intro detection for [2733] [Game of Thrones: Das Lied von Eis und Feuer] [2]
Feb 07, 2022 05:46:13.757 [0x7fbf16cf4b38] DEBUG - IntroDetector: Initializing for "" (2734)
Feb 07, 2022 05:46:13.776 [0x7fbf16cf4b38] DEBUG - Activity: updated activity 83170843-2b96-4963-8a83-9d204c73c0ad - completed 94.2% - Butler tasks
Feb 07, 2022 05:46:13.776 [0x7fbf16cf4b38] DEBUG - IntroDetector: Initializing for "" (2735)
Feb 07, 2022 05:46:13.799 [0x7fbf16cf4b38] DEBUG - Activity: updated activity 83170843-2b96-4963-8a83-9d204c73c0ad - completed 94.8% - Butler tasks
Feb 07, 2022 05:46:13.799 [0x7fbf16cf4b38] DEBUG - IntroDetector: Initializing for "" (2736)
Feb 07, 2022 05:46:13.823 [0x7fbf16cf4b38] DEBUG - Activity: updated activity 83170843-2b96-4963-8a83-9d204c73c0ad - completed 95.3% - Butler tasks
Feb 07, 2022 05:46:13.824 [0x7fbf16cf4b38] DEBUG - IntroDetector: Initializing for "" (2737)
Feb 07, 2022 05:46:13.833 [0x7fbf16cf4b38] DEBUG - Activity: updated activity 83170843-2b96-4963-8a83-9d204c73c0ad - completed 95.8% - Butler tasks
Feb 07, 2022 05:46:13.833 [0x7fbf16cf4b38] DEBUG - IntroDetector: Initializing for "" (2738)
Feb 07, 2022 05:46:13.842 [0x7fbf16cf4b38] DEBUG - Activity: updated activity 83170843-2b96-4963-8a83-9d204c73c0ad - completed 96.3% - Butler tasks
Feb 07, 2022 05:46:13.842 [0x7fbf16cf4b38] DEBUG - IntroDetector: Initializing for "" (2739)
Feb 07, 2022 05:46:13.850 [0x7fbf16cf4b38] DEBUG - Activity: updated activity 83170843-2b96-4963-8a83-9d204c73c0ad - completed 96.9% - Butler tasks
Feb 07, 2022 05:46:13.850 [0x7fbf16cf4b38] DEBUG - IntroDetector: Initializing for "" (2740)
Feb 07, 2022 05:46:13.858 [0x7fbf16cf4b38] DEBUG - Activity: updated activity 83170843-2b96-4963-8a83-9d204c73c0ad - completed 97.4% - Butler tasks
Feb 07, 2022 05:46:13.858 [0x7fbf16cf4b38] DEBUG - IntroDetector: Initializing for "" (2741)
Feb 07, 2022 05:46:13.867 [0x7fbf16cf4b38] DEBUG - Activity: updated activity 83170843-2b96-4963-8a83-9d204c73c0ad - completed 97.9% - Butler tasks
Feb 07, 2022 05:46:13.867 [0x7fbf16cf4b38] DEBUG - IntroDetector: Initializing for "" (2742)
Feb 07, 2022 05:46:13.913 [0x7fbf16cf4b38] DEBUG - Activity: updated activity 83170843-2b96-4963-8a83-9d204c73c0ad - completed 98.5% - Butler tasks
Feb 07, 2022 05:46:13.913 [0x7fbf16cf4b38] DEBUG - IntroDetector: Initializing for "" (2743)
Feb 07, 2022 05:46:13.923 [0x7fbf16cf4b38] DEBUG - Activity: updated activity 83170843-2b96-4963-8a83-9d204c73c0ad - completed 99.0% - Butler tasks
Feb 07, 2022 05:46:13.923 [0x7fbf16cf4b38] DEBUG - Activity: updated activity 83170843-2b96-4963-8a83-9d204c73c0ad - completed 99.1% - Butler tasks
Feb 07, 2022 05:46:16.614 [0x7fbf17b2cb38] DEBUG - Request: [127.0.0.1:47182 (Loopback)] GET /identity (4 live) Signed-in
Feb 07, 2022 05:46:16.614 [0x7fbf16c9ab38] DEBUG - Completed: [127.0.0.1:47182] 200 GET /identity (4 live) 0ms 398 bytes (pipelined: 1)
Feb 07, 2022 05:46:21.679 [0x7fbf17b2cb38] DEBUG - Request: [127.0.0.1:47222 (Loopback)] GET /identity (4 live) Signed-in
Feb 07, 2022 05:46:21.679 [0x7fbf16c9ab38] DEBUG - Completed: [127.0.0.1:47222] 200 GET /identity (4 live) 0ms 398 bytes (pipelined: 1)
Feb 07, 2022 05:46:26.739 [0x7fbf17b2cb38] DEBUG - Request: [127.0.0.1:47234 (Loopback)] GET /identity (4 live) Signed-in
...

This is the last thing regarding the maintenance job that I can see in the log, always right after this task for “[2733] [Game of Thrones: Das Lied von Eis und Feuer] [2]”.

After that, the PMS main process balloons in RAM usage, from a few hundred MB up and up until it crashes.

Update:
I think I’ve got it.
It really was my GoT series folder, which was a bit messy, with duplicate episodes in different qualities from different sources; maybe one or more of those files are corrupted. I moved the GoT folder out of the Plex library and my troubles are gone.
I’ll re-transcode the files with HandBrake and reimport them (a sketch for pre-checking them for corruption is below). Afterwards we’ll see whether the issue comes back.
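Before reimporting, the files can be sanity-checked for decode errors. A hedged sketch (it assumes ffmpeg is on the PATH; the library path and extension are placeholders): decoding each file to the null muxer with ffmpeg -v error prints only genuine decode errors, so any stderr output marks a suspect file.

# Hedged sketch: flag possibly corrupt video files by fully decoding them.
# Thorough but slow; directory and extension below are placeholders.
import subprocess
from pathlib import Path

def looks_corrupt(path: Path) -> bool:
    # "-v error" limits output to real errors; "-f null -" decodes without writing
    result = subprocess.run(
        ["ffmpeg", "-v", "error", "-i", str(path), "-f", "null", "-"],
        capture_output=True, text=True,
    )
    return result.returncode != 0 or bool(result.stderr.strip())

for f in sorted(Path("/share/Video/GoT").rglob("*.mkv")):  # placeholder path
    if looks_corrupt(f):
        print("suspect:", f)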

The interesting thing, in my opinion: why does the PMS main process develop a memory leak when a transcoder worker runs into issues?!
