Memory leaks on 1.15.6 running on a QNAP, Plex stops responding after it has consumed all my available RAM (32GB)

Server Version#: 1.15.6.1079
Plex Media Server Logs_2019-05-23_08-07-54.zip (3.8 MB)

NAS: QNAP TVS-1282T (32GB)

I've been having some stability issues the last few days and have had to reboot the whole NAS to make Plex reachable again (usually just stopping the app and starting it again works). I tried stopping the app now, after it had become unreachable, and noticed that the Plex process is using almost all the RAM on the NAS, see screenshots. This was after I stopped Plex, prior to the reboot. Even with the Plex app stopped there are several processes eating a lot of RAM.
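In case it helps with tracking this down, here is a minimal sketch that logs the resident memory of every Plex-related process once a minute, so the growth can be correlated with the crash times. It assumes Python 3 plus the third-party psutil package are available on the NAS; the interval and the name match are just illustrative.

```python
import time

import psutil  # third-party: pip install psutil


def log_plex_memory(interval=60):
    """Print the RSS of every process whose name contains 'plex', once per interval."""
    while True:
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        total = 0
        for proc in psutil.process_iter(["name", "memory_info"]):
            name = proc.info["name"] or ""
            mem = proc.info["memory_info"]
            if "plex" in name.lower() and mem is not None:
                total += mem.rss
                print(f"{stamp}  {name:<40} {mem.rss / 1024**2:8.1f} MiB")
        print(f"{stamp}  TOTAL {total / 1024**2:.1f} MiB")
        time.sleep(interval)


if __name__ == "__main__":
    log_plex_memory()
```

Redirecting the output to a file and comparing the totals against the times Plex went unresponsive should show whether the RAM climbs steadily or jumps at a specific event.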

The tricky part is noticing when it happens; both Tautulli and UptimeRobot fail to report downtime since Plex is still listening on port 32400. Any help would be great 🙂
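Since a hung server can still accept connections on 32400, a port check alone isn't enough; polling the /identity endpoint and requiring an actual answer within a few seconds catches the hung state. A minimal sketch, assuming Python 3; the host address and timeout are placeholders, and the machineIdentifier check simply looks for the field the identity response normally contains.

```python
import urllib.error
import urllib.request

PLEX_URL = "http://192.168.0.105:32400/identity"  # adjust to your server


def plex_is_healthy(url=PLEX_URL, timeout=5):
    """Return True only if Plex answers the /identity request within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200 and b"machineIdentifier" in resp.read()
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    print("healthy" if plex_is_healthy() else "NOT responding")
```

Run from cron (or used as a heartbeat for an external monitor), this would flag the "listening but not answering" state that a plain port check misses.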

These are screenshots from the NAS when Plex is not responding.

For comparison, this is a screenshot taken right after the NAS was rebooted (notice the difference in memory consumed by Plex).

New crash just a minute ago. This time the RAM usage was normal, and I could restart Plex just by stopping the app in the GUI and starting it again (logs attached).

The last few weeks I've been lucky if the server has stayed up for 48 hours without crashing; it's getting kind of frustrating.
Plex Media Server Logs_2019-05-24_06-58-47.zip (4.4 MB)

And another crash, I just noticed the server was down 🙁 No RAM issue this time either. I had to reboot the whole NAS; stopping/starting Plex did not help.

Logs:
Plex Media Server Logs_2019-05-26_13-38-09.zip (3.8 MB)

You have network issues.

From the log:

May 26, 2019 13:27:56.161 [0x7fbd8c5bc700] DEBUG - NetworkInterface: Notified of network changed (force=0)
May 26, 2019 13:27:56.161 [0x7fbd8c5bc700] DEBUG - Network change notification but nothing changed.
May 26, 2019 13:27:56.161 [0x7fbd8c5bc700] DEBUG - NetworkInterface: received Netlink message len=1312, type=RTM_NEWLINK, flags=0x0
May 26, 2019 13:27:56.161 [0x7fbd8c5bc700] DEBUG - NetworkInterface: Netlink information message family=0, type=1, index=24, flags=0x1003, change=0x0
May 26, 2019 13:27:56.161 [0x7fbd8c5bc700] DEBUG - Network change.

Above is repeated over and over…
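To see how often that block recurs, something like the following counts the "Network change" notifications per minute in a Plex Media Server.log. The log path is an assumption, and the timestamp pattern is taken from the lines quoted above.

```python
import re
from collections import Counter

LOG_PATH = "Plex Media Server.log"  # adjust to wherever the extracted log lives

# Timestamps look like: "May 26, 2019 13:27:56.161" (see the excerpt above);
# the capture group keeps everything up to the minute.
STAMP = re.compile(r"^([A-Z][a-z]{2} \d{1,2}, \d{4} \d{2}:\d{2}):\d{2}\.\d+")


def network_changes_per_minute(path=LOG_PATH):
    """Count how many 'Network change' debug lines occur in each minute of the log."""
    counts = Counter()
    with open(path, errors="replace") as fh:
        for line in fh:
            if "Network change" not in line:
                continue
            match = STAMP.match(line)
            if match:
                counts[match.group(1)] += 1
    return counts


if __name__ == "__main__":
    # Lexical sort is fine as long as the log covers a single day.
    for minute, count in sorted(network_changes_per_minute().items()):
        print(f"{minute}  {count} notifications")
```

A handful of these is normal on most systems; a constant stream would support the flapping-interface theory above.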

  • Are you using a static ip address?
  • Have you set your Preferred network interface in PMS Settings/Network ?

Thank you for the reply. Yes, I am running a static IP (192.168.0.105), but I'm also running a lot of Docker stuff on the NAS; could that screw things up? The preferred network interface was set to "any", but I tried setting it to br0 now. Would that explain the huge RAM leak in the first logs too? 🙂
[screenshot: network]

Sadly I have no idea what can happen to a transcode session if the network is lost, but it's possible…

All I can suggest for now is that you watch out for network issues, since I suspect they are the origin of your problems.

Alright, thank you again 🙂 I'll update this thread in a while if this results in more stability 🙂


I upgraded to 1.15.8 a few days ago and it's still crashing several times a day. I tried setting the NIC to a dedicated interface like you suggested; that didn't help (I think it made it even worse). I tried setting it back to "any" and it crashed again. Three crashes just today.

I have another NAS I could migrate all the container and VPN stuff to, so that this NAS only runs Plex. Would that be an idea? If you could skim through the attached logs, that would be great.
Plex Media Server Logs_2019-06-03_22-00-13.zip (4.7 MB)

I'd rather see a screenshot of your network settings in QNAP. Do note that I'm about to go on a short vacation, but I'm sure @ChuckPa can assist further here.

I would like to ask:

  1. Which adapter is the system’s default?
  2. Is DLNA enabled?

Thanks to both of you, I really appreciate it! DLNA is not enabled.

Screenshot from the Plex network settings:

From the QNAP network settings:



No idea why the Thunderbolt adapter is the default; I have never connected anything Thunderbolt to the NAS.

That's a complex config. It also looks like it could use a serious cleanup.

VS 2 definitely looks out of place in this config.

Below is mine: 18 virtual adapters in the VMs, and 2 Container Station subnets shared between 3 containers.

Working back through the configuration,

If PMS is running as the native app (not in a container or VM), the adapter to use is br0, which is also the interface that carries the machine's default gateway.
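A quick way to double-check which local address actually carries the default route (and therefore which adapter PMS should be bound to) is the classic UDP-connect trick. A minimal sketch, assuming Python 3 on the NAS; 8.8.8.8 is just an arbitrary external address and no packet is actually sent.

```python
import socket


def default_route_ip():
    """Return the local IP the default route would use (no traffic is transmitted)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # connecting a UDP socket only selects a route
        return s.getsockname()[0]
    finally:
        s.close()


if __name__ == "__main__":
    # On this setup you would expect 192.168.0.105, i.e. the address sitting on br0.
    print(default_route_ip())
```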

Yeah, that's how it runs here too (br0 and the native app). It's so unstable I've had to schedule a reboot of the NAS every night at 05:00, but I still get a crash or three during the following day/night.
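As an alternative to the blind 05:00 reboot, a watchdog along these lines would restart Plex only when it actually stops answering. This is a rough sketch: the health check mirrors the /identity idea above, and the restart command is purely a placeholder, since the real command to restart the Plex package on QNAP depends on the install.

```python
import subprocess
import time
import urllib.error
import urllib.request

PLEX_URL = "http://192.168.0.105:32400/identity"  # adjust to your server
# Placeholder only: replace with whatever actually restarts Plex on your NAS.
RESTART_CMD = ["/bin/echo", "restart Plex here"]
CHECK_EVERY = 120            # seconds between health checks
FAILS_BEFORE_RESTART = 3     # tolerate a couple of slow responses


def healthy():
    """True if Plex answers the identity request within 5 seconds."""
    try:
        with urllib.request.urlopen(PLEX_URL, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


def main():
    fails = 0
    while True:
        if healthy():
            fails = 0
        else:
            fails += 1
            print(f"health check failed ({fails}/{FAILS_BEFORE_RESTART})")
            if fails >= FAILS_BEFORE_RESTART:
                subprocess.run(RESTART_CMD, check=False)
                fails = 0
        time.sleep(CHECK_EVERY)


if __name__ == "__main__":
    main()
```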

Yeah, a cleanup would probably be a good idea. I'll see if I can do it today 🙂

Once you get it cleaned up to look a bit more like mine (the disconnected adapters shouldn't be showing like they are, etc.), you'll have a nice clean path coming out from the bond adapter (br0) -> vSwitch -> the physical adapter.

Thanks, I'll try that once the kid and the missus have left the house in a couple of hours 🙂

So I deleted both Container Station and QVPN and reinstalled them after a reboot, and cleaned up the virtual switches a bit. I think I'm going to move the VPN from the NAS to my router. Looks cleaner now? I also noticed that the br0 interface is gone (probably an old bridged adapter from back when I had teamed two cards) and Plex now reports eth5.

Hopefully this will help a little bit!


So are you still having the issue post-cleanup, or not?

Another suggestion, if you are still having the issue, would be to move your network connection to a 1GbE port (Adapter 1) and see if that resolves it. I would also get rid of the "auto" setting for the default gateway, since you only have one path to your network anyway.

It could be a weird driver glitch related to the “smart offload” function of the 10GbE card.

So far it's looking good! A bit early to say for sure how good, but not a single crash yet.
I'll keep monitoring and post here when some more time has passed 😀 I tried deleting the Thunderbolt stuff from the vSwitch, but that's not possible. 1 Gbit is not a good option for me; my infrastructure is 10 Gbit all over and I move a lot of data internally 🙂