Thank you for taking the time to spin that up. That was a lot of work.
I greatly appreciate it.
This confines it to the miniupnpd used in the devices.
I can only hope Engineering figures out what is happening.
FYI, still not fixed in 1.15.1.791 
I can also verify this is occurring with Amplifi's implementation of UPnP (which makes sense, as Amplifi is Ubiquiti's consumer brand).
Something that might help narrow down the specific issue: it was not occurring on 1.15 prior to the most recent Amplifi software update (v2.9.5). Perhaps if Plex were to reach out to them, they'd be able to say what changed in the UPnP implementation between the previous and current firmware?
Oddly enough, here's where it gets really weird. The issue was occurring on 1.15.2.793 with my most recent router update, but after a day or so of acting up, without any version changes or anything, it seems to have settled down and isn't pinning a thread anymore.
Not seeing that on my system: still 100% utilisation of one core with 1.15.2.793 and UPnP enabled on my router.
Hi all!
I'm experiencing a very similar issue: the latest Plex Media Server 1.15.2.793, running on Windows 10, constantly uses ~20% CPU when miniupnpd + UPnP is turned on. My router is running the latest OpenWRT as well. I've run into this issue on Linux before, though I can't provide more details because I ended up disabling UPnP on that Linux setup.
Also, when I left only NAT-PMP enabled, CPU utilization dropped from ~25% to ~0%. I hope this issue gets sorted out. I'm happy to provide some traffic captures if that would help track down the root cause.
It is the miniupnpd in the DD-WRT software colliding with PMS.
Engineering is aware, but there is not much they can do about it.
The problem doesn't exist on pfSense or other major firmwares.
Miniupnpd is not responding per spec.
Until engineering figures out a workaround, stop using UPnP (Plug & Pray) with Plex.
Manually forward your ports.
I am not entirely sure it's miniupnpd's fault. There were clearly many versions of Plex working with miniupnpd over the span of many years, and then a Plex update happened that introduced the issue. Maybe the code commits between those two versions should be examined more closely to get a clue as to why this is causing 100% CPU usage.
Also, even if something isn't responding to spec, it shouldn't cause your software to suddenly go to 100% CPU usage. Like all server software, you need to handle bad input gracefully.
The problem here is that even if we set Plex to connect via a specific port that's been forwarded, it still takes up 100% CPU simply because miniupnpd is running on our routers. It forces users to disable UPnP entirely, which is undesirable in many cases. Torrents require it, certain games use it to establish P2P multiplayer connections, etc.
If it were simply a matter of disabling UPnP support in Plex then I'd accept that as a valid workaround, but currently we have to choose between running Plex and running all of our other stuff. If we run both, then we risk sky-high electricity bills, and that's if our Plex servers don't burn our houses down first with the heat from our CPUs.
This bug should be escalated. With any other server software out there in the world, a bug that causes 100% CPU usage due to bad input would be marked as a HUGE regression and potential release blocker.
If it's a matter of testing and reproducibility, there are a bunch of users in this thread, myself included, who I am sure would be willing to test debug builds for you. I can reliably reproduce this issue on all of my Plex servers and I can spin up test VMs at any time. Let us know if we can help, because we're all awaiting fixes for this urgent issue.
Very well put @urbenlegend, I couldn't agree more.
Plex was coexisting quite happily with miniupnpd right up until 1.14.1.5488. A change was obviously made in that release that needs to be examined and/or reverted, rather than telling us to reduce the functionality of our networks and other services by disabling UPnP.
The fault is with PMS and Plex need to fix it.
If I could fix it, I would.
It was forwarded to engineering.
Now everyone waits.
Please don't shoot the messenger anymore and there's no need to keep piling on.
I am stepping back from this topic until I receive notification there has been movement on the issue.
Thanks for the reply @ChuckPa, and thanks again for passing it on to engineering. Any feedback you get from them, or requests for more information, will be eagerly received and acted upon by ourselves.
I'm not clear exactly what "Team Member" signifies (whether you're employed by Plex or are a volunteer moderator/advisor on the forums), but I want to express the following to you in any case:
Bearing in mind your last post on this thread I don't expect a response to the above, and that's fair, but please consider the above and keep us up to date if there's any progress from engineering. I'll continue to test the beta releases as they're issued and report back to this thread with my findings until engineering fix the bug. Hopefully that will save other users the bother of switching between versions to test it, and those that have stayed on 5470 can keep that in place until a working PMS is released.
I am part time with Plex.
I was attempting to express that, because I am also a developer, if I could replicate the issue on my pfSense, or if I had access to one of those routers so I could PCAP it, I would go into the code, track the problem down, and fix it.
I was not attempting to be dismissive in any way. My statement was meant in earnest: "I would fix it if I could", in conjunction with "I have no problem on my pfSense" (of lesser importance, but also a fact which prevents me from finding the fault).
Perhaps this makes things clearer? I'm not dismissing the problem. I would love to find out where it regressed and solve it, but until I can get Ethernet captures (Wireshark) or have a device in hand to debug, I am powerless to do so.
@ChuckPa Is there any possible way for us to dump Wireshark captures for you? Would that be helpful? Also, there are some cheap OpenWRT-capable routers out there (the Archer C7 comes to mind) that are easy to flash. Maybe convince Plex corporate to budget in such a device? It could be useful to have on hand just for testing purposes.
And @ChuckPa, thanks again for all the communication. I just want to make sure that you know that we're in NO WAY attempting to shoot the messenger, and we appreciate the info you've given us so far. @gary_parker and I are just trying to explain the extent to which this bug is affecting our networks and why the workarounds you've given us don't quite work. We're also concerned because the phrase "I've passed this to engineering" is commonly used by corporate entities to dismiss issues that they don't deem important enough to solve right away. It also tells us nothing about what's being done about the issue, when it will be fixed, etc. This is all information that would normally be surfaced by a bug tracker, but you're our only conduit, so we're communicating through you.
If there were a bug tracker, we'd have marked this with "Major" importance. We'd be listing steps to reproduce, dumping debug logs, seeing developers working on it and changing the status, etc. We would know whether it's actively being worked on or not. But we're not getting any of that, which is why we're sending so many messages here.
Plex's bug tracker (which really does exist) is filled with both bugs from releases and problems found while developing new things.
Everything is so interwoven that it's not possible to differentiate which is public and which isn't.
Having "publicly visible" bugs and "internal bugs" is, by its nature, problematic.
We've wrestled with it internally and have not come up with anything better than how we do it here.
To give you an indication, invisible to you, we have tags we attach to threads. One simple query of the forum and everything new / hot / unreported / whatever the query might be shows up in the result.
This thread is so tagged.
Regarding how best to capture, I can provide that here.
In Wireshark, once the capture is running, you can scroll down and see all the traffic to/from the modem.
When you see the SSDP packets, click them to expose their text in the window below. See what's in the packets.
Now go back to the filter bar and apply an SSDP filter (start typing, then select ssdp) and add it.
The display updates to show only SSDP traffic to your router's IP.
Every packet whose Source is your computer is what you want to look at.
It's impossible for me to say what you'll see (given I can't reproduce any error), but something should stand out, most likely either by repetition or by some overt error message/code.
If you've got something in there which is clearly wrong or repeating, SAVE the file as a .pcapng.
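For anyone who prefers watching this from a terminal rather than the Wireshark GUI, here is a rough equivalent as a small Python sketch. It is only a sketch, assuming the scapy package is installed and the script is run with capture privileges; it simply prints every datagram on the SSDP port (UDP 1900) so you can eyeball the NOTIFY / 200 OK traffic before saving a proper .pcapng in Wireshark.

```python
# Minimal SSDP sniffer (sketch): prints one line per UDP packet on port 1900.
# Assumes scapy is installed (pip install scapy) and root/admin capture rights.
from scapy.all import IP, UDP, Raw, sniff

def show_ssdp(pkt):
    if IP in pkt and UDP in pkt and Raw in pkt:
        text = pkt[Raw].load.decode(errors="replace")
        first_line = text.splitlines()[0] if text else ""
        # NOTIFY, M-SEARCH and "HTTP/1.1 200 OK" are the shapes you should see.
        print(f"{pkt[IP].src} -> {pkt[IP].dst}: {first_line}")

# Capture until interrupted; the BPF filter matches only the SSDP port.
sniff(filter="udp port 1900", prn=show_ssdp, store=False)
```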
@ChuckPa, just did what you asked:
There are SSDP messages going through the network but they're not continuously hammering by any means. Instead they come in bursts of 10 or so packets, then pause for a while (a good 20-30 seconds), and then there's another burst. I even let the capture go on for a while after Plex started pegging the CPU to see if there was a flood of SSDP messages, but that wasn't the case. All the packets show either HTTP/1.1 200 OK or NOTIFY * HTTP/1.1. This seems like normal traffic to me, but then again I haven't thoroughly examined each packet yet as I am not really sure what to look for. 192.168.1.1 is my router and 192.168.1.212 is my Plex server.
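For reference, NOTIFY * HTTP/1.1 and HTTP/1.1 200 OK are the two normal shapes of SSDP traffic: unsolicited announcements, and replies to an M-SEARCH discovery. If anyone wants to poke at it outside of Wireshark, here is a minimal sketch of such an M-SEARCH (standard library only; 239.255.255.250:1900 is the multicast address/port defined by the UPnP spec) that should elicit the same 200 OK replies from the router:

```python
# Sketch only: send one SSDP M-SEARCH and print whatever answers within 3 s.
import socket

SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900
msearch = (
    "M-SEARCH * HTTP/1.1\r\n"
    f"HOST: {SSDP_ADDR}:{SSDP_PORT}\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: ssdp:all\r\n"
    "\r\n"
).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(msearch, (SSDP_ADDR, SSDP_PORT))

try:
    while True:
        data, (src, _) = sock.recvfrom(65507)
        # Each reply should start with "HTTP/1.1 200 OK", like the packets above.
        print(f"--- reply from {src} ---")
        print(data.decode(errors="replace"))
except socket.timeout:
    pass
```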
Thank you. That tells me a lot.
The first part was to determine if it was hammering the LAN trying to do something or not.
With this info, it points to something internal to PMS.
If anyone else can replicate this same "burst then sleep" pattern (which is the correct behavior), that would be great.
Also, I would like to ask you to repeat this on the current 1.15.2.793. By doing so, it can be shown as "current status" and not a holdover from 1.14.1.
I am on 1.15.2.793. I've been keeping up to date with all the latest Plex beta builds.
I've just run the same procedure here, but with the difference that I'm also observing SSDP IPv6 traffic as my LAN is fully IPv6 enabled. Plex has IPv6 disabled in the Network section, however.
I started PMS and, after an initial burst of processor activity from a number of Plex processes, I saw the Plex Media Server process go up to around 100% CPU utilisation. I stopped the packet capture (and PMS) after leaving it in this state for a few minutes and examined the output in Wireshark.
I can see the router doing SSDP NOTIFYs and OKs. I see the server IP that PMS is running on doing multicast SSDP searches every now and again (obviously I can't easily tell in Wireshark which process is generating this traffic, but I've stopped as many services on the server as I can) and receiving a few OKs from my Hue Bridge. There's also quite a bit of multicast SSDP traffic from assorted Sony wireless speakers around the house.
I then uninstalled PMS 1.15.2.793 and reinstalled 1.14.0.5470 and followed the same steps. There's no discernible difference in the traffic patterns I'm seeing, but the high CPU utilisation is not observed.
FWIW, I also ran up a 1.15.2.793 server on my MacBook Pro and observed the same behaviour when restarting miniupnpd with UPnP enabled. Regressing PMS on the MacBook Pro to 1.14.0.5470 and running miniupnpd with UPnP enabled did not produce the high CPU behaviour in the Plex Media Server process. Wireshark dumps during this testing showed similar results: a mix of multicast discovery requests, NOTIFYs and OKs.
FYI, your colleague @sa2000 was investigating the issue as presented on macOS in this thread and seems to have come to the same conclusion: there's nothing obviously wrong in the network traffic.
Plex doesn't yet handle IPv6, so that's an interesting development.
I have IPv4 only and an IPv4-only ISP
I have the Hue, my Onkyo, and several other devices. Those all come up ok.
Since Plex doesn't yet speak IPv6, let me ask the simplistic question: could an IPv6-only device be the cause?