Plex binding to random UDP ports

Server Version#: 1.21.3.4021-5a0a3e4b2
Ubuntu 20.04 LTS

I’ve noticed recently that PMS is binding to random UDP ports in the 30000-60000 range, which seem to be either making outbound broadcast requests or being handed to players as a port for playback - but these ports aren’t documented on the website and I can’t see a way of restricting them to a range I could allow through my firewall. As a result it’s regularly triggering alerts on my network intrusion detection, which is very annoying. This doesn’t seem to be a player issue, as the traffic involves multiple network devices (e.g. my router, Sonos devices) that have never been connected to Plex.

Happy to provide logs etc if it helps, has anyone else seen this behaviour recently?

That’s the nature of the IP stack.

Outbound ports are selected at random by the TCP/IP & UDP/IP driver.

If you want to bind a source port, you must make the specific request when the socket is opened and the addr/port bound to that socket.

SRC port = don’t care
DEST port = Do Care
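
As a hedged illustration of both points (the address and port numbers below are made-up examples, nothing Plex-specific): the “random” source ports come out of the kernel’s ephemeral range, and a client only gets a fixed source port if it asks for one when the socket is opened.

sysctl net.ipv4.ip_local_port_range
# typically prints: net.ipv4.ip_local_port_range = 32768 60999
echo hello | nc -u -w1 -p 40000 192.168.1.50 1900
# -p pins the UDP source port to 40000; otherwise the kernel picks one from the range above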

Thanks for responding, but that’s an overly simplistic way to describe this behaviour and it sort of ignores the problem. People don’t usually explicitly allow inbound traffic that corresponds to outbound requests; iptables treats it as related or established. That is not what’s happening here. Plex is binding to a random port (well actually, lots of them), which I assume is making outbound requests to devices; those devices then try to talk back to Plex on that port, but can’t, because the traffic isn’t seen as related to any outbound request and therefore doesn’t get through iptables. I can’t account for the traffic explicitly as there’s no way for me to choose this port. I’ve been running Plex on this server for years and the behaviour only started recently.
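
To make that concrete, this is the usual stateful pattern I’m describing; a minimal sketch (the chain layout and a default DROP policy are assumptions, not taken from this thread):

# allow replies to traffic this host initiated
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# replies that conntrack can't tie to an outbound flow (e.g. unicast answers to a
# multicast/broadcast probe) don't match this rule and fall through to the chain policy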

Allowing inbound REPLIES to outbound SEND is the norm.

All of Plex’s OFF-LAN traffic is TCP. Nothing is UDP…

The only thing required for Remote Access is inbound 32400/TCP, port-forwarded from some user-assigned or random UPnP-assigned TCP port.
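
For anyone following along, the server-side rule for Remote Access really is just that one TCP port; a minimal sketch, assuming ufw on Ubuntu (or plain iptables):

sudo ufw allow 32400/tcp
# or equivalently:
sudo iptables -A INPUT -p tcp --dport 32400 -m conntrack --ctstate NEW -j ACCEPT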

It does sound like you changed something inadvertently. Those are a PITA to find and 2 seconds to fix (been there, done that myself).

Can you show me, in the netstat output, what you’re seeing?

Output of netstat -anpu | grep -i plex is attached.
plex.txt (987 Bytes)

The ports keep changing, but taking the example of 58639, that’s caused ~4,000 dropped packets in the last hour and a half; earlier today it was using port 45276, which caused 135,344 dropped packets over the course of the day (bearing in mind Plex wouldn’t have been used for most of the day!).

Those are the LAN ports.

They are PMS listening for the on-LAN players to talk to it.
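
For reference, the LAN discovery ports Plex documents are fixed, so those at least can be allowed explicitly; a hedged sketch assuming ufw (this does not cover the extra high ports being discussed here):

# GDM client discovery ports documented by Plex
sudo ufw allow proto udp to any port 32410,32412,32413,32414
# DLNA, if enabled
sudo ufw allow 1900/udp
sudo ufw allow 32469/tcp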

Some of those ports are used by Plex Media Server to discover clients, if GDM is enabled. As a test, head to Settings -> Network and uncheck “Enable local network discovery (GDM)”. Then restart Plex Media Server and check your netstat results again. This should reduce the number of UDP port bindings (on my server it leaves two, 1901 and a high port) and some of the traffic.
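
A quick way to re-check after toggling that setting (the unit name below is the stock one from the official Ubuntu package; adjust if yours differs):

sudo systemctl restart plexmediaserver
sudo ss -lunp | grep -i plex      # UDP listeners owned by Plex, with PIDs
# or the earlier netstat form:
sudo netstat -anpu | grep -i plex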

The remaining high port will still generate outbound multicast packets (to 239.255.255.250:1900) every ten seconds or so; this cannot be disabled in Plex. It is used for discovery of other network resources. The best you can do is firewall this traffic on the server itself. However, it is multicast traffic and shouldn’t be crossing network boundaries without some help.
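
If you want to watch that traffic before firewalling it, something like the following should show the periodic SSDP packets (the interface name enp3s0 is just an example; use yours):

sudo tcpdump -ni enp3s0 'udp and dst 239.255.255.250 and dst port 1900'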

Thanks @philipsw, that matches the behaviour I’m seeing and is pretty much exactly what I’ve done to date. GDM and DLNA were already disabled, which in my case still leaves 3 high ports bound, so I ended up adding a rule to drop this traffic from Plex:

num   pkts bytes target     prot opt in     out     source               destination
1       26  3354 DROP       udp  --  *      enp3s0  0.0.0.0/0            239.255.255.250      udp dpt:1900 owner UID match 119 /* Prevent UDP broadcast from Plex (can't be disabled) */
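
For anyone wanting to reproduce that rule, the command below is roughly what produces the listing above; the UID (119) is just whatever the plex user happens to be on this box, so check it first:

id -u plex    # confirm the UID the rule should match (119 here)
sudo iptables -I OUTPUT -o enp3s0 -p udp -d 239.255.255.250 --dport 1900 \
  -m owner --uid-owner 119 \
  -m comment --comment "Prevent UDP broadcast from Plex (can't be disabled)" \
  -j DROP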

I was kind of hoping I’d be able to either disable the broadcast (as it’s not really benefiting anything if the responses aren’t being returned), or at least restrict the binding to a range of e.g. 1,000 ports that I could then allow through the firewall; right now it seems to be anything 30,000+. But it’s good to know I’m not alone in seeing this behaviour and that there’s no way to disable it, so I’ll stop looking for one!
