Plex server unreachable through account after migration — front-end proxy setup

Server Version#: 1.42.2.10122-3749b980e

I’m hoping someone can help me make sense of a strange reachability issue.

I recently migrated my Plex Media Server from one machine (“Server A”) to another (“Server B”). The goal was to have Server B host all media and run the main PMS process inside a network namespace that routes through a tunnel, while Server A acts as the front-end reverse proxy with the public domain and certificate.

In other words, Server B has no internet-accessible IP address at all; it’s reachable only over a private WireGuard link (10.9.0.x). Server A terminates HTTPS at pms.mydomain.com and proxies traffic over HTTP to PMS on Server B.
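For context, the proxy on Server A looks roughly like this (a minimal sketch; the certificate paths and the 10.9.0.2 upstream address are illustrative, not my exact config):

```nginx
# Server A: terminate TLS publicly, forward plain HTTP to PMS over WireGuard.
server {
    listen 443 ssl;
    server_name pms.mydomain.com;

    ssl_certificate     /etc/letsencrypt/live/pms.mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/pms.mydomain.com/privkey.pem;

    location / {
        proxy_pass http://10.9.0.2:32400;   # PMS on Server B via the tunnel
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        # Plex clients use websockets on some connections
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```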

When I first set everything up, it worked perfectly. Plex recognized the server, it appeared under my account, and clients connected through the domain without any problem. Then, a few days ago, the server abruptly became unreachable from the Plex Web app and from clients — even though:

  • The reverse proxy on Server A still works (I can curl https://pms.mydomain.com/?X-Plex-Token=… from outside and get a valid XML response).

  • Plex Media Server on Server B is running normally, libraries are intact, and it responds locally on port 32400.

  • Both machines can reach plex.tv and pubsub.plex.tv from the command line.

However, Plex.tv now marks the server as “unreachable,” and it never passes the internal reachability check during startup. The logs show “Published Mapping State response was 422” followed by “attempted a reachability check but we’re not yet online.”

I’ve tried:

  • Forcing customConnections="https://pms.mydomain.com" and toggling secureConnections between 0 and 1.

  • Verifying that PublishServerOnPlexOnlineKey="1" and that PlexOnlineToken is valid.

  • Testing connectivity from within the namespace (can reach both plex.tv and pubsub.plex.tv fine).
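For reference, the relevant part of Preferences.xml on Server B currently looks like this (a sketch of the shape only; other attributes omitted, token elided):

```xml
<!-- Preferences.xml on Server B (sketch; values illustrative, token elided) -->
<Preferences
    customConnections="https://pms.mydomain.com"
    secureConnections="1"
    PublishServerOnPlexOnlineKey="1"
    PlexOnlineToken="…" />
```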

What’s puzzling is that if I start PMS directly on Server A (with the proxy pointing to localhost), it registers instantly and appears online in my account. Moving the exact same configuration and data directory to Server B makes it “unreachable” again, even though connectivity is identical from inside the namespace.

It feels as if Plex’s reachability test is somehow failing because the server isn’t bound to an internet-routable IP address, but I can’t confirm that.

Is there a reliable way to run PMS behind a reverse proxy like this, where only the proxy has the public address? Or some method to force Plex to treat the proxied HTTPS endpoint (pms.mydomain.com:443) as the canonical connection for reachability?

Thanks in advance for any insights — this one has me stumped.


This is probably where it’s going sideways on you.

If I may propose a modified implementation?

  1. Public-facing server (A) runs PMS.
  2. Private server (B) contains the media.
  3. WireGuard runs between servers A and B.
  4. NFS-mount the media from B onto A via the WireGuard tunnel.

In this configuration,

  1. The presence of server B is invisible in all regards; it remains anonymous.
  2. Server B is not reachable from the outside.
  3. You can do whatever you want on server A (your proxy) to PMS.
  4. PMS is completely unaware of the NFS/WireGuard connection to the media; it just sees a “directory containing media files.”

I have used this method for both local (LAN) and transcontinental server configurations.

If you look at the layering:

  1. Modem/router ↔ PMS (A) (your proxy on the PMS host only)
  2. PMS ↔ local host media directories
  3. local host media directories ↔ NFS client mount
  4. NFS client mount ↔ WireGuard transport ↔ NFS server
  5. NFS server ↔ Media on server B

The only difference here from a normal PMS server ↔ big NAS box setup is the WireGuard layer. (I have several PMS servers which NFS-mount their media.)
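Concretely, the NFS-over-WireGuard layer is just an ordinary export and mount pinned to the tunnel addresses. A sketch (paths and the 10.9.0.x addresses are illustrative):

```conf
# /etc/exports on server B — export the media only to A's WireGuard address
/srv/media  10.9.0.1(ro,no_subtree_check)

# /etc/fstab on server A — mount over the tunnel; PMS just sees a local directory
10.9.0.2:/srv/media  /mnt/media  nfs  ro,soft,_netdev  0  0
```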

If both of these machines are co-located, is WireGuard really needed?

Thanks for the considered response. I appreciate it.

I had considered doing what you recommend (running PMS on the public-facing server and mounting everything over NFS from Server B), and I’ll probably move to that configuration eventually. The problem is that only Server B has a CPU with Intel Quick Sync Video. That, plus the fact that Server B runs other services I wanted behind the same reverse proxy, led me to route everything through Server A.

And the thing is, this setup worked for months, until last week when it suddenly stopped. Did something change at Plex that means setups like mine, where only the proxy has the public address, no longer work with PMS?

I can’t simply put PMS on Server A because it’s underpowered, so switching to the NFS model is more complicated than just moving the software. And I can’t make Server B the public-facing machine because it hosts sensitive databases for my business.

It’d help me a lot to know whether my current proxy arrangement has been rendered impossible by some recent change at Plex.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.