Roku & Android phone showing connected as 'Indirect' on local network

Server Version#: 1.26.2.5797
Player Version#:

I’m troubleshooting an issue in Plex where streams to my phone and Roku show up in the Plex Dashboard as ‘Indirect’ (and are therefore transcoded to a lower quality), while streams to my gaming PC, on the same network, show as ‘Local’.

I’m running Plex as a deployment in Rancher, using the linuxserver.io container, with a Traefik ingress. I also have a CNAME in PiHole aliasing ‘plex.jarvis.home’ to the DNS record ‘traefik.jarvis.home’, which points to 192.168.2.160. Rancher has an implementation-specific ingress directing plex.jarvis.home to the target service ‘plex’ on port 32400.
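For reference, the ingress described above might look roughly like this as a standard Kubernetes Ingress manifest. This is an illustrative sketch only; the resource name and ingress class are assumptions, and Rancher may generate something slightly different:

```yaml
# Sketch of the described routing: plex.jarvis.home -> service 'plex':32400.
# Resource names here are illustrative, not taken from the actual cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: plex
spec:
  rules:
    - host: plex.jarvis.home
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: plex
                port:
                  number: 32400
```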

The official Plex container documentation on Docker Hub says that if I’m running networking in Bridge mode (which I believe I have to for k3s), I need to add an environment variable, ADVERTISE_IP="http://&lt;hostIPAddress&gt;:32400/", and then forward that port on my router. Strictly speaking this is for accessing Plex from outside the network, since “…Plex Media Server is essentially behind two routers and it cannot automatically setup port forwarding on its own,” but the same principle applies here to get around the double-NAT issue.
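In a k3s Deployment manifest, that environment variable would be set on the container spec, something like the sketch below. This assumes the linuxserver.io container honors ADVERTISE_IP the same way the official image does, and the IP shown is a placeholder:

```yaml
# Sketch: setting ADVERTISE_IP on the Plex container in a Deployment.
# The IP and resource names are placeholders for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: plex
spec:
  selector:
    matchLabels:
      app: plex
  template:
    metadata:
      labels:
        app: plex
    spec:
      containers:
        - name: plex
          image: linuxserver/plex
          env:
            - name: ADVERTISE_IP
              value: "http://192.168.2.160:32400/"
          ports:
            - containerPort: 32400
```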

My question is: with my Traefik VIP set at 192.168.2.160, is that the IP I need to specify in the environment variable? Or do I need another service or something?

The host IP address in “ADVERTISE_IP” should be set to the IP address of the Docker host. This is then published, in a modified form, as the URL to which local clients should connect. What actually ends up getting published is a *.plex.direct URL which resolves to that IP address. More on this below.

If you need to specify additional custom URLs you can add them as “Custom server access URLs” in Settings → [server] → Network (Show Advanced). You can use IP addresses or FQDNs as the host portion of these URLs and Plex will do the right thing; just be sure to add a custom certificate if you use an FQDN.

Regarding the *.plex.direct URL I mentioned above, this may actually be contributing to your issue. For your local network connection, the plex.direct URL ends up looking something like this:

https://192-168-2-100.abcdefg1234567890.plex.direct

The first label is the IP address you advertise, with hyphens in place of periods, followed by your server’s certificate UUID, then plex.direct. This FQDN must be resolvable to a local IP address by your DNS server. If it is not, clients will fall back to attempting a remote-access connection; and if that doesn’t work, an indirect connection. This may be what you’re seeing.
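The hostname construction above is simple enough to sketch in a few lines of Python. This is just an illustration of the naming scheme, not Plex’s actual code, and the certificate hash used here is a placeholder:

```python
def plex_direct_hostname(ip: str, cert_hash: str) -> str:
    """Build the *.plex.direct FQDN Plex publishes for an advertised IP.

    The IP's periods are replaced with hyphens, then the server's
    certificate hash and the plex.direct suffix are appended.
    """
    return f"{ip.replace('.', '-')}.{cert_hash}.plex.direct"

# Placeholder hash, matching the example URL above:
print(plex_direct_hostname("192.168.2.100", "abcdefg1234567890"))
# -> 192-168-2-100.abcdefg1234567890.plex.direct
```

You can use this to predict the exact name your clients will try to resolve, then check that your DNS server answers it with the local address.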

If your DNS server has DNS rebinding protection enabled, see if you can add an exception for *.plex.direct. If you can’t add an exception, try disabling it altogether. For example, in my unbound configuration, I have the line:

private-domain: "plex.direct"

Setting ADVERTISE_IP to the Rancher VM host address ending in .122 results in no connection to the server. Setting it to the reverse proxy IP ending in .160 allows a connection, but it is still indirect.

Adding the .160 address to Custom server access URLs doesn’t change anything.

I use PiHole for local DNS, which doesn’t have a DNS rebinding protection feature.

Am I missing a certificate or something? Would setting up something like Let’s Encrypt fix the certificate issue? I already have cert-manager running in my cluster, should that be handing out certificates?

Conclusion: After fighting with this for a week, I decided to move to Jellyfin. Plex has been good to me, but I don’t need the live TV features Plex offers, plus Jellyfin is free and ultimately worked better for my environment, which is a k3s/Rancher deployment.
