I am pretty much out of ideas now.
Do you have a firewall in your NAS as well? I don’t know the Syno’s but it might be possible.
If there is no firewall in your NAS, then there must be something else blocking access from external.
If your ISP is not putting you behind an additional NAT, the connection test with canyouseeme.org should be successful.
Side question: when you go there, does it show you your new public IP address? (the one you paid your ISP for)
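One quick offline check for whatever address canyouseeme.org (or your router's WAN status page) reports: if it falls in the RFC 1918 private ranges or the RFC 6598 CGNAT range (100.64.0.0/10), your ISP has you behind an extra NAT and inbound port forwards on your own router can't work. A minimal sketch using Python's `ipaddress` module (the sample addresses below are made up):

```python
# Sketch: classify the IP address an external checker reports back to you.
# RFC 1918 private or RFC 6598 CGNAT means there is another NAT above you.
import ipaddress

CGNAT = ipaddress.ip_network("100.64.0.0/10")  # RFC 6598 shared address space

def classify(ip_str: str) -> str:
    ip = ipaddress.ip_address(ip_str)
    if ip.is_private:
        return "private (RFC 1918) - you are behind another NAT"
    if ip in CGNAT:
        return "CGNAT (RFC 6598) - ISP-level NAT, inbound ports blocked"
    return "public - port forwarding should be reachable"

print(classify("100.72.13.5"))    # sample CGNAT-range address
print(classify("8.8.8.8"))        # a genuinely public address
print(classify("192.168.86.2"))   # sample private LAN address
```

If the reported address classifies as private or CGNAT, no amount of port forwarding on your side will make the server reachable; you would need to ask the ISP for a routable public IP.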
My suspicion at this point is that you have two LANs in your home (192.168.1.x and 192.168.86.x). PMS does not cross subnet boundaries without manual routing, and UDP/SSDP discovery broadcasts will never cross subnet boundaries.
Further, and again if my understanding is correct, flattening the LAN (one subnet) is the solution.
Flattening the LAN can be done multiple ways:
One subnet is 192.168.86.x / 24 (at the router)
One subnet is 192.168.87.x / 24 (at the WiFi)
The NAS however sees both as one LAN. (192.168.86.0 / 23)
This is a technique which can be used when the WiFi will not operate in access point mode. It is still routing.
Observe the bit pattern:
|Octet|Binary|
|---|---|
|85|01010101|
|86|01010110|
|87|01010111|
With a /24 mask, all eight bits of the third octet are network bits, so 86 and 87 are separate networks. With a /23, the last bit of the third octet becomes a host bit; 86 and 87 differ only in that bit, so they fall inside the same network, 192.168.86.0/23.
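The same arithmetic can be checked with Python's `ipaddress` module; a small sketch using the example subnets above:

```python
# Sketch of the /23 vs /24 math: a /24 on the router only contains
# 192.168.86.x, while the /23 given to the NAS spans 86.x and 87.x.
import ipaddress

wired = ipaddress.ip_network("192.168.86.0/24")
wifi  = ipaddress.ip_network("192.168.87.0/24")
nas   = ipaddress.ip_network("192.168.86.0/23")

wifi_client = ipaddress.ip_address("192.168.87.50")

print(wifi_client in wired)  # False: a /24 device cannot see it directly
print(wifi_client in nas)    # True: the /23 covers both third octets

# The /23 splits exactly into the two /24s:
print(list(nas.subnets(prefixlen_diff=1)))
```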
My personal preference would be to reduce this to 192.168.0.x and 192.168.1.x with the same /23 trick, which achieves the same thing, and then add an unmanaged switch through which the WiFi draws its data from the internet gateway. This keeps the WiFi -> NAS traffic within the switch.
If you cannot turn the Google WiFi into a pure access point, then you must do as I've shown above.
I would prefer you use 192.168.0.x for wired and 192.168.1.x for WiFi, but it doesn't matter as long as the numbers align.
Taking this as the basis:
Wired is the 192.168.86.x and retains its current netmask /24 (255.255.255.0)
WiFi becomes 192.168.87.x but retains its current netmask /24 (255.255.255.0)
This keeps the two subnets distinct.
The Synology becomes the one device which can span both subnets.
It has an IP in the 192.168.86.x subnet but has a netmask of /23 (255.255.254.0)
It can now see both 192.168.86.x and 192.168.87.x
To implement and test this (confirm with me that I have your existing LAN correct before changing anything):
1. Change the LAN address of the WiFi to the 192.168.87.x subnet and push new IP addresses to all wireless devices (their DHCP leases must be renewed).
2. All devices on the wired 192.168.86.x LAN remain as they are.
3. Assign the Synology's network adapter a static IP address on the 192.168.86.x LAN with a netmask of 255.255.254.0 (/23).
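As a sanity check on that layout, here is a small sketch (Python's `ipaddress` module, with hypothetical client addresses) of which destinations each device treats as on-link, i.e. reachable without going through a gateway:

```python
# Sketch: each device sends directly to destinations inside its own
# subnet and uses its gateway for everything else. Only the NAS, with
# its /23 mask, treats both the wired and WiFi subnets as on-link.
# Client addresses below are made-up examples.
import ipaddress

def on_link(own_ip: str, prefix: int, dest: str) -> bool:
    """True if dest falls inside this interface's own subnet."""
    iface = ipaddress.ip_interface(f"{own_ip}/{prefix}")
    return ipaddress.ip_address(dest) in iface.network

nas_sees_wired = on_link("192.168.86.2", 23, "192.168.86.50")
nas_sees_wifi  = on_link("192.168.86.2", 23, "192.168.87.50")
pc_sees_wifi   = on_link("192.168.86.50", 24, "192.168.87.50")

print(nas_sees_wired, nas_sees_wifi, pc_sees_wifi)  # True True False
```

The `False` in the last position is the point of the whole exercise: an ordinary /24 wired client still needs routing to reach the WiFi subnet, but the NAS does not.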
How many of these Google WiFi APs do you have? I am guessing multiple, so it's not possible to bridge them with Google's current firmware.
Put your NAS on the ISP's TPLink router and port forward there.
Place the primary Google WiFi Mesh router behind the TPLink. Its WAN IP will be an RFC 1918 address assigned from the TPLink's LAN space.
Make sure the LAN subnets are unique on both routers.
All clients behind the Google WiFi Mesh will use its WAN IP as their source IP as traffic passes through the Google NAT.
This should allow full internal switching and routing on your LAN even though you have cascaded NAT routers. As long as the NAS remains on the TPLink, you should only have to NAT once.
Additional port forwards for any device behind the Google WiFi Mesh would need to either be directly connected to the TPLink or use double NAT port forwarding.
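For illustration only, a toy sketch of what "double NAT port forwarding" means: the packet must be forwarded once at each router, with the TPLink's rule pointing at the Google WiFi's WAN address and the Google WiFi's rule pointing at the final client. Every address below is a made-up example (203.0.113.x is a documentation range), and 32400 is PMS's default port:

```python
# Toy model of chained (double NAT) port forwarding. Each router maps
# an incoming (ip, port) to the next hop. Hypothetical addresses:
# 203.0.113.10 = TPLink WAN, 192.168.0.50 = Google WiFi WAN on the
# TPLink LAN, 192.168.86.20 = the client behind the Google WiFi.
forwards = {
    "tplink": {("203.0.113.10", 32400): ("192.168.0.50", 32400)},
    "google": {("192.168.0.50", 32400): ("192.168.86.20", 32400)},
}

def trace(dest: str, port: int):
    """Follow a forwarded port through each NAT layer in order."""
    hops = [(dest, port)]
    for router in ("tplink", "google"):
        nxt = forwards[router].get(hops[-1])
        if nxt is None:
            break  # no rule on this router: the chain dead-ends here
        hops.append(nxt)
    return hops

print(trace("203.0.113.10", 32400))
```

If either router lacks its rule, the trace stops short, which is exactly why a device behind both NATs needs a forward configured on each.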
Achilles and I have talked through this again.
We thrashed it back and forth both ways.
My methodology is only applicable if you have layer 3 switching, with the caveat that the layer 3 switch uses a subnet mask wider than either subnet (i.e. the /23, not the /24 of each subnet).
That said, Achilles' implementation is best practice and will work with proper port forwarding.
Hit a few roadblocks along the way… but it’s up and working.
Had to remove the Plex Media Server preferences and the .pid file so that I could find the server again on my home network after moving it to the TPLink router.
Then I couldn’t “claim the server”, so I searched around and found @ChuckPa advising another user about a similar issue at Can't claim Server
Now it looks to be all working, fully accessible outside the network.