I am running the RPM version of Plex on openSUSE Linux. The machine has IP address “X” and is one of three servers in my account (all three are in the same location, with some subtle OS differences and on a couple of different IP subnets, but they can communicate with each other).
I want to spin up a new virtual guest of openSUSE Linux and move everything about the Plex instance to that machine (including the IP Address). I want to re-IP the original server with a new address and LEAVE all of the media files AND the Plex install directory and share them out via NFS. I will have the new server mount each of these in the correct locations (all the same as the original) and run Plex from there.
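For reference, here is roughly what I had in mind - just a sketch, with placeholder paths and a made-up subnet rather than my actual layout:

```
# /etc/exports on the original (re-IP'ed) server
/mnt/movies               192.168.1.0/24(rw,sync,no_subtree_check)
/var/lib/plexmediaserver  192.168.1.0/24(rw,sync,no_subtree_check)

# /etc/fstab on the new guest, mounting both at the same locations as before
oldserver:/mnt/movies               /mnt/movies               nfs4  defaults  0 0
oldserver:/var/lib/plexmediaserver  /var/lib/plexmediaserver  nfs4  defaults  0 0
```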
I tried this the other day and it not only failed miserably (the new server simply says “not authorized” when I connect to it with a web browser and redirects me to one of my other servers), but it also seemed to corrupt items within my original Plex directory, and I had to clear a lot of cached info to get the original server back online.
I’m not interested in performing a new install because we’re past the end of most of the episodic shows’ seasons and re-creating the DVR recording schedule won’t be possible as a result.
The instructions in the article are not complete for this type of migration and leave a lot to be desired… So, I’m hoping that someone here might know more details about how to make this work.
Thanks. Your final comment is the one that tells me the most - “don’t expect it to work.”
For the other pieces, I appreciate the thoughts, “but”…
CentOS is dead. The last release of it was six years ago.
Fedora is 100% community-driven and takes little to nothing from the actual Red Hat product it was originally based on. While openSUSE is also a community-driven product, it can and does bring aspects of the corresponding SLE releases into the mix and contributes content back in the way the professional version of SUSE always did. It is inherently more stable and functional than most other Linux derivatives as a result, even though it is on a constant release cycle.
NFS has been an integral part of my Plex environment from the very start - I have been using it for years to share large disks of content to multiple places, including my Plex servers, and have stored various pieces of the media that way from the beginning. This would be the first time I would be attempting to use NFS to access the main Plex directory, and the tip you are referring to lacks explicit, direct detail on how to do it. For example, it tells you to add flags to the mount options “if your server does not support NFSv4”, but it never actually states that NFSv4 is the recommended option. So, is v3 perfectly fine to use as long as you add the mount flags? This is totally unclear.
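To make the question concrete, this is the kind of fstab difference I’m asking about (placeholder host and path, and the options are my guess at what the tip intends):

```
# NFSv4 mount (stateful, locking handled in-protocol) - what I assume is the preferred route
nas:/export/plexdata  /var/lib/plexmediaserver  nfs4  rw,hard  0 0

# NFSv3 mount - the "if your server does not support NFSv4" case; is this fine with the extra flags?
nas:/export/plexdata  /var/lib/plexmediaserver  nfs   rw,hard,vers=3  0 0
```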
Similar to below, I’m also not “moving” the main server directory - I’m only using a reference to content stored on a remote drive as opposed to a local one. While I’m generally not opposed to keeping this on a locally mounted drive / partition, my overall concern is that the disk space requirements are too large for a local virtual disk stored on a much smaller SSD that shares space with other local VHDs.
I’m not “moving” the media as far as Plex is concerned. The folder location will remain as “/mnt/movies” with appropriate subdirectory structure. The difference is that the content will live on a remote drive mounted to that location as opposed to a local drive mounted to that point.
I suppose that I’ll have to do a complete migration and use a secondary VHD to store the Plex software while continuing to use NFS for my media. I’ll just have to write more scripted utilities that keep an eye on things like disk space usage and send me alerts…
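Something along these lines is what I have in mind - a rough sketch of the kind of utility I’d script (mount points, threshold, and the notification hook are placeholders for whatever I actually wire up):

```python
#!/usr/bin/env python3
"""Rough sketch: warn when the Plex or media mounts start filling up."""
import shutil
import subprocess

# Placeholder mount points and threshold - adjust to the real layout.
MOUNTS = ["/var/lib/plexmediaserver", "/mnt/movies"]
THRESHOLD_PCT = 85

def check(path):
    usage = shutil.disk_usage(path)
    pct = usage.used / usage.total * 100
    if pct >= THRESHOLD_PCT:
        return f"{path} is {pct:.0f}% full ({usage.free // 2**30} GiB free)"
    return None

alerts = [msg for msg in (check(p) for p in MOUNTS) if msg]
if alerts:
    # Placeholder notification - swap in email, a Home Automation webhook, etc.
    subprocess.run(["logger", "-t", "plex-disk-watch", "; ".join(alerts)], check=False)
    print("\n".join(alerts))
```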
Come on Black Friday! Papa needs a discounted Emby Premiere lifetime subscription!
The only thing you actually need as a minimum for a migration is the databases.
The metadata can be regenerated at any time, but that obviously takes some time and disk/network I/O to do.
The Preferences.xml file holds the server GUID, friendly name, and server preferences, which may be useful as well, but is not required.
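As a rough map of what that means on disk (Linux package layout as I recall it, so double-check the exact paths on your install):

```
Plex Media Server/
├── Plug-in Support/
│   └── Databases/              <- the actual minimum to carry over
│       ├── com.plexapp.plugins.library.db
│       └── com.plexapp.plugins.library.blobs.db
├── Preferences.xml             <- GUID, friendly name, settings (useful, not required)
├── Media/                      <- generated thumbnails/analysis; regenerable
├── Metadata/                   <- downloaded metadata/artwork; regenerable
└── Cache/                      <- safe to leave behind
```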
Plex does not support multiple server instances with the same GUID, but you can still essentially clone one PMS into another by removing the existing GUID and signing the new server in to your account to manage it, with pretty much everything except shares/users intact.
Storing the Plex data folder hierarchy across the network by any means is unsupported, and while it may work, network issues can quickly cause corruption in the databases. Create/keep backups often if you go that route.
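For example, something along these lines run from cron is enough (a sketch only - the service name matches the Linux packages, and the paths are illustrative):

```python
#!/usr/bin/env python3
"""Sketch: stop Plex, tar up the data directory, start it again."""
import subprocess
import tarfile
import time

# Assumed defaults for the Linux packages - adjust for your install.
DATA_DIR = "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server"
DEST = "/backups/pms-" + time.strftime("%Y%m%d-%H%M%S") + ".tar.gz"

subprocess.run(["systemctl", "stop", "plexmediaserver"], check=True)
try:
    with tarfile.open(DEST, "w:gz") as tar:
        # Archives the whole data dir; trim to Databases/ + Preferences.xml if space is tight.
        tar.add(DATA_DIR, arcname="Plex Media Server")
finally:
    subprocess.run(["systemctl", "start", "plexmediaserver"], check=True)
print("backup written to " + DEST)
```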
Additionally, keep in mind that Plex uses SQLite databases, which are file-based, so using any non-local disk adds extra latency and a performance deficit. This applies to any SQLite-based application, which includes Emby.
As far as media goes, it is fine to use NFS or SMB etc., since most operations are read-only. DVR recording does need write access, though, and any remote storage could also be affected by network issues while reading or writing.
For example, many people use a NAS for media storage only, with the server itself on a different/more powerful device.
If you want the new server to believe it’s the same server, and for clients to not notice that you’ve made this change, you’ll want those.
I’ll be way more emphatic about that!
Unless your NFS implementation is absolutely rock solid, don’t run SQLite on NFS. For Plex that means not putting the Plex Database directories on NFS.
Even if it works, I would expect it to reduce performance significantly.
I suspect you could put the Media and/or Cache directories on NFS. Still not supported, but … I bet it would work.
Test that file locking works with locktests. But still … just … avoid it.
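If you don’t want to build locktests, a quick-and-dirty check along these lines (my sketch, not a substitute for the real tool) at least tells you whether POSIX locks are honored on the mount - run it from two clients at once against the same file:

```python
#!/usr/bin/env python3
"""Crude POSIX lock check for an NFS mount. The second instance should report
the lock as held if locking actually works across clients."""
import fcntl
import sys
import time

# Hypothetical test file on the NFS mount you want to check.
path = sys.argv[1] if len(sys.argv) > 1 else "/mnt/plexdata/.locktest"

with open(path, "w") as fh:
    try:
        fcntl.lockf(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
        print("got exclusive lock; holding for 30s - start the second copy now")
        time.sleep(30)
    except OSError:
        print("lock already held elsewhere (or locking failed) - investigate")
```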
The “how to migrate to a new host” article just doesn’t have the explicit level of detail that I would (personally) want to see. And, when the migration didn’t work after following the doc, there were zero ideas about where to look / what to do as a result.
Ultimately, the part that was a 100% blocker was that I could not access the server after completing the migration (according to the doc) to ensure it was signed in correctly. There was absolutely no way that I could find to force my web connection to that server so that I could force a sign-in and make it part of the entire setup. More concerning was the fact that the server was completely missing from my environment altogether - it no longer showed as a server in the list.
I understand and agree that storing information on a remote drive where write operations occur has risk. And, I can see where NFSv4 may be a suggested route because it’s TCP-based (stateful) and affords at least some opportunity to ensure disk writes occur and are committed. My home network is not trivial… But, it has been built to be high-performing, redundant, and resilient. The machines are all on UPSes and intercommunicate with each other to proactively shut down items that require access across the network when issues are detected. For example: if my NAS detects that the UPS has switched to battery power, it will update a “virtual device” on my Home Automation controller to indicate that it is on battery. Other servers that use the NAS periodically query the state of that virtual device and shut down processes and unmount file systems if a power failure is detected. So, the risk of corruption has been somewhat lessened.
If your main concern is Plex data size, make sure video preview thumbnails are disabled.
If you have such a large library that it uses a lot of space anyway, then you simply need to bite the bullet and go with a dedicated physical SSD for Plex data.
Disk consumption is a primary concern. Moving the server to a virtualized guest will need to be done in a way that allows that guest OS to move around to different virtualization hosts. Smaller disks make that more feasible, and dedicated local storage on the virtualization host makes it impossible. In the past, I have run guest OS machines using a VHD stored on an NFS-attached storage repository (which is required in some guest configuration scenarios) without issue. Creating snapshots of the disk on a schedule is easy enough to do, even if those disks are remote to the host. That might be the way to achieve what I need, but I will need to ensure that the overall OS VHD is kept as small as reasonably possible.
Isn’t that somewhat just a case of semantics, though? If the concern is that NFS can cause corruption, does it really matter whether it’s being used to mount a remote drive to a local folder or the entire virtual disk is stored on a remote drive? If NFS ‘craps out’ with an NFS-mounted share, data on that share could get corrupted. If that happens instead to a share where the entire OS VHD is stored, then the entire VM could become corrupted and unusable. No?
If there’s a complete failure, both scenarios will fail to read or write data. That’s the easy case.
With an OS writing to a virtual hard drive, the OS gets to use local block semantics for locking and reading and writing to ranges within the file. Those are old and reliable and well understood - the exact same as writing to a real hard drive. The virtualization software orders these writes and reliably persists the changed blocks to disk - over NFS or iSCSI or FCoE or fabric-of-the-hour. This layer is also known to be reliable.
With an OS writing directly to an NFS mount, the semantics change. The block semantics change, and locking becomes much more complicated - the client and server have to agree about which ranges are locked or unlocked, and about the order in which they are written.
It’s not impossible to make work, and the link above gives one good suggestion.
But historically, putting databases directly on NFS has been a good way to lose data. Different NFS implementations haven’t played perfectly together.
I wouldn’t hesitate to put a VHD/VMDK on NFS. And then whatever I needed on top of that.
I would hesitate - and test, and look for alternatives - before putting a read/write database directly on NFS.
You can share out your media files / transcode directory via NFS without issue, but the Plex install directory and the accompanying configuration directory should stay on its respective virtual machine.
You can use your existing configuration with the new Plex install if you’re moving your Plex to another server. Just copy the configuration directory to the new server. It doesn’t necessarily matter whether it’s the same IP address or not. As long as it’s pulling from the existing configuration, nothing will be any the wiser.
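On the Linux packages that directory normally lives under /var/lib/plexmediaserver; something like this, with Plex stopped on both ends, is all it takes (the paths and the plex user/group are the packaged defaults - confirm them on your build):

```
# default data location for the Linux packages
/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/

# example copy, run as root with Plex stopped on both machines
rsync -a "/var/lib/plexmediaserver/" newserver:/var/lib/plexmediaserver/
chown -R plex:plex /var/lib/plexmediaserver    # on the new server
```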
In certain circumstances you could put your Plex configuration on NFS, but you cannot share it to two different instances of Plex. You will also run into trouble with the configuration directory over NFS if you plan to use Live TV & DVR. You’ll end up having trouble setting up your tuners correctly.
Ok… So, it seems that I could fairly reliably do the following (rough sketch after the list):
Create a VHD for the OS and place it “wherever”
Create a VHD for the Plex app directory and store it “wherever”
Mount the second VHD on the /var/lib/plexmediaserver directory
Run Plex
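Roughly like this (the device name and filesystem are assumptions - whatever the hypervisor presents the second VHD as):

```
# one-time prep of the second VHD (shows up as /dev/vdb in this example)
mkfs.xfs /dev/vdb

# /etc/fstab on the guest
/dev/vdb  /var/lib/plexmediaserver  xfs  defaults,nofail  0 2

# then mount -a, install the Plex RPM (or drop in the copied data dir), and start the service
```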
This would afford me the ability to keep the Plex installation completely separate from the OS (which offers me a lot of flexibility with how I could manage OS updates in the future, as I could literally pre-build a brand new server, shut down the old, bring up the new, mount the Plex drive, and be back in operation within a few minutes) while also maintaining the flexibility I need for the OS to be virtualized and move freely within the host pool.
My interest in maintaining the IP address is because A) the original server will move into an entirely different role and I use address ranges to denote primary server roles, B) the Android TV client for Plex is one of the worst applications I’ve had the displeasure of using - it’s my client app throughout the house and I’m simply not interested in having to mess with it because it can’t find my server any more, and C) if/when I allow remote access to my servers, I do so by enabling static rules that persist in my firewall, so maintaining the IP for Plex makes sense in my setup.
I have no interest in trying to share the same configuration across multiple Plex servers. That wasn’t something I was asking and hopefully that didn’t come through with any prominence from any of the asks I’ve had. This is entirely about migrating off of one box and onto another for a single server that is part of a multi-server setup.
Just run docker/podman and be done with it. Keep it simple. IMO.
Unless you can elaborate on why you would spin up a VM to run Plex but leave the Plex install on the original server and share it to the new server? This makes no sense. Install Plex on the new server and copy the configuration files from the original server to their new location. Done.
I’m not entirely understanding the logic behind your methods, because there are easier, more effective ways to go about this that are already industry standard and widely accepted, e.g. the Plex Docker container.
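For illustration, roughly what that looks like with the official plexinc/pms-docker image (timezone, claim token, and host paths are placeholders):

```
docker run -d --name plex \
  --network=host \
  -e TZ="America/New_York" \
  -e PLEX_CLAIM="claim-XXXXXXXX" \
  -v /opt/plex/config:/config \
  -v /opt/plex/transcode:/transcode \
  -v /mnt/movies:/data/movies \
  plexinc/pms-docker
```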