In the IT world, when you set up a standalone server you spin up a VM with a C and D drive. The idea is that data related to the server goes on the D drive, so if the system needs to be wiped and reinstalled, the C drive can be rebuilt without losing any of the application data.
Containers continue this theme: the container image is considered ephemeral and may be replaced with a newer version at any time. This strategy requires that data be stored either in a database (such as one running in its own container) or on a mounted volume.
After some time, "container management" systems such as Kubernetes began to appear. This is where everyone who understands the value of containers eventually ends up. Using Kubernetes as an example (as it is the container management system I know best), there is the idea of persistent volumes, which are allocated as needed via a provisioner. In practice this is almost always a network share.
The PMS Docker image includes wording indicating that you use a network share for your configuration at your own risk, and that there are issues around file locking with many network share configurations. Posts about network storage here on the Plex forum note that if you use such a solution, you must ensure that the server itself is on the same network share as the storage solution.
Wanting to use Kubernetes for Plex, where I have all of my other containers (container management systems are pretty awesome), I've seen the issues which occur when you try to use a network share with Plex… and it's pretty tragic. This essentially requires me to spin up a VM just for this one program, which is a bit of a bummer (not to seem ungrateful; Plex is awesome and I'm happy to set up a VM just for it). A container management system has many nice-to-have advantages, such as automatic certificate generation and the ability to easily roll forward to the next version of an image, or roll back if there was an issue.
I'm curious about the internal design of Plex that makes it difficult to keep configuration settings on a network share without corruption or odd timeouts, and whether supporting such a setup is on a roadmap somewhere (or if it is very unlikely to ever happen)?
Plex uses normal operating system calls to open, lock, read, write, and close files. If those are completely reliable, Plex will work fine with a network share.
However …
Many network shares aren't perfectly reliable. A network share that works when copying individual files may not be reliable when tens or hundreds of files are accessed simultaneously.
Many network shares don't fully and correctly implement file and range locking. (See Docker/Windows and the CIFS "nobrl" mount flag, and everything that has been written about SQLite and NFS.) This can lead to terrible performance, or even data corruption.
Most network shares are dramatically slower than local storage.
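To make the locking point above concrete, here is a minimal Python sketch (the file path is made up) of the kind of POSIX byte-range lock that SQLite-backed applications rely on. On a well-behaved local filesystem the lock is actually enforced; some network filesystem configurations accept these calls without enforcing them, which is one way a shared database gets corrupted.

```python
# Sketch: take an exclusive byte-range lock before writing, as a
# database engine would. Uses only the standard library (Unix-only).
import fcntl
import os

path = "/tmp/locking-demo.db"  # hypothetical file standing in for a database

fd = os.open(path, os.O_RDWR | os.O_CREAT | os.O_TRUNC, 0o644)
try:
    # Exclusively lock bytes 0-99; LOCK_NB makes this raise OSError
    # immediately if another process already holds a conflicting lock.
    fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB, 100, 0, os.SEEK_SET)
    os.write(fd, b"critical update")  # safe only while the lock is held
finally:
    fcntl.lockf(fd, fcntl.LOCK_UN, 100, 0, os.SEEK_SET)
    os.close(fd)
```

On a share that ignores range locks (e.g. CIFS mounted with "nobrl"), the `lockf` call can succeed without excluding other writers, so two processes may interleave writes to the same region.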
I typically use something else - like virtualization-owned NFS or iSCSI - to access any shared data, and I present it to guests (or containers!) as a drive.
For those who find this later: I set up a node with local storage and required that Plex run on that node; everything worked perfectly and incredibly fast. I no longer trust NFS at all (I also had issues with Nextcloud). Don't believe anyone who tells you NFS will work without issues and that it has file locking; it's only a matter of time until you lose data.
I've since reinstalled my clusters and set up a disk on each node (all of this is in Hyper-V, so the additional disks were also virtual disks attached to the nodes), and have gone with Longhorn as a storage provider. Longhorn and other similar solutions use the disk on each worker node along with iSCSI to provide PVCs spread out across multiple nodes. Longhorn and others also keep multiple copies of the data, so if you lost a worker node the data wouldn't be lost, making for a much better technique in the Kubernetes world of distributed solutions.
Based on how well this works I suspect a server hosting an iSCSI solution would work just as well. Good luck out there and thanks again to Plex for such an incredibly high quality media server solution.
An additional note if someone reads this and decides to try out Longhorn. I can't speak to other similar solutions, but with Longhorn, if you are running just a couple of worker nodes you'll need to adjust the number of duplicate copies of data in the settings. The default is three, and if you don't have enough worker nodes with a disk being shared, it will be unable to allocate the PVC; this can be lowered. Also be careful you don't accidentally make Longhorn viewable by the wrong people via an ingress.