Roadmap to allow network share for configuration data?

In the IT world, when you set up a standalone server you spin up a VM with a C and a D drive. The idea is that data related to the server goes to the D drive, so if the system needs to be wiped and reinstalled, the C drive can be rebuilt without losing any of the application data.

Containers continue this theme in that the container image is considered ephemeral and could be replaced with a newer version at any time. This strategy requires that data be stored either in a database (such as one running in its own container) or on a mounted volume.

After some time, 'container management' systems such as Kubernetes began to appear; these are the infrastructures where everyone who understands the value of containers ends up. Using Kubernetes as an example (it's the container management system I know best), there is the idea of persistent volumes, which are allocated as needed via a provisioner. This is typically a network share.

The PMS docker image has wording indicating that you use a network share for your configuration at your own risk, and that there are file-locking issues with many network share configurations. Pages about network storage here on the Plex forum note that if you use such a solution, you must ensure the server itself is on the same network as the storage solution.

Wanting to use Kubernetes for Plex, where I have all of my other containers (container management systems are pretty awesome), I've seen the issues that occur when you try to use a network share with Plex… and it's pretty tragic. This essentially requires me to spin up a VM just for this one program, which is a bit of a bummer (not to seem ungrateful; Plex is awesome and I'm happy to set up a VM just for it). A container management system has many nice advantages, such as automatic certificate generation and the ability to easily roll up to the next version of an image, or roll back if there was an issue.

I'm curious about the internal design of Plex that makes it difficult to support a network share for configuration settings without corruption or odd timeouts, and whether supporting such a setup is on a roadmap somewhere (or if it is very unlikely to ever happen).

The problem with metadata on a network share resides solely with NFS / CIFS.

Neither of these protocols supports the absolute atomic file locking required for a SQLite database.
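As a minimal illustration (a hypothetical Python sketch, not how Plex itself works internally), this is the kind of writer exclusion SQLite enforces through file locks. On a share where those locks aren't atomic, the second writer would not be rejected and the database can silently corrupt:

```python
import os
import sqlite3
import tempfile

# Hypothetical demo database path; Plex's real database lives under
# its "Plex Media Server" configuration directory.
path = os.path.join(tempfile.mkdtemp(), "library.db")

# timeout=0: fail immediately instead of retrying;
# isolation_level=None: autocommit mode, so we can issue explicit BEGINs.
a = sqlite3.connect(path, timeout=0, isolation_level=None)
b = sqlite3.connect(path, timeout=0, isolation_level=None)

a.execute("CREATE TABLE media (id INTEGER PRIMARY KEY, title TEXT)")
a.execute("BEGIN IMMEDIATE")  # takes the write lock on the database file
a.execute("INSERT INTO media (title) VALUES ('demo')")

try:
    b.execute("BEGIN IMMEDIATE")  # second writer must be rejected
except sqlite3.OperationalError as e:
    print("second writer blocked:", e)  # "database is locked"

a.execute("COMMIT")  # releases the lock; other writers may now proceed
```

The rejection of the second writer only works if the underlying filesystem honors the lock; that guarantee is exactly what NFS/CIFS implementations often fail to provide.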

There is nothing preventing the use of block protocols such as iSCSI, but that does defeat the purpose.


Plex uses normal operating system calls to open, lock, read, write, and close files. If those are completely reliable, Plex will work fine with a network share.

However …

Many network shares aren't perfectly reliable. A network share that works when copying individual files may not be reliable when tens or hundreds of files are accessed simultaneously.

Many network shares don't fully & correctly implement file and range locking. (Docker/Windows and the CIFS 'nobrl' mount flag; everything that has been written on SQLite and NFS.) This can lead to terrible performance, or even data corruption.

Most network shares are dramatically slower than local storage.
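Those locking guarantees are easy to probe. Here is a hedged sketch (POSIX-only; the mount point in the example is a placeholder) that checks whether a filesystem will even grant the byte-range locks SQLite asks for. Note its limit: a broken share may grant a lock without actually enforcing it against other clients, which no single-machine probe can detect.

```python
import fcntl
import os
import tempfile

def supports_byte_range_locks(directory: str) -> bool:
    """Try to take an exclusive POSIX byte-range lock (the fcntl
    mechanism SQLite relies on) on a scratch file in `directory`.
    Returns False if the filesystem refuses the lock outright."""
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        os.write(fd, b"lock probe")
        # Exclusive, non-blocking lock on the first four bytes.
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB, 4, 0)
        fcntl.lockf(fd, fcntl.LOCK_UN, 4, 0)
        return True
    except OSError:
        return False
    finally:
        os.close(fd)
        os.unlink(path)

# Example against a hypothetical mount point:
# print(supports_byte_range_locks("/mnt/media-share"))
```

A share that fails this probe will definitely break SQLite; a share that passes it still isn't proven safe under concurrent access.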


I typically use something else - like virtualization-owned NFS or iSCSI - to access any shared data, and I present it to guests (or containers!) as a drive.


I could switch from NFS to iSCSI.

Would it be possible to integrate with another SQL database option besides SQLite?

I have a developer background.

For those who find this later: I set up a node with local storage and required that Plex run on that node, and everything worked perfectly and incredibly fast. I no longer trust NFS at all; I also had issues with Nextcloud. Don't believe anyone who tells you NFS will work without issues and that it has file locking. It's only a matter of time until you lose data.

I've since reinstalled my clusters and set up a disk on each node (all of this is in Hyper-V, so the additional disks were virtual disks attached to the nodes), and have gone with Longhorn as a disk provider. Longhorn and similar solutions use the disk on each worker node, along with iSCSI, to create a PVC solution spread across multiple nodes. They also keep multiple copies of the data, so losing a worker node doesn't lose the data, which makes for a much better technique in the distributed world of Kubernetes.

Based on how well this works, I suspect a server hosting an iSCSI solution would work just as well. Good luck out there, and thanks again to Plex for such an incredibly high-quality media server solution.

An additional note if someone reads this and decides to try out Longhorn. I can't speak to other similar solutions, but with Longhorn, if you are running just a couple of worker nodes, you'll need to adjust the number of duplicate copies of data in the settings. The default is three, and if you don't have enough worker nodes sharing a disk, it will be unable to allocate the PVC; the replica count can be lowered. Also be careful not to accidentally expose Longhorn to the wrong people via an ingress.

ā€œLonghornā€ is such a confusing name. Or maybe I’m old.

I still think of that as Server 2008. :slight_smile:
