Plex unstable a few times daily, requiring a restart

I’m not exactly sure where to begin…but I’ll make a go of it.

I began setting up a homelab recently and decided I wanted to set up Docker for as many apps as possible: Plex, Sonarr, Radarr, UniFi, PlexPy, blah blah blah.

Anyhow…back to Plex: I spent a fair amount of time toying with docker create and getting the settings to a point where Plex was usable, working with the NFS shares, and up for the most part.
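Roughly, the invocation looked something like this (image name, ports, and host paths are approximated from memory, so treat them as illustrative rather than my exact command):

```
docker create \
  --name plex \
  --network bridge \
  -p 32400:32400/tcp \
  -e TZ="America/New_York" \
  -v /mnt/ssd/plex/config:/config \
  -v /mnt/hdd/media:/data \
  plexinc/pms-docker
```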

However, it hasn’t lasted long; it seems like I need to restart Plex at least once per day, and on some days playback will freeze and I need to restart to get it operational again. This has been happening for almost two months, and I’ve done numerous searches trying to fix it…and I’m getting impatient, so I’m posting here. I’m still new to Docker and have a basic understanding of Linux/POSIX, but, as I said, I’m not sure where to start.

Some setup details:
homelab - ESXi > FreeNAS > Ubuntu w/ Docker > containers
FreeNAS - I have two pools: SSD & HDD. All drives in an Always On state
docker containers/plex - I use the SSD for all my config/db files and the HDD for storage/media.
devices accessing plex - NVIDIA Shield (local) & 2 cell phones, 1 iPad (mix of local and remote)

Let me know if you need more info.

Thank you

Respectfully, just because you can containerize everything doesn’t mean you should.

On Linux, a simple tar archive (tarball as we call it) is more portable than a docker export.
You have a VM. Put Ubuntu as the Guest OS in that VM and load PMS there. Now you have a proper virtual host with proper network peer status. Done.
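For what it’s worth, the native route is only a couple of commands; roughly (the .deb name/version is whatever plex.tv currently offers, so treat this as a sketch):

```
# Download the current Ubuntu .deb from plex.tv, then:
sudo dpkg -i plexmediaserver_<version>_amd64.deb
sudo systemctl enable --now plexmediaserver
```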

Docker is a technology which, for some, is their only way to use Plex. You are not bound by this limitation because you have Linux-native Plex available… I strongly recommend you avail yourself of that.

Here’s a more recent log. I see some dropped connections for two IPs: one I recognize, and one that probably shouldn’t exist considering I’m using Docker in bridged mode. I’ll have to dig more.

Respectfully, just because you can containerize everything doesn’t mean you should.

BLASPHEMY! Okay, okay, you’re not wrong. My reason for dockerizing was to learn more about the technology and tinker, etc. etc. I’m not necessarily against throwing it on Ubuntu. If I’m using bridged mode as above…would it matter? I guess I’ll give it a try anyhow to see if there’s a difference in reliability.

linux hairball is more portable than a docker export.

Should this matter? I really don’t know. Could you elaborate a bit more?

put on ubuntu…proper network peer status

See my point above about bridged mode. I guess it still adds another layer, and that could be a factor in these issues.

I would be happy to explain, but first:

If you’re going to quote me, then please quote me literally. Taking literary license with what I write isn’t appreciated. I am quite capable of saying it wrong all by myself, TYVM :wink:

  1. Learning to use the tech? Great. Master it then move on. It’s intended as a client side solution, not a server side solution.
  2. Standing up a proper Ubuntu guest, in the ESXI host, means it’s already bridged compliments of the ESXi host.
  3. In Plex’s Docker implementation, you are presented with a Linux view. This is because it’s based on Ubuntu. Docker, for all its claims of portability, isn’t. I have a Synology and a QNAP; moving containers from one to the other is 50/50 (maybe/maybe not). A tar file, because it contains only your Library data, is all you’re worried about (see the sketch after this list). The binaries are elsewhere on Linux (/usr/lib/plexmediaserver); your metadata is under /var/lib/plexmediaserver/Library.
  4. Your point about layers and bridged mode…you forget what the ESXi host is providing you. It’s transparent.
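To make point 3 concrete, here’s a rough sketch of the tar approach (standard Linux package paths; stop the server first so the database is quiescent):

```
# Archive only the Library data; the binaries under /usr/lib/plexmediaserver
# are reinstalled from the package, so they don't need backing up.
sudo systemctl stop plexmediaserver
sudo tar -czf plex-library.tar.gz -C /var/lib/plexmediaserver Library
sudo systemctl start plexmediaserver
```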

I have used Plex in Docker for quite some time and had no issues until I recently moved to the latest Plex version (1.12.2.4929-29f6d1796) and started using NFS (previously I had Plex running with local storage). I’ve also noticed instability, with hangs that seem to require a restart of the container to recover. Since one of the recent changes was the move to NFS, that’s my suspect at the moment: what are your NFS mount settings (on the NFS server and client side)?
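If it’s easier, the effective client-side options can be read back directly; something like this (nfsstat is in Ubuntu’s nfs-common package; the grep is just a fallback):

```
nfsstat -m             # effective options for each NFS mount
grep nfs /proc/mounts  # raw mount table, including vers= and lock options
```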

Respectfully, just because you can containerize everything doesn't mean you should.

Just because you shouldn’t containerize everything doesn’t mean you shouldn’t containerize this. Care to share your reasons why you think Plex shouldn’t be containerized? Sure, a tar file is very portable, but are you providing every dependency required for every system in that tar file? Containers provide, amongst other things, a guarantee that every dependency is installed and functioning properly. Your comments about portability are almost certainly related to kernel-based issues. If you have two servers running the same underlying kernel, then a container can be moved from one to the other 100% of the time. Different kernels? You can build a container for each with two Dockerfiles that are almost exactly the same.

Learning to use the tech? Great. Master it then move on. It's intended as a client side solution, not a server side solution.

Maybe I’m misunderstanding you here, but where do you get the notion that containers are a client side solution? Applications of all sorts are running in containers as servers.

Plex provides an official container; as a customer/user, I should generally be able to expect that it functions and that, when it doesn’t, I can get support for it.

Are you using NFS to mount the config directories into the ubuntu VM? If so, this is likely your root cause. I tried this for a bit with a myriad of options both with NFS and SMB and nothing worked in a stable manner. I settled on putting the config directories directly in the VM itself (actually created a second disk for the VM).

@gbooker02 said:
Are you using NFS to mount the config directories into the ubuntu VM? If so, this is likely your root cause. I tried this for a bit with a myriad of options both with NFS and SMB and nothing worked in a stable manner. I settled on putting the config directories directly in the VM itself (actually created a second disk for the VM).

Yeah, I am doing this currently.

Maybe my piss-poor setup, with the config folder(s) on the NFS shares, is contributing to these instability issues.

My ESXi host has 3 datastores.

  1. nvme (local) - for FreeNAS and maybe eventually a cache partition
  2. ssd (NFS) - runs my VMs and holds all configs for docker containers. Currently have 2 VMs: Ubuntu and Windows Server 2016 (to play with).
  3. hdd (NFS) - this is my data pool for media.

Plex/Ubuntu setup:
In Ubuntu I mounted NFS shares pointing to both pools. As I said above, SSD for configs and HDD for media.
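The fstab entries look roughly like this (hostname and paths approximated from memory, so treat them as illustrative):

```
# /etc/fstab
freenas.local:/mnt/ssd/configs  /mnt/configs  nfs  rw,hard  0  0
freenas.local:/mnt/hdd/media    /mnt/media    nfs  rw,hard  0  0
```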

I’m a bit braindead from work, so if you need me to clarify more I can.

Following up - I also had the config directory mounted via NFS, and that seems to have been the cause of the instability for me as well. Plex has been up and stable for almost 2 days now (versus 2-3 restarts/day before) once I set the config directory to a local drive.

Back when I had my config dir mounted across NFS, the server would lock up about once a week and often corrupted the database. I had to recover the db from a ZFS snapshot 4 times before I concluded that I couldn’t make it work. I changed to a local FS within the VM and it’s been stable since (that was over a year ago).

I’m not sure I understand your exact setup, but it sort of sounds like your ESXi host has access to the SSD pool. If it can, it may be worth looking into creating a virtual disk on that SSD pool and exposing it to the Ubuntu VM. Then format that disk within Ubuntu and use it for your config directories. This is essentially what I did.
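Roughly, the steps inside Ubuntu after adding the virtual disk in ESXi look like this (the device name /dev/sdb is an assumption; verify with lsblk first):

```
lsblk                                  # identify the newly added disk
sudo mkfs.ext4 /dev/sdb                # format it
sudo mkdir -p /srv/plex-config
sudo mount /dev/sdb /srv/plex-config   # local FS for the config dirs
# add an /etc/fstab entry so it persists, then point the container's
# /config bind mount at /srv/plex-config
```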

As a minor point of info.

NFSv2 and NFSv3 do not provide the innate file locking required to run PMS over the network. At best, and still unsupported, is the addition of local_lock=posix to the mount options. NFSv4, while supporting locking natively, still has some issues on the server side. Locks are not always respected because they are, by the nature of NFS, advisory locks, not mandatory locks.
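For illustration, that workaround as a mount invocation (server name and paths are placeholders):

```
sudo mount -t nfs -o vers=3,local_lock=posix \
    server:/mnt/tank/plex-config /mnt/plex-config
```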

Unless otherwise called out, mounts are typically NFSv2 or NFSv3. Both QNAP and Synology NAS default to NFSv4 only if v4 is explicitly enabled.