So I have been running my Plex server on Windows for years, but the latest round of MS patches caused terrible slowness. Now I have installed Ubuntu and am trying to run Plex in a Docker container. On Windows PMS, under “Edit Library”, I have always pointed directly to my NAS share folders by UNC path (\\ip_address_of_nas\share_with\Movies), but this doesn’t seem to work in Ubuntu/Docker.
I have tried installing cifs-utils inside the container and then using smb://ip_address_of_nas/share_with/Movies/ as the library path, with no joy. Does anyone know of a way to make this work? Otherwise I suppose I will need to install autofs inside the container and mount the share that way, but I would rather not; that is more moving pieces to fail, in my mind.
OK, so maybe add an SMB mount to your main Ubuntu host and put it in fstab, then pass that path to your Docker instance as a volume so it can read and write to the share?
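Roughly something like this; the mount point, uid/gid, credentials file and image are placeholders to adapt, and the other Plex options are omitted:

    sudo apt install cifs-utils
    sudo mkdir -p /mnt/nas/Movies

    # /etc/fstab entry (single line); /root/.nas-credentials holds username=/password=
    # //ip_address_of_nas/share_with/Movies  /mnt/nas/Movies  cifs  credentials=/root/.nas-credentials,uid=1000,gid=1000,vers=3.0  0  0

    sudo mount -a

    # then hand the host path to the container as a normal volume
    docker run -d --name plex \
      -v /mnt/nas/Movies:/data/Movies \
      plexinc/pms-docker

Plex then just sees /data/Movies as a local folder.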
Thank you nokdim, yeah, I could do that, but that is what I am trying to avoid. I guess I am just trying to confirm that UNC paths for folder locations are a Windows-only feature?
If I must do that, I will at least go with autofs inside the container so it’s not dependent on the host system and can be moved. I am thinking I may want to use Docker Swarm in the future to move this container around from one system to another, so putting the mount inside the container is more elegant in my mind, at the cost of a bulkier container.
Well, I ended up installing autofs, cifs-utils, and samba-client in my container, but it wasn’t until I ran the container with --privileged that autofs would mount my CIFS share to /data, in case anyone else has this issue. I really miss the UNC addressing straight from Plex that Windows allowed, but I guess that was a feature of the OS.
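Rough outline of what I did, in case it saves someone time (the credentials file and share path are placeholders; smbclient is the Debian/Ubuntu name for the samba client package):

    # inside the container
    apt-get update && apt-get install -y autofs cifs-utils smbclient

    # /etc/auto.master:
    # /data  /etc/auto.nas  --timeout=60 --ghost

    # /etc/auto.nas:
    # Movies  -fstype=cifs,rw,credentials=/etc/nas.cred  ://ip_address_of_nas/share_with/Movies

    service autofs start

    # and the container itself has to be started with extra privileges,
    # otherwise autofs cannot perform the mount:
    docker run -d --name plex --privileged plexinc/pms-docker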
Nokdim already described what the current best practice looks like: mount the share on the host and map it into the container.
Mounting network shares directly inside the container is actually an anti-pattern.
I guess I have never heard a compelling reason: why should I run the mount on the host instead of inside the container? In my mind, inside the container is better, since everything is self-encapsulated with the application. If I want to move this container to a different Docker Swarm node, I won’t have to worry about making sure the new host has the same mounts the original host did.
It’s all a matter of taste. I, for instance, would never grant an internet-facing container additional privileges.
Basically, you are willing to loosen the container isolation (= security) and break the single-responsibility principle (= the network mount is not part of the contained service, is it?) just to have a more comfortable solution.
Neither Docker Swarm nor Kubernetes provides an out-of-the-box solution for making mapped volumes available on all cluster nodes. They leave you to use network mounts on the hosts or a clustered filesystem like GlusterFS to get a distributed FS across all nodes. Why is that? Because they don’t know better?! I doubt it…
Basically, your idea is to run the container like a VM… no one can stop you from doing so.
Just saying that you can have the same result with less risk on your side.
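For completeness: if the goal is really to let the share follow the service around a Swarm without --privileged, one middle ground is a CIFS-backed named volume via Docker’s local driver; the daemon performs the mount itself, so the container keeps its normal isolation. The IP, credentials and names below are placeholders:

    docker volume create \
      --driver local \
      --opt type=cifs \
      --opt device=//ip_address_of_nas/share_with/Movies \
      --opt o=addr=ip_address_of_nas,username=plexuser,password=secret,vers=3.0 \
      nas_movies

    # mounted by the Docker daemon when the container starts
    docker run -d --name plex -v nas_movies:/data/Movies plexinc/pms-docker

In a stack file you would declare the same thing under volumes: with driver_opts, so each node creates it on demand.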