Plex server in a VM/container

It’s time to upgrade the server. It’ll be a Dell R710 running Proxmox. My old Plex server was on CentOS 6 with no VMs or containers, and it ran beautifully for over a decade.

So, since I can run any host OS I want in a VM, are there benefits or disadvantages to any particular host software over the others? I like the thought of putting it into an LXC container for lower overhead. But should I spin up a Windows XP VM for it, because that era was the most stable and feature-complete in Plex history? Or maybe macOS? Without any knowledge on the subject, I would be tempted to install a CentOS 7 VM and put Plex in there. I do seem to remember Plex on Linux was a bit of a bastard child for a while, with all of the effort to keep packages up to date and forum support coming from @ChuckPa. Maybe there are a few more mods on the Windows side?

Someone suggested running an Ubuntu VM with Docker running inside it. I got a little cross-eyed and almost passed out. Are people really doing that?

A VM adds overhead. Docker is, IMHO, just a royal PITA. I have three Docker implementations and it’s not portable across any of them.

PMS itself, on a native Linux desktop/workstation configuration, is as portable as a native Linux tar file (tarball).
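
As a rough sketch of what that portability looks like in practice (assuming the stock .deb/.rpm install, which keeps its data under /var/lib/plexmediaserver, and a systemd host; adjust paths to taste):

sudo systemctl stop plexmediaserver
sudo tar -czf pms-backup.tar.gz -C /var/lib/plexmediaserver \
  "Library/Application Support/Plex Media Server"
# copy pms-backup.tar.gz to the new machine, unpack it into the same
# location there, fix ownership (plex:plex), then:
sudo systemctl start plexmediaserver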

In a VM or a container, you deal with networking. It’s not difficult in a VM, but you need to remember to make a bridge, not NAT (otherwise you end up with double NAT). Docker is NATed automatically in every case I’ve seen, and I never found a way to make it “bridge” without standing on my head (hence PITA).
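
On Proxmox, for example, the bridge side is just a stanza in /etc/network/interfaces (the installer usually creates vmbr0 for you; this is only a sketch in the Proxmox 6 / ifupdown2 syntax, with eno1 and the 192.168.0.x addresses as placeholders):

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.10/24
    gateway 192.168.0.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

Attach the VM’s virtual NIC to vmbr0 instead of any NAT setup and it appears on the LAN as its own host, getting its address from your normal DHCP server.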

CentOS 7 or Debian 8+ and you’re fine.

If one of the reasons you’re leaving CentOS 6 is the lack of SysV-init support, I have something which might interest you. I have universal packaging in forum preview testing. Currently we are previewing the Debian packaging, which is about to go into production. RPM packaging follows quickly after that.

Let me know if you want more info.

OK, you have confirmed my suspicions then. Thank you for the response. I may lurk around for a while longer to see if any Mac or Windows fans show up to make a case.

As for the reason for leaving CentOS 6, we touched on it in this thread: it looks like my limitation is the glibc version. Either way, it is way past time to upgrade my software. Send me the link to your information, though; I’d love to read up on it. Is there anything I can do to contribute to the effort?

If the limitation is glibc, then the work I’m doing now will open a number of options to you.
I had to think about how best to make PMS portable and customizable in any use case.

If you’re interested, this is where I’m headed. I always welcome feedback.

If you dig a little deeper into Docker, the possibilities are not limited to NATed bridge networks. Out of the box you can use the host’s network interface or create/use a macvlan network. If you unlock experimental features in Docker, you can even create/use an ipvlan network at layer 2/layer 3. If a container uses a macvlan network, its behavior matches that of a bridged VM network interface.
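
For instance, the simplest escape from the NATed default bridge is host networking; a one-line sketch, with the official plexinc/pms-docker image used purely as an assumed example:

docker run -d --name pms --network host plexinc/pms-docker

With --network host the container shares the host’s IP and ports, so PMS is reachable on the LAN exactly like a native install; a macvlan network gives each container its own IP instead.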

The “experimental” nature of that is why I can’t use it. I unfortunately need production-stable.
I know just about anything can happen with VLANing. My point with “standing on my head” is that I shouldn’t have to. One of Docker’s claims is how easy it is to use, whereas I can stand up a full VM, with a bridged adapter, OS updates, and DNS/DHCP in the gateway, in less time than I spend fussing with Docker. Maybe it’s purely my amount of experience, maybe it’s something else, and maybe I am just too thick-headed to learn a different way.

I do know I can’t possibly test PMS on different distros with Docker, so perhaps that use case alone is why I am so unfamiliar with it.

This is not my thread and I won’t discuss my issues with Docker further. It exists and many like it. I, as one person, don’t.

Oh, please do. The purpose of the thread was to gather info on the “best” platform to run PMS virtualized. I’m with you on Docker, though. I haven’t been able to do much good with it. The attraction is the ease of updating some apps (destroy the container, rebuild it automatically, and access the mounted data for config, all automatically). In the case of PMS, the update process is pretty simple (even for Linux, with your new universal installer), so I don’t see the benefit.

I’m still teetering between containers (LXC) and a full-blown VM. I’ll be interested to see how your installer works in an LXC container. It looks like the complicated part is pass-through of the transcoding hardware, if you want to do that. That is all host-based, so once it’s done it should be transparent to the universal installer (see the sketch at the end of this post).
If the wind blows toward a full-blown VM, Linux seems to have the least overhead, but maybe a light version of Windows 10 would work just as well? I dunno. That’s the purpose of the post.
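
For reference, the pass-through part on Proxmox usually comes down to a couple of lines in the container’s config under /etc/pve/lxc/ (a sketch only, assuming an Intel/AMD GPU exposed as /dev/dri; 226 is the DRM device major, and on older cgroup-v1 hosts the first key is lxc.cgroup.devices.allow instead):

lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir

The container then sees /dev/dri, and PMS inside it can use hardware transcoding without knowing it isn’t on bare metal.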

I have all the supported Linux distributions, and those I’m evaluating, in VMs.
Each VM is a bona fide host on my LAN (bridge mode).
At any point in time, I might have 8 different VMs running, testing PMS or my own work.
All this runs on my NAS (QNAP TVS-1282-i7). I’ll (hopefully) be building an AMD Ryzen 9 3700X, if everything continues as it appears, to be my new workstation.

Perhaps this is arrogant, for which I apologize, but my time is too valuable to fuss with overloading things and complicating them by adding VLANs/containers. The VM is a “host in a can” and free-standing. It doesn’t rely on the host OS because it has its own OS. I have always seen containers as a way to run an application, written for another OS/environment, in a foreign environment until the native application can be developed. That’s where this abstraction-layer stuff ALL began.

I have no VLAN in my network; it is not mandatory.

In Docker macvlan networks, every container has its own MAC and IP on the network; with the experimental ipvlan networks, every container uses the host’s MAC and has its own IP. The parent network interface may or may not be connected to a VLAN. That’s up to your network requirements.

My macvlan uses a range within my local LAN’s subnet, and every container interacts directly with every other device, just like a VM (except that the kernel prevents direct communication between the host and the macvlan client IPs). The macvlan range is outside the range my DHCP server assigns from, because Docker brings its own address assignment (IPAM) to the table, which will only hand out IPs from the macvlan range. In fact, using Docker containers with macvlan reduces the overhead of running a containerized process a lot; it has close to no overhead compared to running the process directly on the host.

I have a hard time accepting that each application should get a standalone VM, each one depending on a layer that has to synchronize/orchestrate every hardware access between the host kernel and all the kernels of the VMs, each one wasting CPU time and RAM on OS processes that are not really required by the application I want to run, and of course occupying space on the storage. The only advantage of VMs is that you can apply all your knowledge about an OS without having to learn something new.

I am not saying that using VMs isn’t legit. There are plenty of cases where VMs are unavoidable.

I stumbled across Docker 5 years ago. I love its efficiency, the cleanness of how my persistent data is stored, and the fact that my setup is well documented in Dockerfiles and docker-compose.yml files that are versioned on a local Git server. The trouble starts with multi-node, multi-cluster operation.

I have a challenge for your Docker configuration:

Run 4 different PMS instances, operating concurrently.

  1. Each on a different OS (Fedora, CentOS, Debian, and Ubuntu).
  2. No VLAN (other than the default wired VLAN in the switch) required. All are on 192.168.0.x.
  3. Each has its own IP.
  4. All 4 must exist on the same physical host.

If Docker can do this, then I submit that VMware is doomed to go out of business in favor of freeware.

To play devil’s advocate, I can see running 4 instances of PMS in 4 Docker containers. The OS wouldn’t be important unless you were working on them for development; for that use case, VMs are superior. If, for instance, you wanted 4 different PMS instances because this was pre-user-era Plex and that was your solution for permissions, I think it would be lower overhead and more easily maintained than 4 VMs. In other words, to each his own. For your uses, it certainly looks like the VMs win.

I can’t even speak to avoiding VLANs with Docker.

As welbo already stated: it is possible.

There is no need to introduce any VLANs into your LAN(!). No one stops you from using different VLANs, though no one forces you to use one. I use a consumer-grade DSL router which is not capable of managing VLANs, except for the WAN port configuration.

When it comes to Docker macvlans, the only complicated part is fitting your macvlan IP range into your network’s subnet in such a way that there is no overlap with the IPs your network’s DHCP server assigns or with any manually assigned fixed IPs.

Take the CIDR calculator of your choice and find yourself a free slice of your network.

For example:
My network is 192.168.200.0/24
My gateway is 192.168.200.1 (this is my dsl router)
My network’s DHCP server assigns IPs in the range .100 to .200.
My fixed IPs are assigned from .1 to .63.

The macvlan range can then be 192.168.200.64/27, which covers 192.168.200.64 - 192.168.200.95 and gives 32 IPs that can be assigned to containers. You are free to pick whatever CIDR range you want, as long as it does not include a single occupied IP.
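
If you want to double-check a candidate slice from the CLI, the common ipcalc utility (an assumed extra tool here, installable via apt/yum) does the arithmetic for you:

ipcalc 192.168.200.64/27
# prints the first/last address and size of the /27 slice
# (output layout differs between the Debian and Red Hat ipcalc implementations)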

A macvlan network can be created externally from the CLI. This allows plain Docker containers (docker run, docker-compose) to attach to the network and either be assigned a fixed IP of your choice or let the macvlan network’s address assignment (Docker’s IPAM) handle it.

docker network create \
  --driver macvlan \
  -o parent=eth0 \
  --subnet=192.168.200.0/24 \
  --gateway=192.168.200.1 \
  --ip-range=192.168.200.64/27 \
  macvlan0

A container would use it like this: docker run --network="macvlan0" --ip="192.168.200.64" ...
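
As a quick sanity check after creating the network (the official plexinc/pms-docker image is used here purely as an assumed example), inspect it, start a container on it, and ping that container from another device on the LAN; remember that the Docker host itself cannot reach macvlan clients directly:

docker network inspect macvlan0
docker run -d --name pms-test --network=macvlan0 --ip=192.168.200.65 plexinc/pms-docker
# then, from another machine on the LAN (not from the Docker host):
ping 192.168.200.65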

Or you can leave the lifecycle to docker-compose and declare it in a docker-compose.yml:

networks:
  macvlan0:
    driver: macvlan
    driver_opts:
      parent: eth0
    ipam:
      config:
      - subnet: 192.168.200.0/24
        gateway: 192.168.200.1
        ip_range: 192.168.200.64/27

To use it with a container you would declare it like this:

services:
  myservice:
    ...
    networks:
      macvlan0:
        ipv4_address: 192.168.200.64

Though, there can be only one macvlan network using the same gateway!

If the Docker host itself is running in a VM, promiscuous mode of course needs to be enabled for the VM’s network interface. If you use ESXi, the vSwitch needs to allow promiscuous mode.

I found two nice descriptions of macvlan and ipvlan: https://sreeninet.wordpress.com/2016/05/29/macvlan-and-ipvlan/ and http://hicu.be/macvlan-vs-ipvlan
