Best Virtual Server OS

Hi All

I want the best homelab OS for things like Plex, Docker, and some VMs; in the future I might even throw in a virtual game server.

Criteria:

A) Stable, with an okay UI

B) Intel CPU-based server (Hyper-V; Quick Sync)

C) NO NAS, no real storage needed! That's all running on a separate NAS

D) SPEED: the main purpose is the Plex server (just SSD/M.2 for maximum performance for the OS, Docker and VMs)

E) Scalability and a nice usage overview of load: primarily all/most CPUs to Plex, some (if available) for one primary VM as my new desktop (Remote Desktop!), and I would love to have Docker!

So I have read A LOT of posts and have found the following candidates:

  1. unRAID

  2. FreeNAS

  3. Proxmox

  4. VMware vSphere Hypervisor (ESXi)

  5. Xen hypervisor

  6. OpenMediaVault

  7. Ubuntu with KVM

  8. Windows Server 2012/2016

And depending on the OS above, what would be the best choice for Plex to run on/in? Docker, or a VM with CentOS, Windows 10, or Ubuntu?

Found this from Byte My Bits

Most people here have recommended unRAID and it looks promising, but I don't really need a volume with standard drives. Backup of the whole server would be managed by and located on the NAS.

I guess I could just put everything (Plex, Docker and my VMs) on an SSD cache drive?

Looking forward to hearing what others have done when NAS storage is not part of the equation!

Yeah, if you are trying to do it with no real local storage, FreeNAS and unRAID are mostly out, as their main purpose is to replace your NAS and control the storage.

Just do an Ubuntu desktop (server if you are okay with no GUI). It works great with Docker and Plex, and Quick Sync works great.

I am currently hosting a test Plex install in Docker on Ubuntu desktop and it is working quite well.
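For reference, my run command looks roughly like this; treat the paths, timezone and claim token as placeholders for your own values. The official image is plexinc/pms-docker, and the --device line is what exposes Intel Quick Sync to the container:

```
# Plex in Docker on Ubuntu with Intel Quick Sync available for
# hardware transcoding. Paths, TZ and claim token are placeholders.
docker run -d \
  --name plex \
  --restart unless-stopped \
  --network host \
  -e TZ="Etc/UTC" \
  -e PLEX_CLAIM="claim-XXXXXXXX" \
  --device /dev/dri:/dev/dri \
  -v /opt/plex/config:/config \
  -v /mnt/media:/data \
  plexinc/pms-docker
```

(Note that hardware transcoding also requires a Plex Pass subscription.)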

Sorry, my first requirement is that I need some UI; would Proxmox then be what I need?
The Plex Docker image: is that the one from linuxserver/plex? Auto-update should work with that one…

If you need a UI, then go with Ubuntu desktop. I used the official Docker image, not the linuxserver/plex one, but I really don't know if one should be preferred over the other.

Never used Proxmox (mine is built on Hyper-V), but I see no reason that wouldn't work. It looks like you may be able to deploy the container directly to it, and it is Debian-based, so it should work.

Looks like the work has been done for Proxmox… @ChuckPa is the resident Linux expert and he has written a guide on building Plex for this environment… enjoy

Strange, I thought that my new friend @ChuckPa used the Xen hypervisor solution (I was reading this post: Linux VM best practice - #28 by haertig). I would like to know what his experiences are; it seems he has tried a lot of things :slight_smile:

Hi @casperse, I would highly recommend Proxmox.
The reason is that it has a nice and decent web UI (meaning it is usable/controllable from ANY device with a web browser), loaded, but not overloaded, with functionality. The setup is quite simple, and the performance is outstanding if you use Linux, since Proxmox runs KVM on Debian.
The storage management is also really nice, if sometimes a bit confusing. It has ZFS and Ceph support, which is just awesome, especially in scaling environments.
Clustering is also possible.
The community edition has no drawbacks. You just don't get every single update, only the open repo.

I myself have been using Proxmox for three years, after VMware, Hyper-V and Xen. They are all nice, but none of them can beat Proxmox when it comes to flexibility, sustainability and scalability.

tl;dr: I would recommend using Proxmox to host a CentOS VM for Plex.
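As a rough illustration of how little ceremony that takes, creating such a VM from the Proxmox host shell looks something like this; the VM ID, storage name and ISO filename are placeholders, and the web UI wizard does exactly the same thing:

```
# Create and start a CentOS VM for Plex from the Proxmox shell.
# VM ID 100, "local-lvm" and the ISO name are placeholders.
qm create 100 \
  --name plex \
  --memory 4096 \
  --cores 4 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:32 \
  --ide2 local:iso/CentOS-7-x86_64-Minimal.iso,media=cdrom \
  --ostype l26
qm start 100
```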


Myself, I'm using ESXi on the metal and Ubuntu 18.04 as a VM with Docker containers.

I’ll just give you my rationale for each so you can decide on the merits:

  • ESXi allowed me to transition on the same hardware. I was previously using FreeNAS but found the upgrade process cumbersome, and some things (like the UniFi controller) were exceedingly difficult to install/run. Anyway, I currently have my old FreeNAS and my new Ubuntu servers running as VMs and I'm slowly moving items from one to the other. I'm going to keep ESXi on the metal after I'm done, and will likely create a Windows VM and PCI passthrough a graphics card to run Steam Link. ESXi has a nice web interface for its configuration (the console is only used for emergencies).
  • Ubuntu: I've used it for years and it now has built-in support for ZFS (I'm booting off ZFS these days). In fact, watching ZFS development, it seems the main focus is now on Linux, whereas in the past it was FreeBSD that had the best implementation. I personally use Ubuntu Server instead of the desktop because I just use the shell, but the desktop will let you administer most tasks.
  • Docker: The upgrade process is pretty nice when you use docker-compose. I modify a volume mount in the compose file (because I've now moved that data), run docker-compose up -d, and it modifies the containers as necessary and restarts them. I also don't have to worry about updating a package in Ubuntu and it breaking some program that I'm using in a container, because containers have their own user space. (See the sketch after this list.)
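To make that flow concrete, here is a minimal sketch; the image, paths and timezone are placeholders rather than my exact setup:

```
# Write a minimal docker-compose.yml for Plex. The database stays on
# local SSD; the media volume is the NAS mount. Values are placeholders.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  plex:
    image: plexinc/pms-docker
    restart: unless-stopped
    network_mode: host
    environment:
      - TZ=Etc/UTC
    volumes:
      - /opt/plex/config:/config
      - /mnt/media:/data:ro
EOF

# After editing a mount or when a newer image is available, this
# recreates only the containers whose definition changed:
docker-compose pull
docker-compose up -d
```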

Since you mentioned the storage will be elsewhere, you should be aware of a few gotchas:

  • If you store the PMS database on a remote mount and it doesn't properly support file locking, you will end up with a corrupt database. SMB will corrupt your database, and many NFS implementations will as well. I've personally not gotten a remote mount to work, but a few have. (Keeping the database local and mounting only the media remotely avoids this; see the sketch after this list.)
  • Docker on Windows uses SMB for volume mounts. This will give you the aforementioned corrupt database.
  • FreeNAS is based on FreeBSD, and the FreeBSD kernel does not contain a sensible file-change-monitoring API, so there are no automatic library updates in response to filesystem changes. Though you mentioned the media is going to be on a NAS, so you wouldn't have this anyway.
  • FreeBSD installations cannot use premium music libraries. The dependent library that PMS uses for premium music libraries is not built for FreeBSD, so it doesn't get this feature.
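For the local-database/remote-media split, the mount side can be as simple as one line; the NAS hostname, export path and mount point below are placeholders:

```
# /etc/fstab on the Plex host: media comes from the NAS read-only,
# while the PMS database stays on local SSD and never touches NFS.
# "nas.local" and both paths are placeholders for your own values.
nas.local:/volume1/media  /mnt/media  nfs  ro,vers=4.1  0  0
```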

Hope the above helps in your decision.

Thanks, information like this is GOLD!

So, since I don't need media storage, only need storage for the local PMS/Docker/VMs, and can have my NAS run a weekly backup of everything, it seems unRAID and FreeNAS would be too much trouble, with volumes and cache/parity drives, just for this.

@gbooker02, your ESXi install: is it vSphere 6.5, and is that the same as ESXi 6.5? (Sorry, new to this, but I am a fast learner.)
For PMS I was planning to have everything running on this new server, including the DB, and only mount the media storage from my NAS (NFS/SMB). I really want browsing PMS to be fast, and keeping everything on SSD/M.2 will help accomplish this.

I don't know docker-compose; would that work with Portainer? (I would like to have a visual presentation of everything running in a UI.)
Currently I run all my Docker containers on my NAS, which has a nice UI, but I always install and configure through the command prompt; it's just easier that way. I do like the overview of used CPU and RAM in the UI, though.

How do you run Plex today? As a Docker container or in an Ubuntu VM?
Does ESXi provide a nice web UI with all the CPU & RAM usage?
Is the free version enough? I think they have an 8-core limit?

@Mephman I don't really need storage management, since everything is on a separate NAS.
How do you manage Docker on Proxmox?
Are there any special add-ons that you would recommend for Proxmox?
Are you also running PMS on a CentOS VM?

Again thanks for all your valuable input!

BTW: Can both solutions do a VM with PCI passthrough to a graphics card to run Steam Link?
@gbooker02 I highly recommend the UniFi controller; it's cheap and rock solid. I'm done with playing around with UniFi in Docker!

Yes, it is ESXi 6.5 (6.7 is out but I haven't upgraded yet). ESXi is the hypervisor component of vSphere. I just use the ESXi part (and the free version at that).

Your plan to keep the databases local should eliminate the corrupt-DB problems I mentioned. I don't know about Portainer, so I can't really speak to it.

In Docker, in the Ubuntu VM, on ESXi.

Hopefully this answers your question:

[screenshot: ESXi web UI performance graphs showing CPU & RAM usage]

It provides that per VM and for the host as a whole. There are other graphs as well.

There's a 2-physical-CPU limit (sockets, not cores; it looks like 6.7 changed this to a 480-logical-core limit) and an 8-vCPU limit per VM. The RAM limit was removed in version 6.

I've not tried a graphics card yet, but I am currently using PCI passthrough for two HBAs to one VM and another HBA to a second VM. It has been absolutely flawless.

You should note that people sometimes have issues doing PCI passthrough to multiple VMs. PCI slots are assigned to various IOMMU groups, and you cannot pass two devices in the same IOMMU group through to different VMs. I did not have this issue. I've seen most of the reports on i5/i7 and Xeon E3-class processors, whereas mine is a Xeon E5; maybe that's the difference, maybe it's not. (You can inspect the grouping yourself; see the snippet below.)
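On a Linux/KVM host (Proxmox included), something like the following lists the grouping; ESXi shows the equivalent in its passthrough UI, so treat this as a generic sketch rather than an ESXi command:

```
#!/bin/sh
# List every PCI device together with its IOMMU group. Devices in the
# same group cannot be passed through to different VMs.
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group=$(basename "$(dirname "$(dirname "$dev")")")
    printf 'IOMMU group %s: ' "$group"
    lspci -nns "$(basename "$dev")"
done
```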

I presume you mean the "Cloud Key"? I've been running the controller in a Docker container without issue for about a year now; it was the FreeBSD jail that was nearly impossible.
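For reference, the general shape of such a container, using the commonly seen linuxserver/unifi-controller image; the ports are the usual UniFi defaults, and the host path is a placeholder rather than my exact setup:

```
# UniFi controller in Docker. 8443 = web UI, 8080 = device
# communication, 3478/udp = STUN. The /opt path is a placeholder.
docker run -d \
  --name unifi-controller \
  --restart unless-stopped \
  -p 8443:8443 \
  -p 8080:8080 \
  -p 3478:3478/udp \
  -v /opt/unifi/config:/config \
  linuxserver/unifi-controller
```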

Yes, the Cloud Key :slight_smile:

Anyway, I was reading your post and another article from The Geek Pub:

This box is a home network server running VMware ESX, meaning it has multiple virtual servers running on it all at once. This server is actually the second server in a pair forming a redundant VMware ESX cluster. These two servers connect via iSCSI to a Synology RS2416+ to share a VM storage volume that contains the operating systems and applications.

I am trying to understand the benefit of having the Plex server connect through iSCSI to the NAS. An obvious downside would be that it can only run one instance of the VM, with everything saved externally. Or is it still ESXi running on the box, with just the VMs being protected on the NAS? (I wouldn't need to back up the server then?) Or is this a totally different solution based on some other VMware software? Again, sorry if this is a stupid question.

Thanks for the picture!

Hi everyone… I found this thread and wanted to pass along what I'm using for my Plex environment, as it seems to fall in line with your conversation…

Here’s what I do…

I look for older HP storage servers; currently it's the G6 & G7 versions. I look for versions that are ESX-compatible, because I run ESX bare metal and love it, hands down. I've tried installing Plex and others on the box directly, and in the end, I don't want multiple boxes in my house, one for each server. So VMs it is. I run about 10-12 VMs on my machine at any given time, including Plex.

My previous server of choice was an HP DL160. It held 8 3.5" LFF SAS drives and a couple of SATA. This machine had dual quad-core 2.4 GHz CPUs and 48 GB of RAM. I ended up with about 15 or 16 TB of usable storage. I got this one for free since a customer was upgrading, but you can find it on eBay for under $200 for sure. You just might need to add a few things to it. It was a beast. But it was a 2U rack-mount server, and nothing about those things is quiet. So when I knew I was going to be moving it into my office area soon, I started looking for other options. That's when I found my current server, the HP ML350 G6. This machine is very nice, but only holds 6 3.5" LFF SAS drives. Since I wasn't using all of the storage I had in the last one, I decided to step it down a little bit, from 15 TB to 11 TB.

As you can tell, all of my storage is internal to the machine. This machine is about the size of a large desktop tower, plus a few extra inches. It has redundant power supplies, RAID, etc. It is worth getting one, and they too are cheap.

As far as my VMs go, I give every VM a 120 GB OS drive, and I am running 90% of my VMs on Server 2016. I am a Mac guy, but I have to say that Microsoft did it right when they built Server 2016. They have cleaned it up a lot, in my opinion. So much so that I set up 2016 on 8 or 9 VMs, including my Plex VM. I give my Plex machine 4 GB of RAM and only 12 CPU cores. With 12, I can transcode 3 streams at once without any hiccups. But I do not store my media on my Plex VM; I have a file VM that I use for that.

I just acquired another ML350 that I am about to build, and I am going to migrate my Plex VM over to it soon. I'll build a dedicated network between the two hosts so that Plex will reach out to my file server directly through that link to retrieve the media, and then stream it out a second NIC to the users. I figure I'll be doing that in the fall this year when things start to slow down for me.

Hopefully this helps someone!

-Isaac

+1 on ESXi.
It is rock solid and barely consumes any resources at all.
Install it onto a USB stick you have gathering dust, boot your server from it, and away you go. The USB stick doesn't need to be large; actually, it is a waste to install it on anything over 8 GB.

I forgot to mention that I too installed ESXi on a USB stick and boot mine from it. I love doing that. Super easy to test out other versions without breaking the current one.

@casperse

If I may point you to the correct author of the Proxmox tip (credited in the tip itself): @Johnnyh1975

@ChuckPa Duly noted: Installation guide for PMS under Proxmox 5.1 within an LXC Container
Contributed by: @Johnnyh1975

So which virtualization OS are you using today, @ChuckPa?
Proxmox, Xen, or ESXi?
Any recommendation from you?

@earratia & @Isaac_A Why on a USB stick? Wouldn't it be better to install it directly to the SSD, or is it designed to run on USB?
And are you both just running the ESXi part (the free version), or also the other vSphere components?

You install it onto a USB stick, or an SD or CF card (basically any type of cheap removable flash storage), because it does not need anything else. Actually, anything else would be a complete waste.

  1. ESXi 5.x uses 1 GB of disk space; ESXi 6 uses about 4 GB. Any additional space on the disk is wasted because it becomes unavailable for other use.
  2. Once the machine boots, ESXi is loaded into memory and hardly (if ever) accesses the boot disk at all. So it makes no performance difference whether you have it on a USB stick or a fast SSD.
    The memory footprint is about 150 MB according to VMware. That's it; everything else is available for your VMs to use.
  3. Any cache space ESXi needs will be placed on whatever hard drives you have. I have not installed it on a machine with SSDs, but I believe that if you have one in your machine, it will place its cache there.

To address your other question: ESXi IS the hypervisor; vSphere is the client you use to manage ESXi servers remotely.
The suite of both products together is called the vSphere Hypervisor. It is free to download and use.
The paid version introduces additional products for data centers to manage large numbers of ESXi servers at the same time.

https://www.vmware.com/products/vsphere-hypervisor.html

@earratia Thanks! Guess I have to buy a solid 8 GB USB stick!
So now I'm reading up on what limitations there are in the free version.

How do you run backups of your VMs?
If they are accessible on the drive, could I schedule a backup job on my NAS to copy them to the NAS?

Found this: https://www.veeam.com/virtual-machine-backup-solution-free.html

It looks like snapshots are still available in the free version.
Any other limitations?

Reading up on this, I think my options with ESXi are broader than with Proxmox…

Regards
Casperse

The idea of using a USB stick is that it holds just the OS install. In this type of installation, you would put the VMs on another attached disk, an iSCSI device, or similar. Another possible installation is to put both the OS and the VMs on an SSD (or spinning disk), in which case the VMs are stored locally.

So, in your case, putting the VMs on an iSCSI mount from your NAS would protect them against drive failure. Additionally, if you have multiple nodes, you can do failover (I assume; I've not played with this). A sketch of wiring up such an iSCSI datastore follows.
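Roughly, attaching an iSCSI target from a NAS looks like this from the ESXi shell; the adapter name and NAS address are placeholders, and the web UI exposes the same steps under Storage:

```
# Enable the software iSCSI initiator and point it at the NAS.
# "vmhba64" and the IP:port are placeholders for your own values.
esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.1.50:3260
esxcli storage core adapter rescan --adapter=vmhba64
```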

I can confirm that snapshots are available in the free version (I’ve used them myself).
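If you ever want to script them, snapshots can also be taken from the ESXi shell; a quick sketch, where the VM ID comes from the first command and the name/description are placeholders:

```
# Look up the VM's ID, then snapshot it (no memory, no quiesce).
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/snapshot.create 1 "pre-upgrade" "before Plex update" 0 0
```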