Increase use of available resources

Sorry for the mis-post. I’m at 54,000 and some change currently. I’ve been adding to my collection a bit at a time, trying to keep it usable during imports.

It does seem considerably faster after the database repair and clean process; I’ll continue to test. Thanks!

Yup, she’s SCREAMING fast compared to before, thank you for your work on this optimizer!

The fix for it not seeing the environment as valid was to copy the database inside the container from my media/file storage NFS share to the machine’s local disk, and off she went.

Is your DB normally on NFS? If so, try to keep it off NFS. NFS will help mess the database up, as well as limit read/write speed no matter how fast the underlying ZFS is.

No, only media.

Is the PMS metadata (and obviously DB) stored on a ZFS SSD filesystem?

Yes, it’s block storage on Proxmox: a ZFS array with enterprise NVMe SSDs. 3,000 MB/s minimum and over 300,000 IOPS for small-block transfers.

Excellent. Did you set the Plex databases’ record_size using the DBRepair option? (65536 is the maximum record_size; it must be a power of two and should, ideally, match the dataset record size or be a perfect subdivision of it.)
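The constraint above (a power of two, at most 65536, and either matching the dataset record size or a perfect subdivision of it) can be sanity-checked in a few lines. This is only an illustrative sketch; the function names and the 131072-byte (128K) example recordsize are my own assumptions, not part of DBRepair:

```python
def is_power_of_two(n: int) -> bool:
    """True if n is a positive power of two."""
    return n > 0 and (n & (n - 1)) == 0

def valid_record_size(record_size: int, dataset_recordsize: int) -> bool:
    """Check the rule described above: record_size must be a power of two,
    no larger than 65536, and either equal to the dataset record size or a
    perfect subdivision of it (i.e. it divides it exactly)."""
    return (
        is_power_of_two(record_size)
        and record_size <= 65536
        and is_power_of_two(dataset_recordsize)
        and dataset_recordsize % record_size == 0
    )

# 64K records on a 128K-recordsize dataset: a perfect subdivision.
print(valid_record_size(65536, 131072))  # True
# 64K records on a 32K-recordsize dataset: larger than the recordsize.
print(valid_record_size(65536, 32768))   # False
```

The bitwise trick `n & (n - 1) == 0` is the standard constant-time power-of-two test, and the modulo check covers the "perfect subdivision" case.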

@sirebral

So that I may stay up to date on tech, may I ask which SSDs (mfg & pn) you’re using?

I have it currently on a pool of 10x 1.6TB Micron 1700 Maxx enterprise NVMe SSDs. I also cache a lot in my ARC: I allocated 512GB (half of the RAM on one host) to ZFS ARC. ARC can step down to accommodate other processes, yet I’m not even close to that limit on the 3-node cluster.

Wow, and I thought I was willing to purchase quality gear … VERY nice!

So what does your overall performance look like now, @sirebral? Does it seem like you still have way more assigned to that VM than is really needed, or is it using more of the resources? It seems like a performance bottleneck was fixed, but I’m not sure you will ever use all of the resources you have assigned to it unless you are slinging subscriptions out to the world lol. I’m curious just how loaded a large instance like this can get.

It’s running quite well. You are correct: I have more assigned than I need, yet it’s in LXC, so it’ll only use the resources that it needs, leaving the remainder to other processes. I’m about to redo the cluster, adding a new node and upgrading my storage and migration networks to 100 gig, so I’ll leave it as-is for now. At the same time, I’m going to just rebuild it, saving my data, so there’s not as much new configuration to do. I’m looking at Rancher with Longhorn (or perhaps Ceph; I’ll demo both), and Harvester or KubeVirt. There is lots of research to do coming up very soon, yet it’s my favorite kind of work :wink:

I was hoping you would come back and say that thing is humming. I would love to see what it would take for a Plex server like yours to get maxed out.

Yeah, it was a really old DB, so it probably had lots of issues, happy to say it’s quite happy now. This cluster does lots of other things, including running my business, yet it’s still pretty quiet. I built it for redundancy more than anything else, as it’ll be hosting client data soon, hence the upcoming rebuild.

Hmm, do you allow any outside access to your Plex server? You are brave sharing customer information on the same iron your Plex server is on. Best of luck.

Customer data will be living on the newer systems, part of my rebuild. Not going to mix the environments. I’ve got 3 older servers that are off and will become my personal lab. I even have separate cabinets for the 2 clusters in the Colo, not worth the risk of mixing the two. Only access is to my household, and it’s all locked down via a point-to-point from my home router. I used to run the gear at home, yet I moved and didn’t have the space that I did previously, so I worked it into the deal while I was setting up my Colo agreement for the biz.

Sounds great man!!! Like I said, best of luck with it.

