Distributed Transcoding/Scaling

@max0909 said:
+1
For now I’ve got two servers running that use the same media files, but each system has its own media database. I like the idea that the second server would only support server one for transcoding.

So do you currently have some people (friends/family) using 1 server and other people using the other server?
Or do you do some sort of automatic load balancing?

I am definitely interested in this feature. If I’m going on vacation, then suddenly there are 2-3 people in my house all wanting several movies/shows synced to their iPads, so a distributed transcoding process (similar to a SETI farm) would definitely help in that situation.

Heck, Plex is missing out on an easy opportunity to sell small Raspberry Pis or custom-built servers that work with your system just to add transcoding capacity.

A Pi can’t transcode anything

You could just re-encode your media (Plex Optimize or Handbrake) for free so that they don’t need to transcode in the first place. DirectPlay is a beautiful thing. So is Plex Sync.
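As a rough sketch of that one-time re-encode approach: the snippet below builds an ffmpeg command line for converting a title into a DirectPlay-friendly MP4 (H.264 video, AAC audio). The file names and bitrate targets are illustrative assumptions, not anything Plex Optimize itself does.

```python
# Sketch: build an ffmpeg command for a one-time re-encode into a
# DirectPlay-friendly container (H.264 video, AAC audio, MP4).
# Paths and bitrate targets here are illustrative assumptions.

def directplay_encode_cmd(src, dst, video_kbps=5000, audio_kbps=160):
    """Return an ffmpeg argument list for a DirectPlay-friendly MP4."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264", "-b:v", f"{video_kbps}k",
        "-c:a", "aac", "-b:a", f"{audio_kbps}k",
        "-movflags", "+faststart",  # lets playback start before the full file arrives
        dst,
    ]

cmd = directplay_encode_cmd("movie.mkv", "movie.mp4")
print(" ".join(cmd))
```

Run the resulting list through `subprocess.run` once per title and the server never has to transcode it again.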

+1 I would like the ability to run a background service on my desktop that can help take some load off of my media server when its up and running.

@dduke2104 said:
A Pi can’t transcode anything

You could just re-encode your media (Plex Optimize or Handbrake) for free so that they don’t need to transcode in the first place. DirectPlay is a beautiful thing. So is Plex Sync.

A Raspberry Pi 3 has a PassMark score of about 1500-2000, which is enough to do a single 1080p transcode. With actual distributed processing you very much could scale as much as you want. That’s the reason the military did the same thing with tons of PS3s a while back.

I am not doing 5 encodes per piece of media with over 30 TB of current media; that’s just ridiculous and does not scale at all.

So much still needs to be learned by others when it comes to transcoding.

If you think scaling involves more transcoding rather than less then you don’t get what the goal is.

Also, a world of difference between what a Pi can potentially do and what a PS3 can do when it comes to transcoding.

@dduke2104 said:
If you think scaling involves more transcoding rather than less then you don’t understand what the goal is.

How is storing multiple copies of all of one’s content a better solution?
When some of us have 10-30 TB of content, creating additional copies at different quality levels is just insane.

Allowing for horizontal scaling of transcoding is an ideal solution for many.

As it is right now, we either have to:
1. do what you suggest, which would require us to acquire a lot more storage capacity.
2. build a massively powerful single server to host our PMS, but this has a limit: you can only have so many physical processor cores with so many threads in a single machine.

I agree that people like yourself should have the ability to perform one-time conversions of all of your media, but those of us who don’t want to have to double/triple our storage capacity should have the option to horizontally scale our transcoding.

I wouldn’t necessarily have a bunch of Pi’s doing my transcoding, but people should have the ability to if they want to.

In my ideal setup, my main PMS server would just store the content and handle user requests. All transcoding requests would be offloaded to other cheap (~$400) Linux boxes on the same network as my PMS.
In today’s ecosystem, I would look at using a Ryzen 5 CPU with 6 cores and 12 threads for each of my transcoder boxes.
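A minimal sketch of the dispatch logic a setup like that implies: the PMS box keeps a roster of transcoder nodes and hands each new session to whichever has the most free slots. The node names and capacities below are made up for illustration.

```python
# Hypothetical least-loaded dispatcher for a pool of transcoder boxes.
# Node names and per-node session limits are illustrative assumptions.

class TranscodeDispatcher:
    def __init__(self, nodes):
        # nodes: {name: max concurrent transcode sessions}
        self.capacity = dict(nodes)
        self.active = {name: 0 for name in nodes}

    def assign(self):
        """Pick the node with the most free slots, or None if all are full."""
        free = {n: self.capacity[n] - self.active[n] for n in self.capacity}
        name = max(free, key=free.get)
        if free[name] <= 0:
            return None
        self.active[name] += 1
        return name

    def release(self, name):
        """Call when a transcode session on `name` finishes."""
        self.active[name] -= 1

d = TranscodeDispatcher({"ryzen-box-1": 4, "ryzen-box-2": 4})
```

Each new playback request calls `assign()`; when all boxes are saturated it returns `None` and the server could fall back to transcoding locally.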


Who has to store multiple copies of anything?

The same 1080p MP4/x264/AAC usually DirectPlays with my multiple SmartTVs, media streamers, PCs, tablets, phones or game consoles - both local and remote. My quite modest five year old Pentium PC will DirectPlay 5-6 concurrent streams and barely break 15% CPU utilization.

What were we talking about again? Scaling?

I don’t have any commercial aspirations for my Plex setup.

@dduke2104 said:
Who has to store multiple copies of anything?

The same 1080p MP4/x264/AAC usually DirectPlays with my multiple SmartTVs, media streamers, PCs, tablets, phones or game consoles - both local and remote. My quite modest five year old Pentium PC will DirectPlay 5-6 concurrent streams and barely break 15% CPU utilization.

What were we talking about again? Scaling?

I don’t have any commercial aspirations for my Plex setup.

If it works for you, then that’s good.

Sorry, I misunderstood you, I thought you were advocating having multiple copies… not just direct playing.

But myself, like many of others, want to share our Plex server with friends and family (for free).
My current PMS can handle 3 concurrent transcoding tasks before there is any playback lag.
For this reason, I only share my server with about 3 family members (outside of our house).

My internet connection is 250Mbps down and 20Mbps up. So that 20Mbps can get saturated very quickly if 3-4 people are direct playing 1080p videos encoded at 5Mbps (since the 20Mbps can fluctuate).
All of my Blu-ray rips are at 1080p at 5Mbps.
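The arithmetic behind that squeeze is easy to check: a 20 Mbps uplink versus remote viewers each pulling a fixed per-stream bitrate, with some headroom since the uplink fluctuates. The 80% headroom factor is my own assumption.

```python
# How many remote streams fit on the uplink described above?
# headroom=0.8 (keep 20% spare) is an illustrative assumption.

def max_remote_streams(uplink_mbps, per_stream_mbps, headroom=0.8):
    """Whole streams that fit within the usable share of the uplink."""
    return int(uplink_mbps * headroom // per_stream_mbps)

# Direct-playing the full-quality 5 Mbps rips: only 3 remote viewers fit.
print(max_remote_streams(20, 5))   # 3
# Forcing remote sessions down to 2 Mbps: 8 remote viewers fit.
print(max_remote_streams(20, 2))   # 8
```

Which is exactly why forcing remote transcodes down to 2 Mbps buys so much more sharing headroom than direct play does here.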

For now, I have my PMS set up to force transcoding down to 2Mbps for remote playback, while allowing direct playback when local.
So when I’m watching from my HTPC, on the same network as my PMS, I use direct play to get the full quality.

@boboki said:

A Raspberry Pi 3 has a PassMark score of about 1500-2000

Where did you find that? How would one run a PassMark benchmark on an ARM processor that can be meaningfully compared to x86?

Excellent point, @drinehart

Plus it is generally accepted that a Pi can’t transcode its way out of a 1080p paper bag anyway.

I did a marathon hunt to try to find that exact info, and was able to come up with one site (which, alas, I no longer have on hand, so I’d need to hunt it down again), but it basically stated that they ran some computational tests and the Pi came out pretty much matching a Core 2 Duo E8300.

So… with the MASSIVE amount of proof I just provided, you can certainly take it as you will. It would be pretty nice for someone to build a transcode-measuring tool (Linux/Windows based) to test things like this on all hardware; I would certainly love to see some hard data, not hearsay, but it’s all I have to go on, so “I want to believe.” The power of real distributed processing has long been known, so I’m not sure why this is an odd thing to talk about in the first place… I mean, if a Pi 3 can even handle half a 1080p transcode, buying 10 of those is dirt cheap.

+1 for the best feature request

This feature request got another like from me. I didn’t see that this thread existed, so I’ll quote myself from a similar thread I started (which is now closed).

I’d really like PMS to support a load-balancing feature, so that PMS could combine multiple systems to transcode for my entire household. Specifically, I mean a load-balancing feature that divides a transcoding session up and splits it among the systems in the cluster.
Ideally such a feature should be efficiently designed, so that PMS is intelligent and only uses the portion of processing power on each system that’s free and not being used at that moment by other programs running on that system.

I have a couple of gaming desktops with powerful CPUs in my household doing nothing most of the day: they’re only in use for about 33% of the day, and of that 33%, half is gaming and the other half is low-power tasks such as web browsing. So the CPUs of those systems are basically idle for about 83% of the entire day. That’s a lot of processing-power potential that could be used for load balancing a transcoding session.

Also, implementing a load-balancing feature would be very beneficial for Plex Inc. with regard to the Plex Cloud service as well. That way Plex Inc. wouldn’t have to use Azure, but could rent servers from the lowest bidder and dynamically scale with demand. Heck, it could even open up the possibility for Plex users worldwide to voluntarily (opt-in) offer transcoding time to each other when they have processing power and bandwidth to spare, and hypothetically earn points for perks in return.
The two points above are just brainstorm ideas of mine, not part of the main topic; I’m just trying to lay out as many compelling arguments as I can for this load-balancing feature.
Before anyone points out that this has already been requested: from what I’ve read, a load-balancing feature has been requested multiple times, but not with this specific implementation. Unlike most prior requests, I’m asking for the option to combine the CPUs of multiple systems in my household into one big CPU, splitting the load among them simultaneously for every single transcode. I hope it’s clear what I mean.
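One way to read “split a single transcode among systems” is to chop the title into fixed-length time segments, transcode the segments on different nodes in parallel, and stitch them back in order. Below is a minimal segment planner; the segment length and node names are illustrative assumptions, not anything Plex implements.

```python
# Hypothetical planner that splits one transcode job into time segments
# and round-robins them across idle household machines.

def plan_segments(duration_s, nodes, segment_s=60):
    """Return a list of (start, end, node) segment assignments."""
    plan = []
    start = 0
    i = 0
    while start < duration_s:
        end = min(start + segment_s, duration_s)
        plan.append((start, end, nodes[i % len(nodes)]))
        start = end
        i += 1
    return plan

# A 2-hour movie in 10-minute segments across two gaming desktops.
plan = plan_segments(7200, ["desktop-1", "desktop-2"], segment_s=600)
```

In practice segment boundaries would need to fall on keyframes and the outputs would be concatenated in order, but the division of labor looks like this.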


+1. Would also love to see this implemented. Currently, running Plex on my Synology DS415+ does not scale very well. One stream is OK, but when I try to stream just one more 1080p movie, everything gets stuck. Running multiple Plex instances is also not great, because they don’t synchronize which episodes or movies you’ve watched.

Yes! Offloading the database and being able to add multiple transcode servers would be a huge step forward for Plex.

It is how most commercial transcoders work.


This has always been a long-time request of mine as well, and I’ve mentioned it in the past a few times.

I’m not so interested in distributed encoding anymore as the way forward. Plex has released a couple of betas of its hardware GPU encoding, which seems like it would pave the way much more easily in the future. Instead of distributed encoding, I’d love to see Plex support 3 or 4 hardware encoders. I think it would be much easier to use the onboard i7 GPU, slap a couple of ATI graphics cards in the Plex server, and call it a day. With the ability to support 3 or 4 high-bitrate encodes per GPU, or 5 to 7 1080p-to-720p encodes per GPU, this would be a much easier and far less convoluted way of bringing more transcoding power to the Plex server.

Don’t get me wrong. I would also love to see Plex able to use a real relational database, such as the free community version of MySQL, instead of SQLite. That would make it far easier to run a couple of Plex installations off the same database for additional scale-out performance.

But in all honesty, I don’t think distributed encoding is the way of the future.
Carlo
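The back-of-envelope for that multi-GPU route is simple: total concurrent encodes are GPUs times per-GPU sessions, using the per-GPU figures from the post (3-4 high-bitrate, 5-7 1080p-to-720p) as stated estimates rather than measured numbers.

```python
# Capacity estimate for the multi-GPU suggestion above.
# Per-GPU session counts are the post's own estimates, not benchmarks.

def gpu_capacity(gpus, per_gpu_high=3, per_gpu_720=5):
    """Concurrent encode totals at the conservative end of the estimates."""
    return {"high_bitrate": gpus * per_gpu_high, "to_720p": gpus * per_gpu_720}

print(gpu_capacity(3))  # 3 GPUs: 9 high-bitrate or 15 720p encodes
```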

@cayars said:
This has always been a long-time request of mine as well, and I’ve mentioned it in the past a few times.

I’m not so interested in distributed encoding anymore as the way forward. Plex has released a couple of betas of its hardware GPU encoding, which seems like it would pave the way much more easily in the future. Instead of distributed encoding, I’d love to see Plex support 3 or 4 hardware encoders. I think it would be much easier to use the onboard i7 GPU, slap a couple of ATI graphics cards in the Plex server, and call it a day. With the ability to support 3 or 4 high-bitrate encodes per GPU, or 5 to 7 1080p-to-720p encodes per GPU, this would be a much easier and far less convoluted way of bringing more transcoding power to the Plex server.

Don’t get me wrong. I would also love to see Plex able to use a real relational database, such as the free community version of MySQL, instead of SQLite. That would make it far easier to run a couple of Plex installations off the same database for additional scale-out performance.

But in all honesty, I don’t think distributed encoding is the way of the future.
Carlo

I respectfully disagree with your thoughts on distributing transcoding.
But I do agree that Plex needs to ditch SQLite as that’s really just meant to be used for small projects as a way of storing config data and other small data sets.

Distributed Transcoding

Scaling horizontally is the only type of scaling that makes sense for this issue.
What you’re talking about is vertical scaling, which has hard limits; your motherboard has a finite number of CPU sockets and a finite number of PCIe slots.
This limits the number of processors and graphics cards that you can put in a single machine.
Many people have old PCs lying around doing nothing that would be perfect for offloading some transcoding work.

For people who just want to be able to support a couple, or even a couple dozen (a far reach), concurrent transcodes, what you’re suggesting could work, if the Plex team implemented what you specified and you threw a lot of expensive hardware at the problem.
But for people who want to be able to share their Plex server with all of their friends and family, your solution would fall short for many.

I think that it would be great if the Plex team would release GPU transcoding into the final non-beta version, but I would rather see distributed transcoding first.
They can clearly do it, as they already do via their Plex Cloud option. But they limit it to 3 concurrent transcodes, which is basically useless.

Database

I do agree that the Plex team should stop using SQLite ASAP and switch to a better DB solution… I would push for PostgreSQL… but I don’t think it can be embedded, and users would have to install it alongside Plex Server :frowning:
I guess if the free community edition of MySQL can run as an embedded database, it would be much better than SQLite.


I’m with @cayars on the matter of distributed transcoding being an unnecessary investment of time and effort by Plex, when the need can be handled adequately now by HW transcoding (once that settles in) and/or higher core-count processors.

We don’t need to scale to Netflix levels. IMO, a reasoned choice of media formats resulting in DirectPlay under most circumstances, coupled with a modern processor or GPU, can meet about 99% of the situations PMS gets used in.