Distributed Transcoding/Scaling

@MattWeiler said:
What don’t you understand about the upload bandwidth not being capable enough to handle multiple DirectPlay streams at one time?

OK…You’ve got constrained upload bandwidth. Whether transcoded or DirectPlay, a given bit-rate consumes the limited available upload capacity exactly the same either way. DirectPlay avoids beating your CPU to death and consequently limiting how many other streams you can have.

Additional disk space required for holding “optimized” videos is about the cheapest thing there is regarding computers these days. Certainly less than setting up a transcode farm of some kind.

There are some things in Plex already which help you manage/limit bandwidth usage. Are you using any of them?

What about Plex Cloud or Plex Sync? Pretty much tailor-made for situations like this.

If you have limited upload speed then syncing or moving items to the cloud isn’t ideal.

Agreed that those have their own shortcomings in that it takes a certain amount of time to get the media out there. But once you do…

There’s just nothing Plex or any other streaming solution can do to work around constrained Internet bandwidth if you insist on streaming across it - whether transcoded or DirectPlay. “Netflix quality” requires about 5 Mbps for 1080p. UHD maybe 25 Mbps.

What is the actual Internet bandwidth constraint in this situation? If what you have to work with is 10 Mbps upload then you’ve got maybe two or three DVD-quality concurrent remote streams at most if you want to avoid a lot of buffering or actually be able to use your Internet connection for anything else at the same time. There isn’t enough PMS transcoding capacity in all of Christendom to change the basic equation in play.
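The arithmetic behind that estimate can be sketched in a couple of lines. The bitrates and the 20% headroom reserve are illustrative assumptions, not measurements:

```python
# How many remote streams fit in a constrained upload link, reserving some
# headroom so the connection stays usable for other traffic. All figures
# here are illustrative assumptions.
def max_concurrent_streams(upload_mbps, per_stream_mbps, headroom=0.8):
    return int((upload_mbps * headroom) // per_stream_mbps)

print(max_concurrent_streams(10, 3))  # ~DVD quality on a 10 Mbps uplink
print(max_concurrent_streams(10, 5))  # "Netflix quality" 1080p at ~5 Mbps
```

Transcoded or DirectPlay makes no difference to this math; only the delivered bitrate matters.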

@dduke2104 said:
Agreed that those have their own shortcomings in that it takes a certain amount of time to get the media out there. But once you do…

There’s just nothing Plex or any other streaming solution can do to work around constrained Internet bandwidth if you insist on streaming across it - whether transcoded or DirectPlay. “Netflix quality” requires about 5 Mbps for 1080p. UHD maybe 25 Mbps.

What is the actual Internet bandwidth constraint in this situation? If what you have to work with is 10 Mbps upload then you’ve got maybe two or three DVD-quality concurrent remote streams at most if you want to avoid a lot of buffering or actually be able to use your Internet connection for anything else at the same time. There isn’t enough PMS transcoding capacity in all of Christendom to change the basic equation in play.

That is not at all correct, and Plex already has a solution for this: it’s called transcoding. That is why it exists, to take a file in whatever format you have it and encode it on the fly to work at whatever speed and format your device needs. In your hurry to dismiss our needs, and your refusal to so much as attempt to understand them, you have completely walled off a decent-sized portion of the community and failed to see past the current moment.

4K is already here, and 8K is around the corner. A nice i7-4950 struggles to do more than one stream of 4K content, and the bulk of the hardware out there does not play 4K (HEVC) directly. Even Chromecast Ultra, which SHOULD play it, more than half the time has to transcode due to lack of device profile support from Plex. The needs and reasons go well beyond that, ranging from storage costs, hardware costs, user load, devices in use, 3rd-party devices (sight/hearing impaired) and beyond.

Every tired “solution” you are giving is one we are all already blisteringly aware of, and they just do not work, cost too much, or are a burden we do not feel we need nor want to deal with. We are asking for the solution we want, we have shown that the tech mostly exists already in their code, and no decent reason has been presented thus far as to why this cannot be implemented. If Plex needs to force a very specific build for our transcode boxes, like Ubuntu 64 only, or CentOS, or even Windows, we do not care; we can work with the limitations presented as long as we can get what we need.

As people have already stated multiple times, this need is obviously not your own. We respect your right to not need it; you have voiced your opinion, and at this point any further comments from you are intentionally negative, rude, unwanted, and not useful. You have argued with people and tried to force your own views and your own needs on us as if they are the only ones that exist, and it is growing increasingly tiring and toxic.

Please stop commenting in this thread in such a fashion if all you wish to do is derail our request. If you have a request that you feel is more important, that is fine; if it is a good request I will even support you. But do not poison our request for your own perverse benefit, nor try to ruin what we have all deemed necessary tech moving forward for your own selfish reasons. If you continue to be so rude and toxic in this thread I will start reporting you to the mods and let them deal with you; this is supposed to be a friendly, helpful community of like-minded people looking out for the best interests of each other, not a shouting match of rudeness and selfishness like it’s Facebook.


Various things related to transcoding, including a lot of conjecture and hoping, got interleaved in this thread, which I was not attempting to squelch, steer, or even acknowledge. Everyone is welcome to dream to their heart’s content and request accordingly.

I was specifically speaking to the point raised about streaming (transcoding or not) across a rather vaguely defined Internet bandwidth constraint. My questions and assertions on that point have not been countered with any facts to the contrary. My attempts are clearly not being appreciated by you, but then again I wasn’t really speaking to or about you (@boboki) in the first place. Kind of an inherent problem with a forum sometimes: who’s talking to whom?

I’m sure the mods keep an eye on things already and will rein me in if deserved. No hard feelings on my part. :slight_smile:

@dduke2104 said:

@MattWeiler said:
What don’t you understand about the upload bandwidth not being capable enough to handle multiple DirectPlay streams at one time?

OK…You’ve got constrained upload bandwidth. Whether transcoded or DirectPlay, a given bit-rate consumes the limited available upload capacity exactly the same either way. DirectPlay avoids beating your CPU to death and consequently limiting how many other streams you can have.

Additional disk space required for holding “optimized” videos is about the cheapest thing there is regarding computers these days. Certainly less than setting up a transcode farm of some kind.

There are some things in Plex already which help you manage/limit bandwidth usage. Are you using any of them?

What about Plex Cloud or Plex Sync? Pretty much tailor-made for situations like this.

I don’t agree at all.

Bandwidth

I wouldn’t transcode at the same bitrate as the source… that makes no sense.
The transcoding would be at a lower bitrate, so the size of the video being streamed is greatly reduced and the number of bits per second decreases.
This would allow more concurrent transcoded streams to be viewed at once.

Optimized Versions

To have all of my media duplicated in several “optimized” versions would cost me upwards of several thousand dollars in HDDs, while building a simple Ryzen 16-thread machine, no GPU needed, would cost me around $800 and would satisfy my needs.
Also, many people have old PCs, or just PCs that are not actively being used all the time, which they could add to their transcoding swarm. For those people, there would be no additional hardware cost since they already own the machines.

PMS Built-In

What features are built into Plex to alleviate bandwidth usage?
The only ones that I’m aware of are:
1) Transcoding (limiting the quality for remote playback).
2) Optimized versions.

Plex Cloud

My understanding of Plex Cloud is that you can only have 1 Plex Cloud account associated with your Plex account, and each Plex Cloud account can only transcode 3 streams at once. This would help people who are just a few concurrent streams over their hardware limits, but not if one needs to do 5 more than their PMS hardware is capable of.

@MattWeiler said:
I don’t agree at all.

My questions and suggestions were based on some hunches. Let’s see where I might have gotten off-track.

-What is your actual upload bandwidth? 10 Mbps? No economical possibility of increasing it?

-What is your expectation about the quality you want to deliver to your remote clients? DVD? 1080? Something in-between?

-What’s your goal for the number of concurrent remote and local streams - transcoded or not?

Yeah, no single source media version can adequately cover both high bit-rate local clients and remote clients operating under a severe Internet bandwidth constraint anywhere along the way especially with multiple remote clients involved. Assuming you’re determined to stream across that kind of bandwidth constraint your choices are either on-the-fly transcode (there, I said it :blush:) or PMS’ Optimize (transcode in advance).

As you noted, Optimize means additional disk storage but I don’t see the “thousands of dollars” involved since a number of name-brand 8 TB NAS disks can be had for $350 or less. How much do you use now?

Assuming truly limited upload bandwidth, a single recent-vintage PC (your own setup?) could probably already fill it with transcoded streams. At that point, I don’t see how adding another server, making the existing server faster, adding a GPU, etc. is going to have any effect on your already-saturated upload capacity. It just shifts the same resulting transcode output back onto the same choked path.

Use of DirectPlay insofar as possible (and that would include Optimize) improves playback for everyone. In some marginal situations, it could indeed allow an additional remote transcode that wouldn’t be possible otherwise. Are you doing DirectPlay with your local clients now?

I don’t use Plex Cloud myself but do understand how it basically works. (Where’s @nigelpb when you need him!) With Plex Cloud, you’d be bypassing your stated upload constraint entirely and therefore maybe even the need to transcode that high bit-rate media in the cloud in the first place. If so, that Plex Cloud concurrent transcode limit of three wouldn’t apply. But depending upon exactly how much upload bandwidth and transcoding capacity you have to work with, I’m not sure your own PMS instance would get you any more than the same three transcodes Plex Cloud allows. The downside of Plex Cloud is that it’s going to take a while to get a library of any significant size pushed into the cloud.

My own Internet bandwidth has a lot of growing room, so I don’t normally have to worry much about transcoding for remote clients. But I don’t see any downside to first going with a high-Passmark PC. I’m holding off on upgrading my PMS until I see exactly how HW acceleration settles. 4K/HEVC doesn’t really change things that much in terms of my basic strategy: DirectPlay whenever possible, even if it means a couple of extra disks to accommodate both a 4K and a 1K version for the 4K (when I get some) and 1K clients respectively.

You’re missing several points.
Plex Optimize does a terrible job compared to doing the job yourself outside of Plex. The quality it produces is basically the same as when it streams.

Thus, instead of having to convert your whole library to keep a 2nd or 3rd file to support different bandwidths and resolutions, you want to do it in real time.

Ideally, if we could do real-time encoding to h.265, you could make that 10Mb pipe act more like a 15Mb pipe with h.264 because it’s more efficient. However, that takes some serious CPU use or hardware assistance to do. So if you had a distributed transcoder, you could make use of the CPU and/or GPU on another computer to assist the Plex server. For conversion to h.265 this would almost be a requirement to get a few transcodes going at the same time.
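The “10Mb pipe acting like a 15Mb pipe” estimate is just an efficiency multiplier. A minimal sketch, assuming h.265 delivers comparable quality at roughly two-thirds the h.264 bitrate (a common rule of thumb; actual savings vary by content and encoder):

```python
# Express a constrained upload link in "h.264-equivalent" capacity when the
# transcoder emits h.265 instead. The 1.5x factor is an assumed rule of thumb.
H265_EFFICIENCY = 1.5

def effective_pipe_mbps(physical_mbps, efficiency=H265_EFFICIENCY):
    """Capacity the pipe 'feels like' to clients, in h.264 terms."""
    return physical_mbps * efficiency

print(effective_pipe_mbps(10))  # the "10Mb pipe acts more like 15Mb" claim
```

At roughly 3.3 Mbps of h.265 per 720p stream, three streams fit the 10Mb pipe, which lines up with the estimate below.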

You could probably get a good solid 3 HD encodes in that 10Mb pipe with h.265 at 720p. With ABR coming to more clients very soon, this would work even better, as the clients could maximize use of the pipe.

You could not possibly do this on your present Plex server right now without HW assist.

PS if you don’t think it would be expensive to add additional storage to accommodate a 2nd file then take a look at my stats in my sig and try and put a rough price on the additional storage I’d need.

@dduke2104 said:

@MattWeiler said:
I don’t agree at all.

My questions and suggestions were based on some hunches. Let’s see where I might have gotten off-track.

-What is your actual upload bandwidth? 10 Mbps? No economical possibility of increasing it?

-What is your expectation about the quality you want to deliver to your remote clients? DVD? 1080? Something in-between?

-What’s your goal for the number of concurrent remote and local streams - transcoded or not?

Yeah, no single source media version can adequately cover both high bit-rate local clients and remote clients operating under a severe Internet bandwidth constraint anywhere along the way especially with multiple remote clients involved. Assuming you’re determined to stream across that kind of bandwidth constraint your choices are either on-the-fly transcode (there, I said it :blush:) or PMS’ Optimize (transcode in advance).

As you noted, Optimize means additional disk storage but I don’t see the “thousands of dollars” involved since a number of name-brand 8 TB NAS disks can be had for $350 or less. How much do you use now?

Assuming truly limited upload bandwidth, a single recent-vintage PC (your own setup?) could probably already fill it with transcoded streams. At that point, I don’t see how adding another server, making the existing server faster, adding a GPU, etc. is going to have any effect on your already-saturated upload capacity. It just shifts the same resulting transcode output back onto the same choked path.

Use of DirectPlay insofar as possible (and that would include Optimize) improves playback for everyone. In some marginal situations, it could indeed allow an additional remote transcode that wouldn’t be possible otherwise. Are you doing DirectPlay with your local clients now?

I don’t use Plex Cloud myself but do understand how it basically works. (Where’s @nigelpb when you need him!) With Plex Cloud, you’d be bypassing your stated upload constraint entirely and therefore maybe even the need to transcode that high bit-rate media in the cloud in the first place. If so, that Plex Cloud concurrent transcode limit of three wouldn’t apply. But depending upon exactly how much upload bandwidth and transcoding capacity you have to work with, I’m not sure your own PMS instance would get you any more than the same three transcodes Plex Cloud allows. The downside of Plex Cloud is that it’s going to take a while to get a library of any significant size pushed into the cloud.

My own Internet bandwidth has a lot of growing room, so I don’t normally have to worry much about transcoding for remote clients. But I don’t see any downside to first going with a high-Passmark PC. I’m holding off on upgrading my PMS until I see exactly how HW acceleration settles. 4K/HEVC doesn’t really change things that much in terms of my basic strategy: DirectPlay whenever possible, even if it means a couple of extra disks to accommodate both a 4K and a 1K version for the 4K (when I get some) and 1K clients respectively.

My internet upload speed is 20 Mbps (I just used 10 to be consistent with your last comment).
20 Mbps is the fastest that I can get where I live.

If I could get a true gigabit internet connection, 1Gbps up/down, then I wouldn’t worry about transcoding.
But even with such a capable connection, I could see DirectPlaying 4K content bringing even a gigabit connection to its knees if there were enough concurrent streams.

I just want to get one thing clear: I prefer DirectPlay when possible. From within my own network, I have it set for DirectPlay… it’s much better that way, no compromising on quality or anything.

HDD Cost

As for the cost, I live in Canada and for us the 8TB WD Red drives (I haven’t trusted Seagate for years) come to about $450 each (after tax).
My NAS currently has 8 of those 8TB WD Red drives in it, so being able to store multiple copies of all of my content would require that I likely double my capacity.
My reason for saying that I would have to double my capacity is that storing “optimized” versions doesn’t mean just 1 lower-bitrate/resolution version; it means several lower-bitrate/resolution versions for various devices. Storage space can add up quickly :frowning:
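As a back-of-the-envelope check on that cost, here is the math with the drive price quoted above (8 TB WD Red at roughly $450 CAD after tax). This deliberately ignores RAID/parity overhead and whether the NAS even has free bays, both of which push the real cost higher:

```python
# Cost of adding raw capacity in whole drives, using the figures from the post.
DRIVE_TB, DRIVE_COST_CAD = 8, 450

def cost_to_add(extra_tb):
    drives = -(-extra_tb // DRIVE_TB)  # ceiling division: whole drives only
    return drives * DRIVE_COST_CAD

# Doubling an 8-drive, 64 TB raw pool to hold optimized copies:
print(cost_to_add(64))  # thousands of dollars, as claimed
```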

Also, the “optimized” versions feel more like a Netflix thing to do… I don’t expect all of my content to be viewed very often, hence why transcoding feels like a better fit.

Plex Cloud

My understanding of Plex Cloud is that you have to have your content hosted online (DropBox, Google Drive, etc…).
I think Plex Cloud is a nice idea for people who have small collections, but how much would it cost me monthly to store 30+ TB of content in the cloud?
Also, a Plex Cloud account only allows 3 concurrent transcodes at a time; that wouldn’t cut it.

H/W Acceleration

I would love to see the Plex team get hardware acceleration working really well, but I don’t see that as a replacement for distributed transcoding.
Horizontal scaling is the industry standard way of growing any system which has large load or the potential for large load.
Vertical scaling is normally seen in immature systems which were not well thought out upon creation or never anticipated such load.

I'm not raging on the Plex team, they have built a phenomenal system and they built it out of passion; I admire them for that.


Throwing hardware at a software issue. Seems reasonable.
I would also like to see a distributed coding solution:

  • “Just get a bigger server” is not a solution for everybody -> cost.
  • “Just pre-encode everything” is not a solution for everybody -> space and cost.
  • Most people have a ton of other devices (HTPC/workstation/RPi cluster) lying around that are not used to full capacity all the time.
  • Being able to offload heavy work from a low-power NAS would make it more capable.

I would love to see distributed transcoding, but I think the solution may be better served by a round-robin type approach.
Within the Plex environment, they could offer a mirrored server on other machines that, with multiple port-forwarding options within Plex, can be redirected to another instance of PMS on the LAN. This would then evenly distribute each client request to the next machine in succession.

Ex.
PMS 1 → option 1
PMS 2 → option 2
PMS 3 → option 3

PMS 1 Port forward priority 1,2,3
PMS 2 Port forward priority 2,3,1
PMS 3 Port forward priority 3,2,1

Each request goes up to the next server until all resources are exhausted.

This could even be promoted as a Plex Power User Package.
Multi-plex, Mega-Plex, Super-Plex??
Available to Plex-pass users.
I have been wanting an option like this forever…

What you just described (removing the ports stuff) is how typical simple load balancing works at present in web farms. The problem with a simple approach like this: user 1 goes to server A and direct plays, user 2 goes to server B and transcodes, user 3 goes to server A and direct plays, user 4 goes to server B and transcodes, etc.

By dumb luck, Server A got the direct-play users and Server B got the transcode users, so it’s really just as bad as running one server. Also, you would need a way to track watched status and keep the libraries in sync among the different servers.

To do this you really need to have/use a standalone multi-user relational database that all server instances would share. If we had this then we could load balance using the CPU % utilization instead of just a round robin technique.

Carlo
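The round-robin vs. CPU-utilization contrast above can be sketched with a toy scheduler. The server names and load figures are invented, and a real implementation would also need the shared library database mentioned above:

```python
import itertools

# Toy contrast between the two balancing policies: blind round-robin
# vs. picking the least-loaded server.
servers = {"A": 10.0, "B": 10.0}  # current CPU % per server

def round_robin(pool):
    """Rotate through servers regardless of what each one is doing."""
    cycle = itertools.cycle(pool)
    return lambda: next(cycle)

def least_loaded(pool):
    """Always pick the server with the lowest current CPU utilization."""
    return lambda: min(pool, key=pool.get)

pick = least_loaded(servers)
first = pick()           # "A" on the initial tie (dict order)
servers[first] += 40.0   # an expensive transcode session lands on it
second = pick()          # now "B": the next session goes to the idler box
print(first, second)
```

Round-robin would have sent the second session back to whichever server was next in rotation, loaded or not; balancing on CPU % avoids stacking transcodes on one box.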

The idea was that the software would perform as a single account across multiple machines once in a “super user” mode. The additional forwarding ports on the machine(s) would identify the next available server. The above example showed that 9 clients can be served. Very scalable with proper hardware and bandwidth planning.

Things to consider:
  • Knowing what your machine(s) and/or network can handle.
  • Using similar hardware to ensure the best inter-compatibility.

It would be poor planning to deploy any solution without understanding these things.

I am for CPU % utilization, but I believe setting transcoding intervals would help that situation better.

I don’t think we are following what you are talking about with this port forwarding stuff. Changing inbound ports for Plex is off the table since it’s very rigid in the way ports work. Plex itself will only listen on inbound port 32400 and you can’t change this.

What you can do is put a transparent load balancer in front of your Plex servers and have it balance the load based on the CPU usage of your servers. You can do this today but then you have to manage the servers to keep all content in sync and all your watched status and things in sync as well. It’s not an ideal way to horizontally scale at present.

But this thread is about distributing the encoding job and not really other ways to scale out our systems. One server can handle a lot of streams. It’s only when transcoding is needed that we run into problems.

It would be far better to have an ecosystem where the devices themselves could lend a hand in this transcoding. Clients like the Nvidia Shield TV make great players, can also run the server itself (although limited), and have an onboard HW-based Plex transcoder. If you had 2 Shield TVs in the house on Gigabit Ethernet, your Plex server could get at least 2 HW-based encodes out of each of them for another 4 transcode streams.
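Pooling capacity this way is simple accounting. The device names and per-device slot counts below are assumptions lifted from the example (two Shield TVs contributing two HW encodes each on top of the server’s own capacity):

```python
# Hypothetical transcode pool: each LAN device contributes some number of
# concurrent encode slots to the server.
devices = {
    "PMS server (CPU)": 2,
    "Shield TV #1 (HW)": 2,
    "Shield TV #2 (HW)": 2,
}

def total_transcode_slots(pool):
    return sum(pool.values())

print(total_transcode_slots(devices))  # server's own 2 plus 4 from the Shields
```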

Devices like the Shield TV, NUCs with PMP or those using minis as their playback device would be able to take advantage of this quickly and easily.

Think about this: would you rather spend $80 US on a Roku that is limited to playback of h.264/h.265, or spend $180 for a Shield TV that can play back literally everything you can throw at it, plus add 2 additional transcode streams to your server?

So not only would you probably never need to transcode to your own devices (hooked to TVs) but would also gain encoding power. It would make it much easier to build a more powerful server system that has low energy requirements.

Heck with a decent server that has an onboard GPU and a couple of Shield TVs/NUCs in the house most of us would have our transcoding covered.

Then of course you could add your other desktops/laptops into the mix as well if you have them available.

Carlo

@MattWeiler said:
[my own earlier comments snipped]

Never said that there were any zero cost/effort solutions; just some that might work. Anyone who’s already put 64 TB raw (but only 30 TB in use?) into Plex libraries has clearly made a substantial investment and is presumably prepared to make commensurately more to get something of value.

It seems as though you’re happy with PMS in regard to use by local clients today. Your complaint is all around handling remote clients?

20 Mbps upload isn’t the severe bottleneck I thought was being faced, though I do know people with just 10 up. It is still rather thin for multiple (how many?) concurrent remote clients, some of whom may have unrealistic expectations of what is possible given that constraint. It still seems that a single, high-end PMS server could fill that upload pipe, but it wasn’t clear whether that was the actual bottleneck/problem you are trying to deal with. There was no mention of what you might have tried to limit or manage remote client bandwidth utilization given the current transcoding. I didn’t see any response to my point about how simply adding transcoding capacity doesn’t seem able to change that outcome.

There was mention of not wanting to Optimize the entire 30TB library because it wasn’t all going to be watched remotely. Excellent point. Hypothetically, how much would be considered Optimize-worthy? I don’t know anything about the remote clients attempting to be serviced but creating multiple low resolution or low bit-rate copies is likely overkill and of minimal benefit. One smart choice can likely make all the remote users as happy as possible given the 20 Mbps upload constraint. Optimize isn’t perfect but if your actual target is something like a couple of hundred movies then it can certainly be tried on a small-scale first with little effort. I have no comeback on how Optimize “feels”. It either works for you or it doesn’t.

I’m no apologist for Plex Cloud but knowing now that you’d want to push a 30 TB library into the cloud at 20 Mbps makes it less than ideal regardless of the incremental costs of using it.

FWIW, if you’ve already got a pile of unused PCs that could reasonably do transcoding then I understand the perceived simplicity of harnessing them into a farm if the rather sophisticated tools necessary to do it were present. Just the option of a two-node, active/active PMS cluster coupled with iSCSI backend storage creates some very interesting scaling possibilities. Then again using a dual-processor workstation or small server might be almost as good at less cost. What if PMS was made 64-bit? Or maybe run a gently-used four-socket server? Not many of us will need anything like that unless HW transcoding is a total substantive failure. Spousal approval would be problematic in my case. (Rhetorical question follows: How many of us are in a situation where we’ve retired and retained so many apparently still-useful machines?)

PMS itself is not open-source, so I doubt anyone outside of Plex can get enough insights into it to make distributed transcoding work. Personally, I’m not into waiting indefinitely on Plex to provide additional solutions to a problem I can solve some other way now.

@dduke2104 said:

@MattWeiler said:
[my own earlier comments snipped]

Never said that there were any zero cost/effort solutions; just some that might work. Anyone who’s already put 64 TB raw (but only 30 TB in use?) into Plex libraries has clearly made a substantial investment and is presumably prepared to make commensurately more to get something of value.

It seems as though you’re happy with PMS in regard to use by local clients today. Your complaint is all around handling remote clients?

20 Mbps upload isn’t the severe bottleneck I thought was being faced, though I do know people with just 10 up. It is still rather thin for multiple (how many?) concurrent remote clients, some of whom may have unrealistic expectations of what is possible given that constraint. It still seems that a single, high-end PMS server could fill that upload pipe, but it wasn’t clear whether that was the actual bottleneck/problem you are trying to deal with. There was no mention of what you might have tried to limit or manage remote client bandwidth utilization given the current transcoding. I didn’t see any response to my point about how simply adding transcoding capacity doesn’t seem able to change that outcome.

There was mention of not wanting to Optimize the entire 30TB library because it wasn’t all going to be watched remotely. Excellent point. Hypothetically, how much would be considered Optimize-worthy? I don’t know anything about the remote clients attempting to be serviced but creating multiple low resolution or low bit-rate copies is likely overkill and of minimal benefit. One smart choice can likely make all the remote users as happy as possible given the 20 Mbps upload constraint. Optimize isn’t perfect but if your actual target is something like a couple of hundred movies then it can certainly be tried on a small-scale first with little effort. I have no comeback on how Optimize “feels”. It either works for you or it doesn’t.

I’m no apologist for Plex Cloud but knowing now that you’d want to push a 30 TB library into the cloud at 20 Mbps makes it less than ideal regardless of the incremental costs of using it.

FWIW, if you’ve already got a pile of unused PCs that could reasonably do transcoding then I understand the perceived simplicity of harnessing them into a farm if the rather sophisticated tools necessary to do it were present. Just the option of a two-node, active/active PMS cluster coupled with iSCSI backend storage creates some very interesting scaling possibilities but not one many of us need unless HW transcoding is a total substantive failure. (Rhetorical question follows: How many of us are in a situation where we’ve retired and retained so many apparently still-useful machines?)

PMS itself is not open-source, so I doubt anyone outside of Plex can get enough insights into it to make distributed transcoding work. Personally, I’m not into waiting indefinitely on Plex to provide additional solutions to a problem I can solve some other way now.

Yes, my PMS is capable of satisfying my local needs but not my remote ones and many other people are in the same boat as myself.

Distributed Transcoding can Solve Bandwidth Issues

Take my internet connection as an example, 20 Mbps up (the down doesn’t matter).
When I rip a Blu-ray, I rip the video at 4096 Kbps at a resolution of 1920 x 1080 (obviously the resolution doesn’t matter when it comes to file size).

Now, if I want to remotely play that video via Direct Play from a friend’s house, that will use approximately 1/5 of my available upload bandwidth; that’s fine, I can do that no problem.
Now, if my brother, my dad, my mother-in-law and 1 of my friends want to watch a movie from my PMS at the same time (maybe Friday evening after work… it’s a popular time), well, now I have a problem. My PMS is uploading at around 20 Mbps. Most people’s internet connections are not going to give a consistent 100% reliable upload speed, so it’ll dip once in a while and people will see buffering.

Now if we put distributed transcoding into the mix, we can say all remote playback must be transcoded to 2048 Kbps (or lower if you like). In my situation, doing this will double the number of concurrent streams available from my PMS.
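The doubling claim is just integer division over the pipe. A sketch using the post’s own numbers (20 Mbps up, 4096 Kbps rips, 2048 Kbps transcode target):

```python
# Concurrent streams that fit a 20 Mbps uplink at a given per-stream bitrate.
UPLOAD_MBPS = 20.0

def concurrent_streams(per_stream_kbps):
    return int(UPLOAD_MBPS * 1000 // per_stream_kbps)

print(concurrent_streams(4096))  # Direct Play at the rip bitrate
print(concurrent_streams(2048))  # transcoded: roughly double the streams
```

In practice you would reserve some headroom rather than run the uplink flat out, so the real counts come in a bit lower.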

Optimize Some Content

I don’t like the idea of guessing which content people on my PMS will want to watch; that just feels very kludgy and lazy.
My dad likes westerns, but nobody else on my PMS does, so I’d be creating optimized copies that only one viewer would ever use.

Plex Cloud

Again, Plex Cloud is a neat idea, but it doesn’t seem like it’s meant for users with collections larger than a few GBs.

Outside Help

@dduke2104 said:
I doubt anyone outside of Plex can get enough insight into it to make distributed transcoding work.

There is a project on GitHub which does just what you said likely can’t be done.
github.com/wnielson/Plex-Remote-Transcoder

Complex Issue

I think it’s pretty inappropriate to insinuate that those of us asking for distributed transcoding would perceive it as a simplistic task. Distributing a computer system is not simple, but it is a very important piece of a well-designed architecture.
Those of us who have actually worked on complex distributed computing systems know what it takes and understand that it is a very complex and often difficult issue to resolve.

Extra PCs

You’re absolutely correct that most people don’t have useful retired PCs lying around, but the fact is that many people who would be using a PMS setup do have several PCs in their home which are not always being actively used.
Some machines could be designated as transcoding slaves while you sleep or while you’re at work.
Not all of ones family/friends will have the same work schedule or timezone but they may still like to enjoy your content.
Before you go on a tangent about the load would likely be lower during those times, one could have most of their family elsewhere in the world (other timezone) and thus ones system may experience higher load when you’re at work or asleep.
Also, ones work schedule may be a night shift, so maybe ones friends and family could enjoy ones PMS while one is working.

It’s Enough Already

The point is that just because your usage of PMS is one way, that does not dictate how everyone can or should use it.
People like you are so closed-minded, it’s really unbelievable.
Now, like others have noted before: you’ve made your point and this feature is clearly not for you… but just because you can’t comprehend why others would want it doesn’t mean you have to continually harass others with your opinion over and over and over.

At this point you’re just trolling and that’s not what this community is about.

PMS itself is not open-source, so I doubt anyone outside of Plex can get enough insight into it to make distributed transcoding work. Personally, I’m not into waiting indefinitely on Plex to provide additional solutions to a problem I can solve some other way now.

The PRT project has session-based distributed transcoding mostly working. We have mentioned this a few dozen times now; you cannot just continue to cherry-pick this entire thread, not read, and try to dictate to us what is real and what we want. Please stop.

I’ll add another reason why this could become even more useful in the near future.

We’ve seen the new ABR (adaptive bitrate) streaming being rolled out for mobile devices. This is coming to additional devices as well, and will let the clients manage their own bandwidth and adjust themselves on the fly.

The downside is that this does away with Direct Play: everything needs to be run through the transcoder to achieve it. So down the road your friends could set the client to “automatic” mode and it will make the best use of the pipe size: 2 people could split the 20 Mbps at 10 Mbps each, 3 people get around 7 Mbps each, 4 people 5 Mbps each, etc.

So even if your media WAS Direct Playable, it won’t be with ABR in a situation like this where you try to maximize your upload pipe.
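The even-split behavior described above can be sketched in a few lines; the 20 Mbps pipe is the figure used in this thread, and the assumption (mine, not stated by Plex) is that ABR divides the pipe equally among active streams.

```python
# Hedged sketch of the even-split behavior described above: each ABR
# client gets an equal share of the upload pipe (assumed 20 Mbps, and
# an assumed simple equal division among streams).

PIPE_MBPS = 20

def per_stream_mbps(clients: int) -> float:
    """Bandwidth each stream gets when every stream runs through the transcoder."""
    return PIPE_MBPS / clients

for n in (2, 3, 4):
    print(n, round(per_stream_mbps(n), 1))  # 10.0, 6.7, 5.0
```

Every stream in this model is a transcode, which is exactly why server-side encoding capacity, not just upload bandwidth, becomes the bottleneck.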

Maybe you are like me, with a large upload pipe, but want to make it easy for the people you share with. You don’t want them to have to try Original/Direct Play and then bump it down when it fails or buffers; you want it to just work, so you go with a default setting of ABR so the client always gets the best experience.

In either of these two cases, you need more power on the server to support it. HW-accelerated and/or distributed encoding will be needed depending on the number of streams you need to support. If you have 10 to 15 streams, you are going to need “external” help.

There are tons of reasons to have this feature and few, if any, reasons not to have it.
Carlo

@cayars ,
I see your direction, but I find myself staying away from that because of the varying quality of the devices being asked to provide the transcodes/encodes. I prefer centralized compute power and resources so that I truly know my transcode budget, as well as my network budget.

@cayars

I understand that PMS only listens on port 32400 internally; that’s fine, as what I suggest applies to the external ports.

The concept is to expose multiple external ports, each forwarded to a different LAN IP, all visible to PMS.

Example. PMS1 would see on the Remote Access page:
Server1: Private 192.168.1.33:32400 / Public x.x.x.x:39997 / internet
Additional drop-down options:
Server2: Private 192.168.1.20:32400 / Public x.x.x.x:39998 / internet
Server3: Private 192.168.1.71:32400 / Public x.x.x.x:39999 / internet

Servers 2 and 3 are separate network targets for requests, but would be visible to Server1.
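The forwarding table the proposal implies can be sketched as a simple mapping; the public ports and LAN IPs below are the ones from the example above, and the lookup helper (`lan_target`) is a hypothetical name of my own, not anything PMS exposes.

```python
# Hypothetical sketch of the router port-forwarding table this proposal
# implies: one public port per node, each forwarded to a different LAN
# machine, all of them advertised by PMS1.

FORWARDS = {
    39997: ("192.168.1.33", 32400),  # Server1 (primary PMS)
    39998: ("192.168.1.20", 32400),  # Server2
    39999: ("192.168.1.71", 32400),  # Server3
}

def lan_target(public_port: int) -> str:
    """Resolve a public port to the LAN host:port it forwards to."""
    ip, port = FORWARDS[public_port]
    return f"{ip}:{port}"

print(lan_target(39998))  # 192.168.1.20:32400
```

The point of the sketch is only that the routing decision lives at the edge: every node still listens on 32400 internally, and the public port alone selects which machine serves the request.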

The opportunity I see here is that PMS could then be cloned to other machines, allowing standalone operation or grouping.

If PMS1 can allow other machines to act as the same server on the network, this inherently allows one database for metadata as well as user information.

One could possibly see a platform that may not be distributed transcoding per se, but rather one that distributes the workloads across parallel hardware.