Distributed Transcoding/Scaling

@cayars
I’d be more inclined to believe that the Plex team customized their PMS code to run in a container on an Azure instance.
Something along the lines of a Docker container.

@cayars said:
This was always a long time request of mine as well and I’ve mentioned it in the past a few times.

I’m not so interested in distributed encoding anymore as a way to move forward. Plex has released a couple of betas of its hardware GPU encoding, which seems like it would pave the way much more easily in the future. Instead of distributed encoding, I’d love to see Plex support 3 or 4 hardware encoders. I think it would be much easier to use the onboard i7 GPU, slap a couple of ATI graphics cards in the Plex server, and call it a day. With the ability to support 3 or 4 high-bitrate encodes per GPU, or 5 to 7 1080-to-720-type encodes per GPU, this would be a much easier and far less convoluted way of bringing more transcoding power to the Plex server.

Don’t get me wrong. I would also love to see Plex gain the ability to use a real relational database, such as the free community edition of MySQL, instead of SQLite. That would make it far easier to run a couple of Plex installations off the same database for additional scale-out performance.

But in all honesty, I don’t think distributed encoding is the way of the future.
Carlo

Sorry mate, we can’t all just slap a GPU into our servers, and then another when we need further capacity, and another, and another, etc…

A lot of people are using small form factor machines (HTPCs).

I am using Intel NUCs for my home systems, mostly due to their size and relative performance. I cannot add GPUs, full stop, but I can fairly cheaply keep on adding Intel NUCs when I need them.

All of the commercial transcoders that I have worked with (Elemental, Rhozet, various Telestream products, Baton, Unified Streaming, etc.) have distributed transcoders, because it is simply the most efficient way to add capacity to a transcoding farm.

Adding configuration to make use of GPUs is one thing; adding distributed transcoding is another. They are not mutually exclusive; we can have both.
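To make the idea concrete, here is a minimal sketch of segment-based distributed transcoding, the pattern those commercial farms use: split the source into segments, farm each segment out to a worker, and reassemble the results in order. Everything here is simulated and hypothetical (the "transcode" is just a string transformation); it only illustrates the split/dispatch/reassemble pattern, not any real Plex or vendor API.

```python
from concurrent.futures import ThreadPoolExecutor

def transcode_segment(segment):
    """Stand-in for a real per-segment transcode on a worker node."""
    index, data = segment
    return index, data.lower()  # pretend "transcoding" is lowercasing

def distributed_transcode(source, segment_size=4, workers=3):
    """Split source into fixed-size segments, process them in parallel
    across a worker pool, then reassemble in the original order."""
    segments = [(i, source[i:i + segment_size])
                for i in range(0, len(source), segment_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(transcode_segment, segments))
    # On real workers segments may finish out of order; sort by index.
    results.sort(key=lambda pair: pair[0])
    return "".join(data for _, data in results)

print(distributed_transcode("DISTRIBUTED TRANSCODING"))  # → distributed transcoding
```

The key property is that the segment index travels with the payload, so results can arrive in any order and still be stitched back correctly.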

1 Like

Isn’t that the fun of Cloud Computing? Trying to figure out how things work behind the scenes?

What works locally often doesn’t work well in the cloud, and what works well in the cloud often doesn’t work well locally. :slight_smile:

We all know it’s not the exact same code that we run locally. It even uses different version numbers. A normal user who has both a local server and a cloud server will find many things done differently.

If you have a cloud server, you can pull the logs, read them, and see how some things are different. Much of it is in plain English (well, geek English) if you feel the need to explore. It’s not a clandestine/covert operation. LOL

PS: your logs will self-destruct 2 minutes after closing.

@dduke2104 said:
I’m with @cayars on the matter of distributed transcoding being an unnecessary investment of time/effort by Plex when it can be handled adequately now by HW transcoding (once that settles in) and/or higher core-count processors.

We don’t need to scale to Netflix levels. IMO, a reasoned choice of media formats resulting in DirectPlay under most circumstances, coupled with a modern processor or GPU, can meet about 99% of the situations that PMS gets used in.

You might not want to scale to Netflix levels, but you have no idea how other people want to use their systems.

1 Like

@cayars said:
Isn’t that the fun of Cloud Computing? Trying to figure out how things work behind the scenes?

What works locally often doesn’t work well in the cloud, and what works well in the cloud often doesn’t work well locally. :slight_smile:

We all know it’s not the exact same code that we run locally. It even uses different version numbers. A normal user who has both a local server and a cloud server will find many things done differently.

If you have a cloud server, you can pull the logs, read them, and see how some things are different. Much of it is in plain English (well, geek English) if you feel the need to explore. It’s not a clandestine/covert operation. LOL

PS: your logs will self-destruct 2 minutes after closing.

Now that I think about it more: since they only allow 3 concurrent transcodes per Plex Cloud account, they may not have any distributed transcoding logic at all. They may just be spinning up a PMS in the cloud (within a Docker container or something) and having it perform its own transcodes.
I hope that this is not the case as that would dash my hopes of seeing distributed transcoding in PMS anytime soon.

However, if they are having issues with it, they could look to the GitHub project I mentioned earlier.
That project currently supports Linux installs only, for both master and slaves, but it might be a good start.
I haven’t had a chance to look at that project’s code yet, but I am planning to see if it could be adapted for Windows installs.

@MattTwinkleToes said:

@cayars said:
This was always a long time request of mine as well and I’ve mentioned it in the past a few times.

I’m not so interested in distributed encoding anymore as a way to move forward. Plex has released a couple of betas of its hardware GPU encoding, which seems like it would pave the way much more easily in the future. Instead of distributed encoding, I’d love to see Plex support 3 or 4 hardware encoders. I think it would be much easier to use the onboard i7 GPU, slap a couple of ATI graphics cards in the Plex server, and call it a day. With the ability to support 3 or 4 high-bitrate encodes per GPU, or 5 to 7 1080-to-720-type encodes per GPU, this would be a much easier and far less convoluted way of bringing more transcoding power to the Plex server.

Don’t get me wrong. I would also love to see Plex gain the ability to use a real relational database, such as the free community edition of MySQL, instead of SQLite. That would make it far easier to run a couple of Plex installations off the same database for additional scale-out performance.

But in all honesty, I don’t think distributed encoding is the way of the future.
Carlo

Sorry mate, we can’t all just slap a GPU into our servers, and then another when we need further capacity, and another, and another, etc…

A lot of people are using small form factor machines (HTPCs).

I am using Intel NUCs for my home systems, mostly due to their size and relative performance. I cannot add GPUs, full stop, but I can fairly cheaply keep on adding Intel NUCs when I need them.

All of the commercial transcoders that I have worked with (Elemental, Rhozet, various Telestream products, Baton, Unified Streaming, etc.) have distributed transcoders, because it is simply the most efficient way to add capacity to a transcoding farm.

Adding configuration to make use of GPUs is one thing; adding distributed transcoding is another. They are not mutually exclusive; we can have both.

Depending on the NUC, you most certainly can add an external GPU. But then again, no one made you build a server in a cigar box either. :slight_smile: ← come on, that’s funny.

I have no idea what Plex’s intentions are regarding the ability to scale out, but I know that if this were my code/business, I would certainly limit you regardless of the hardware you have available. It could be the number of users you can have, the number of attached devices, the number of simultaneous streams, the maximum amount of WAN bandwidth you can use, or, most likely, a combination of all of them and more. I’d consider it a “personal” server, and use above some limit(s) is no longer personal, or what/how it was intended to be used.

@cayars said:

@MattTwinkleToes said:

@cayars said:
This was always a long time request of mine as well and I’ve mentioned it in the past a few times.

I’m not so interested in distributed encoding anymore as a way to move forward. Plex has released a couple of betas of its hardware GPU encoding, which seems like it would pave the way much more easily in the future. Instead of distributed encoding, I’d love to see Plex support 3 or 4 hardware encoders. I think it would be much easier to use the onboard i7 GPU, slap a couple of ATI graphics cards in the Plex server, and call it a day. With the ability to support 3 or 4 high-bitrate encodes per GPU, or 5 to 7 1080-to-720-type encodes per GPU, this would be a much easier and far less convoluted way of bringing more transcoding power to the Plex server.

Don’t get me wrong. I would also love to see Plex gain the ability to use a real relational database, such as the free community edition of MySQL, instead of SQLite. That would make it far easier to run a couple of Plex installations off the same database for additional scale-out performance.

But in all honesty, I don’t think distributed encoding is the way of the future.
Carlo

Sorry mate, we can’t all just slap a GPU into our servers, and then another when we need further capacity, and another, and another, etc…

A lot of people are using small form factor machines (HTPCs).

I am using Intel NUCs for my home systems, mostly due to their size and relative performance. I cannot add GPUs, full stop, but I can fairly cheaply keep on adding Intel NUCs when I need them.

All of the commercial transcoders that I have worked with (Elemental, Rhozet, various Telestream products, Baton, Unified Streaming, etc.) have distributed transcoders, because it is simply the most efficient way to add capacity to a transcoding farm.

Adding configuration to make use of GPUs is one thing; adding distributed transcoding is another. They are not mutually exclusive; we can have both.

Depending on the NUC, you most certainly can add an external GPU. But then again, no one made you build a server in a cigar box either. :slight_smile: ← come on, that’s funny.

I have no idea what Plex’s intentions are regarding the ability to scale out, but I know that if this were my code/business, I would certainly limit you regardless of the hardware you have available. It could be the number of users you can have, the number of attached devices, the number of simultaneous streams, the maximum amount of WAN bandwidth you can use, or, most likely, a combination of all of them and more. I’d consider it a “personal” server, and use above some limit(s) is no longer personal, or what/how it was intended to be used.

How then am I supposed to fit a GPU in my cigar box? (Which I love, by the way.)

I have 50TB of NAS, a NUC with an i5 doing all the downloading etc., a NUC i7 for Plex, another NUC i7 as a hypervisor host with 4 decent-spec VMs running on it, and various network devices, all in a tiny little cupboard in my living room.

I have a small house and I can’t have loads of racks of stuff in it, and when I need more transcoding capacity I want to be able to add another small PC, or better still some older PCs that are doing nothing at the moment.

1 Like

@cayars said:

@MattTwinkleToes said:

@cayars said:
This was always a long time request of mine as well and I’ve mentioned it in the past a few times.

I’m not so interested in distributed encoding anymore as a way to move forward. Plex has released a couple of betas of its hardware GPU encoding, which seems like it would pave the way much more easily in the future. Instead of distributed encoding, I’d love to see Plex support 3 or 4 hardware encoders. I think it would be much easier to use the onboard i7 GPU, slap a couple of ATI graphics cards in the Plex server, and call it a day. With the ability to support 3 or 4 high-bitrate encodes per GPU, or 5 to 7 1080-to-720-type encodes per GPU, this would be a much easier and far less convoluted way of bringing more transcoding power to the Plex server.

Don’t get me wrong. I would also love to see Plex gain the ability to use a real relational database, such as the free community edition of MySQL, instead of SQLite. That would make it far easier to run a couple of Plex installations off the same database for additional scale-out performance.

But in all honesty, I don’t think distributed encoding is the way of the future.
Carlo

Sorry mate, we can’t all just slap a GPU into our servers, and then another when we need further capacity, and another, and another, etc…

A lot of people are using small form factor machines (HTPCs).

I am using Intel NUCs for my home systems, mostly due to their size and relative performance. I cannot add GPUs, full stop, but I can fairly cheaply keep on adding Intel NUCs when I need them.

All of the commercial transcoders that I have worked with (Elemental, Rhozet, various Telestream products, Baton, Unified Streaming, etc.) have distributed transcoders, because it is simply the most efficient way to add capacity to a transcoding farm.

Adding configuration to make use of GPUs is one thing; adding distributed transcoding is another. They are not mutually exclusive; we can have both.

Depending on the NUC, you most certainly can add an external GPU. But then again, no one made you build a server in a cigar box either. :slight_smile: ← come on, that’s funny.

I have no idea what Plex’s intentions are regarding the ability to scale out, but I know that if this were my code/business, I would certainly limit you regardless of the hardware you have available. It could be the number of users you can have, the number of attached devices, the number of simultaneous streams, the maximum amount of WAN bandwidth you can use, or, most likely, a combination of all of them and more. I’d consider it a “personal” server, and use above some limit(s) is no longer personal, or what/how it was intended to be used.

That’s a terrible business model:
Limit your customers because they don't share your viewpoints.
I hope you don’t run your own business.

I’m just grateful that the guys who built Plex are very passionate about media and hopefully don’t want to limit what people can do with their media, as your suggested business model would.

Based on the feedback that the Plex guys give in the forum, they seem like they really want to make PMS even more powerful and useful.

I have to agree with that; placing limits on it makes no sense. The only reason I can think of is the piracy angle, but that doesn’t add up, as Plex does nothing illegal or immoral, and the fools who are selling access to their servers are in the minority.

Besides, Kodi is soaking up all the piracy flak for now; Plex is still quite respectable in the court of public opinion.

Plex is bordering on being capable as a professional OTT content distribution platform. I work in the TV broadcast industry, and my home setup is better than most professional setups.

It is possible that Plex could step into the professional content world and launch a fork that would deal with the problems of scale and accuracy, but even that would be no good reason to limit users to a maximum concurrent stream count.

1 Like

@MattTwinkleToes said:
How then am I supposed to fit a GPU in my cigar box? (Which I love, by the way.)
You don’t add it in the cigar box but outside it. Google “Thunderbolt GPU” or “GPU docking”, etc., and you’ll see a couple of different ways this is done.

@MattTwinkleToes said:
That’s a terrible business model:
Limit your customers because they don't share your viewpoints.
I hope you don’t run your own business.

The way I look at it is: if I’m giving you my software to use, then I can set any limits I see fit. I would not want you or anyone else exploiting my goodwill by building out a monster system and using it like Netflix, if that was not my intention in allowing you to use it. That would be my decision, and if you didn’t like it, you could use different software, which of course would be your decision. It’s fine that we disagree on this point. Just be glad I’m not in charge. LOL. But let’s not worry about this and sidetrack the conversation further, since there is much other good stuff in here.

I find your viewpoint very difficult to understand; I’m not really sure where you are coming from.

I’ll be honest and say I’d be greedy and charge more for those pushing the system harder or serving more people. Put another way, the guy streaming to 15 simultaneous people might pay twice as much (or a couple more bucks a month) as the guy who only streams to 5 simultaneous people. To me, that is fair. Honestly, let’s not get caught up on what I said I’d do, because it doesn’t matter. This isn’t the Plex model.

Now back to our regularly scheduled transcoding talks.

I didn’t intend to hijack a thread regarding a feature request.

My point was simply that you shouldn’t be asking for extra transcoding ability in PMS without understanding why you need not transcode in the first place. DirectPlay is a beautiful thing that allows ordinary PCs running PMS to handle an average family’s needs in its current form, “out of the box”.

@dduke2104 said:
I didn’t intend to hijack a thread regarding a feature request.

My point was simply that you shouldn’t be asking for extra transcoding ability in PMS without understanding why you need not transcode in the first place. DirectPlay is a beautiful thing that allows ordinary PCs running PMS to handle an average family’s needs in its current form, “out of the box”.

You clearly haven’t read and understood our above-stated reasons.

Now, to be blunt: you’ve made your point that you don’t have any want/need for this feature. That’s fine; don’t LIKE the original post.
Leave this thread for people who do want this feature to add to the original request.

To Reiterate

There are situations where transcoding is necessary.
Some people enjoy streaming remotely even if their internet upload speed is not very capable.
Furthermore, many of us want to be able to share our collections with family/friends (regardless of your opinions on the matter).

Putting that together with potentially limited upload speeds, distributed transcoding can be the only solution.
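The bandwidth arithmetic behind that point can be sketched in a few lines (the bitrates are illustrative round numbers, not anything Plex-specific):

```python
def max_streams(upload_mbps, per_stream_mbps):
    """How many simultaneous streams fit in the available upload bandwidth."""
    return int(upload_mbps // per_stream_mbps)

UPLOAD = 10  # Mbps of upload bandwidth, a common home connection

# DirectPlay at the source bitrate: a 1080p file might need ~8 Mbps.
print(max_streams(UPLOAD, 8))  # → 1

# Transcoded down to ~3 Mbps 720p, the same link carries three streams,
# but every one of those streams now costs CPU/GPU time on the server.
print(max_streams(UPLOAD, 3))  # → 3
```

That is the whole argument in miniature: the link forces transcoding, and enough transcodes saturate a single box.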

1 Like

I understand what you say you want. The reason I can’t understand the conclusion you’ve reached is that I can’t tell what problem/shortcoming you’re really trying to overcome. I just thought that maybe there’s already a solution for it, that’s all. :neutral:

I’m going back to sharing my collection across the Internet with family/friends now. :slight_smile:

Not just “necessary” but sometimes a worthy trade-off.

Yeah, as an example: I currently keep multiple copies of my movies/shows, in both 4K and 1080. I also keep all my media in a “direct play” format (MP4, h.264, AAC 2-channel + other audio) in my normal non-Hi-Res libraries. I’m the poster child for direct play, and I even post and support scripts to help others get their media into the same format. This is because the number of transcodes is limited on the average box the way Plex currently works.

So even if you think I’m a “hard core” direct play guy, let me throw something out.
I’m nearly at 100TB of actual storage used (not raw numbers, but actual storage after formatting the drives and taking the parity drives out of the figure). That’s the size of my content. If I were able to re-encode everything to h.265, using however much time was needed (even 20x real time), I could gain back 35 to 40% of my storage space over time. Not only that, but I would also gain performance in my disk I/O system. For example, instead of having to read a 10GB file off my disks, I would only need to read a 6GB file, which is going to be a faster read.
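As a back-of-the-envelope check of those numbers (the 35–40% reduction is the range quoted here, an assumption, not a guarantee for any particular library):

```python
def h265_savings(library_tb, reduction):
    """Space reclaimed by re-encoding an h.264 library to h.265,
    given an assumed fractional size reduction."""
    saved = library_tb * reduction
    return saved, library_tb - saved

for reduction in (0.35, 0.40):
    saved, remaining = h265_savings(100, reduction)
    print(f"at {reduction:.0%}: save {saved:.0f} TB, library shrinks to {remaining:.0f} TB")
```

So a 100TB library drops to roughly 60–65TB, with every downstream benefit (I/O, backups, cloud storage) scaling with it.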

I’d get faster backups per file, faster retrieval, less bandwidth required when streaming to other h.265 devices, smaller required cloud storage for backups, etc. There are many positives to doing so.

It’s all great until you realize that decoding h.265 and re-encoding to h.264 is killer on the CPU; even one massive box won’t handle it well and will have a limit. Thus, if there were a semi-simple way for me to drastically boost the number of these hard-to-do transcodes, I’d probably be much better off. I can get X more by using hardware transcoding, but what if I could employ another “satellite” box? What if that box also had GPUs in it? I could then semi-easily have another box or two available to use when needed, to help me stream my collection to friends and family. I may only need 5 to 10 total streams, but because of the way I store my data, I could greatly expand my personal system this way using off-the-shelf “Walmart” equipment or stuff I already have on hand.

This is just one example, and may not be a typical one, but it is nonetheless an example of a “bigger system”. We each have our own reasons for wanting this. But just because any one of us doesn’t currently have this need doesn’t mean we won’t in the future. As I stated, I’m the poster child for direct play formats, but even I could get great benefit from this.

What if I could make use of my NVIDIA Shields to help with encoding content for my PC-based Plex server? The Shield can do HW transcoding itself, and does so when running Plex Server. What if I could make use of Quick Sync on NUCs I could deploy instead of Shield TVs?

This could allow us to use different kinds of “clients” around the house, connected to our TVs, that are low-powered but always on and add additional transcoding capacity to our main server. This is where I’d like to see the Plex ecosystem expand.

Just something to think about,
Carlo

1 Like

@dduke2104 said:
I understand what you say you want. The reason I can’t understand the conclusion you’ve reached is that I can’t tell what problem/shortcoming you’re really trying to overcome. I just thought that maybe there’s already a solution for it, that’s all. :neutral:

I’m going back to sharing my collection across the Internet with family/friends now. :slight_smile:

What don’t you understand about the upload bandwidth not being able to handle multiple DirectPlay streams at one time?
As such, at least some of those streams will have to be transcoded on the fly (since storing pre-transcoded copies is nonsense and would waste far too much storage space).
If there are enough streams that need to be transcoded, the CPU(s) and GPU(s) on many people’s NAS boxes will reach their limits and become the bottleneck.

If this doesn’t make sense to you, then I think it’s you who doesn’t understand why distributed transcoding is a very important feature that is needed by so many people.

1 Like

As time rolls by, more and more devices will support h.265. A quality h.265 encode can take anywhere from 30% to 50% less bandwidth, since it’s so efficient (HEVC: High Efficiency Video Coding).

If you only have 10Mb or 20Mb of upload Internet bandwidth, then every bit counts. If the device on the other side of the WAN can support h.265 and you are on limited upload bandwidth, you would want to use h.265 over h.264 if at all possible. It could “magically” cut your needed upload bandwidth by 30 to 50%, which is HUGE for some people.

Hell, you could demand they buy a device which supports h.265 before you share with them. But in our current environment your media will most likely be in either h.264 or h.265, and for the transition/migration from h.264 to h.265, any kind of distributed encoding would be super useful, for obvious reasons.
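That kind of bulk migration is embarrassingly parallel: each file is an independent re-encode that any idle box could pick up. Here is a sketch of the command such a job might generate per file; the ffmpeg flags shown (`-c:v libx265`, `-crf`, `-c:a copy`) are standard ffmpeg options, but the paths, output-naming scheme, and CRF value are illustrative assumptions:

```python
from pathlib import Path

def h265_migration_cmd(source, crf=23):
    """Build (but don't run) an ffmpeg command that re-encodes one file
    to h.265, copying the audio streams untouched."""
    src = Path(source)
    # Hypothetical naming scheme: movie.mkv -> movie.h265.mkv
    dst = src.with_name(src.stem + ".h265" + src.suffix)
    return ["ffmpeg", "-i", str(src),
            "-c:v", "libx265", "-crf", str(crf),  # software h.265 encode
            "-c:a", "copy",                       # keep audio as-is
            str(dst)]

# A real migration would queue these commands out to worker machines;
# here we just print one:
print(" ".join(h265_migration_cmd("movies/example.mkv")))
```

Distributing that queue across a few spare boxes is exactly the "any type of distributed encoding" being asked for.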

@cayars said:
As time rolls by, more and more devices will support h.265. A quality h.265 encode can take anywhere from 30% to 50% less bandwidth, since it’s so efficient (HEVC: High Efficiency Video Coding).

If you only have 10Mb or 20Mb of upload Internet bandwidth, then every bit counts. If the device on the other side of the WAN can support h.265 and you are on limited upload bandwidth, you would want to use h.265 over h.264 if at all possible. It could “magically” cut your needed upload bandwidth by 30 to 50%, which is HUGE for some people.

Hell, you could demand they buy a device which supports h.265 before you share with them. But in our current environment your media will most likely be in either h.264 or h.265, and for the transition/migration from h.264 to h.265, any kind of distributed encoding would be super useful, for obvious reasons.

I agree, h.265 should be used going forward.

All new media that I rip and put on my PMS I do so in h.265 format.
But to go back and re-encode all of my h.264 content would take ages.

I agree that it’s fair to assume that anyone wanting to watch content on my PMS should be able to play h.265.

Even with my media in h.265 format, if enough people want to watch at the same time, distributed transcoding should be just another tool in our PMS arsenal.

1 Like

@MattWeiler said:

@cayars said:
Isn’t that the fun of Cloud Computing? Trying to figure out how things work behind the scenes?

What works locally often doesn’t work well in the cloud, and what works well in the cloud often doesn’t work well locally. :slight_smile:

We all know it’s not the exact same code that we run locally. It even uses different version numbers. A normal user who has both a local server and a cloud server will find many things done differently.

If you have a cloud server, you can pull the logs, read them, and see how some things are different. Much of it is in plain English (well, geek English) if you feel the need to explore. It’s not a clandestine/covert operation. LOL

PS: your logs will self-destruct 2 minutes after closing.

Now that I think about it more: since they only allow 3 concurrent transcodes per Plex Cloud account, they may not have any distributed transcoding logic at all. They may just be spinning up a PMS in the cloud (within a Docker container or something) and having it perform its own transcodes.
I hope that this is not the case as that would dash my hopes of seeing distributed transcoding in PMS anytime soon.

However, if they are having issues with it, they could look to the GitHub project I mentioned earlier.
That project currently supports Linux installs only, for both master and slaves, but it might be a good start.
I haven’t had a chance to look at that project’s code yet, but I am planning to see if it could be adapted for Windows installs.

I don’t think this would break the bank if true; it would still allow you to pipe full stream sessions to a secondary box (or more). True distributed transcoding, which is what I would like to see, would share the transcode across your entire farm, whereas this would be session-based, but still way better than what we have currently. Heck, if the requirement is that all our add-on boxes have to run ESXi or Hyper-V and Plex just spawns an appropriately sized VM to do the transcode each time, OK, sign me up; at least I can scale that. PRT is essentially doing that now with session-based transcodes; it just doesn’t spawn a VM/vApp per transcode, so this could be an even better solution than current PRT, as then they would right-size the containers and you would not have CPU wait going through the roof. Granted, real distributed transcoding would just be better all round, but I’ll take what I can get.
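Session-based scale-out like that needs surprisingly little dispatch logic: for each new stream, pick the node with the most free transcode slots. A toy sketch (node names and capacities are made up; nothing here reflects how PRT or Plex actually routes sessions):

```python
def assign_session(nodes):
    """Pick the transcode node with the most free capacity.
    `nodes` maps node name -> (active_sessions, max_sessions)."""
    free = {name: cap - active
            for name, (active, cap) in nodes.items()
            if active < cap}
    if not free:
        return None  # farm is saturated; the client would have to wait
    return max(free, key=free.get)

farm = {"nuc-1": (2, 3), "nuc-2": (0, 3), "shield": (1, 2)}
print(assign_session(farm))  # → nuc-2
```

Whole-session routing like this is coarser than true segment-level distribution, but it is exactly the "pipe full sessions to a secondary box" model described above, and it scales by just adding entries to the pool.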