I realize this is a long shot, but right now my Plex server is a quad-socket system and it's barely keeping up with user load. Eight-socket systems are just astronomically expensive, which got me thinking about the huge surplus of cheap dual-socket blade servers.
What if there were a way to scale Plex itself across separate compute/transcode nodes instead of just having one giant server...
So, for example, you have one box running Plex itself, then several slave boxes that simply get assigned transcode jobs from the master Plex server:
Plex Server >
Transcode Server 1
Transcode Server 2
Transcode Server 3
etc. As the first transcode server starts getting very busy (say >75% CPU), jobs spill over to the second, and so on until we run out of available transcode servers.
Like I said, I realize this is a long shot, and maybe only a handful of users would ever run into a scenario like this, but it would definitely make Plex a significantly more powerful platform for things like CCTV/MWTV replacement.
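A rough sketch of that spill-over policy, assuming agents reachable over SSH that expose their load (the host names, the load probe, and the threshold are all made up for illustration, not anything Plex ships):

```python
# Hypothetical spill-over scheduler for the layout above: fill the first
# agent until it crosses a CPU threshold, then move on to the next.
import subprocess

AGENTS = ["transcode1.local", "transcode2.local", "transcode3.local"]
THRESHOLD = 0.75  # fraction of an agent's cores considered "very busy"

def cpu_fraction(host):
    """1-minute load average divided by core count, probed over SSH."""
    out = subprocess.check_output(
        ["ssh", host, "cut -d' ' -f1 /proc/loadavg; nproc"],
        timeout=5, text=True)
    load, cores = out.split()
    return float(load) / int(cores)

def pick_agent():
    """Return the first agent under the threshold, or None if all are busy."""
    for host in AGENTS:
        try:
            if cpu_fraction(host) < THRESHOLD:
                return host
        except Exception:
            continue  # unreachable agent; try the next one
    return None
```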
Something I've been looking into/wanting as well. (There are a number of threads on this already. Search for "transcoding agents".)
I'm guessing it's something we'll have to roll ourselves: a script mimicking Plex's transcoder executable that would be responsible for handing off jobs to transcoding agents (remote or local ffmpeg "agents"), collecting the output from the various agents, and sending progress reports/logging the way Plex's transcoder would have.
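A sketch of such a shim, dropped in place of the "Plex Transcoder" binary after renaming the real one. It forwards the whole command line to a remote agent over SSH and assumes the media and transcode directories sit on shared mounts with identical paths on both machines; the paths and host name are made up, and it skips forwarding the environment variables the real transcoder gets, which a working version would need to handle:

```python
#!/usr/bin/env python3
import shlex
import subprocess
import sys

REAL = "/usr/lib/plexmediaserver/Plex Transcoder.real"  # renamed original
AGENT = "transcode1.local"                              # remote ffmpeg box

def agent_up(host):
    """Cheap reachability check before offloading."""
    return subprocess.call(
        ["ssh", "-o", "ConnectTimeout=3", host, "true"],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

if __name__ == "__main__":
    args = [REAL] + sys.argv[1:]
    if agent_up(AGENT):
        # Run remotely; stdio is inherited, so PMS still sees the
        # transcoder's progress output and exit status.
        # (Environment forwarding omitted for brevity.)
        cmd = ["ssh", AGENT, shlex.join(args)]
    else:
        cmd = args  # fall back to transcoding locally
    sys.exit(subprocess.call(cmd))
```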
In addition to distributing transcode workloads to multiple PCs, it would free us from dependence on Plex's custom ffmpeg builds, letting us experiment with different builds of ffmpeg and giving us access to things like OpenCL acceleration, Intel Evansport support, etc.
This would be really handy for NAS-based builds too - it would be great to actually use Plex on my Drobo when it can stream directly, and have it offload to my desktop only when transcoding is necessary.
That would be neat. Filling up a server with Teslas running OpenCL would be pretty sexy.
I read some other threads, but nothing seems to have come out of them... I wish I had the coding ability to try and tackle this myself, but I don't even know where to start :(
Very smart idea. My Synology NAS sometimes struggles while transcoding, while I have an iMac and two MacBook Pros standing around. They really could help the poor NAS a little bit.
I think you've maybe made this request sound far more complicated than it needs to be, as you're essentially asking for transcoding agents.
My own situation is that Plex and its media are hosted on a Synology NAS, which has a low power draw but limited compute ability, so it would be great to be able to offload transcoding duties to a more powerful desktop when required (as well as sending a Wake-on-LAN packet if that machine is asleep).
Get the basics coded first: the ability to offload to a single system, then add clustering and such down the line.
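The wake-up part, at least, is trivial; a magic packet can be sent from just about anything (the MAC address below is a placeholder):

```python
# Minimal Wake-on-LAN sender: the magic packet is just 6 bytes of 0xFF
# followed by the target MAC address repeated 16 times, over UDP broadcast.
import socket

def wake(mac, broadcast="255.255.255.255", port=9):
    payload = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + payload * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

# e.g. wake("aa:bb:cc:dd:ee:ff") before handing the desktop a transcode job
```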
If you put it in one of the very accessible formats, it can be direct played by almost any client. MP4 (H.264, AAC, AC3) really does solve most of this once, and then you don't have to transcode it each time you watch it.
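For what it's worth, checking whether a file already fits that shape is easy with ffprobe from a stock ffmpeg install; a rough sketch that ignores container and profile/level details:

```python
import json
import subprocess

def codecs(path):
    """Map codec_type -> codec_name of the first stream of each type."""
    out = subprocess.check_output(
        ["ffprobe", "-v", "error", "-show_entries",
         "stream=codec_type,codec_name", "-of", "json", path])
    found = {}
    for s in json.loads(out)["streams"]:
        found.setdefault(s["codec_type"], s["codec_name"])
    return found

def looks_direct_playable(path):
    c = codecs(path)
    return c.get("video") == "h264" and c.get("audio") in ("aac", "ac3")
```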
Unless you stream remotely at varying bit-rates. Then you'll need to re-encode multiple times at multiple bit-rates, which takes up far more storage, reduces flexibility, delays how long it takes to automatically import and transcode TV recordings, and consumes far more power, since it'll likely produce transcodes that will never be used.
This is more a concern for streaming remotely, where most clients will be transcoding, but not all... I'd rather not re-encode all my media to the lowest bitrate necessary (which for things like cell phones can be very low).
That depends on how many times you watch the same content. If you only watch content once, I would argue that it does not matter how efficient the transcoding is. If you watch the same content multiple times, transcoding once offers a lot of power savings. And re-encoding to low-bitrate versions actually takes far less time than higher-quality encodes.
It can be convenient up front or convenient later; it will not be both. At the very least, you should make sure that one of your clients can use the format stored on the server directly. Knowing the server will transcode every item for every client is just making far more work than you should.
This could be solved by Plex caching transcodes. (I believe there is a feature request for this already.) That only further strengthens the validity of this request.
I disagree. If I am going to transcode it at the outset, why not put it in a format that is likely useful? We are again trying to solve, for everyone, a problem that only exists for a small subset. It can be convenient and more processor-intensive, or less convenient and less processor-intensive. Shifting the transcoding burden to another machine just suggests that the server is installed on the wrong machine.

You would be surprised how much can be accomplished with a relatively low-end processor unless you are forcing several simultaneous transcodes. That is a self-inflicted wound. Adding three machines to do transcoding will not make some setups any less of a bad idea.
1. Put the media in a format most likely to be used at the outset, permanently (MP4 with H.264, AAC, and AC3 tends to just work on most clients).
2. Recognize the limitations of some clients.
3. Recognize that your usage pattern may dictate different hardware.
If you know you are going to have several simultaneous streams reading 720p content and transcoding it for remote smartphones on low bandwidth, you need some horsepower. Adding multiple computers is not as efficient as having one computer capable of the job. The computers do not have separate content stores for independent reads, etc. This is trying to apply complex technology to a relatively simple problem.
I disagree. If I am going to transcode it at the outset, why not put it in a format that is likely useful.
"Likely useful". For who? When? Where?
One of Plex's great features is that you don't have to pre-transcode the media you throw at it, no matter the client. I can watch it at full resolution at home, or on my phone over crappy wifi while traveling.
As I said before, this pre-encoding will increase the latency between an episode finishing recording and it becoming available, and it takes up more space for the multiple versions you'll inevitably need. Have multiple tuners/sources dumping media in? Now we have to distribute those pre-encodes over a number of machines to get them done in a reasonable amount of time, or wait it out. Or we could just let the recordings drop, untouched, into Plex's library and let its transcoder handle them on demand, the way the product was envisioned -- yet, because of a specific usage pattern at a specific time, even a well-endowed Plex server sometimes just can't keep up. (It happens.)
1. Put the media in a format most likely to be used at the outset, permanently (MP4 with H.264, AAC, and AC3 tends to just work on most clients).
2. Recognize the limitations of some clients.
3. Recognize that your usage pattern may dictate different hardware.
Or, 4: ask Plex to create a framework that can easily be hooked, allowing third parties to develop a distributed solution. (Not difficult at all. It's almost that way now.)
Adding multiple computers is not as efficient as having one computer capable of the job. The computers do not have separate content stores for independent reads, etc. This is trying to apply complex technology to a relatively simple problem.
Many households already have multiple computers, and with the recent uptake of tablets, many of them are collecting dust. Distributed transcoding isn't "complex technology". The main inefficiency is network overhead, but we're not talking about anything a run-of-the-mill gigabit network couldn't handle. Hell, it's even doable over wireless, provided the source files aren't too huge. (Distributed transcoding of 1080i MPEG-2 recordings -- where it would be most helpful for some, since quality deinterlacing is CPU-intensive -- would probably choke many wireless networks.)
Faster syncs before (or even while) you travel are another point for distributed transcoding.
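Some back-of-the-envelope numbers on that network overhead (the file size and link rates below are assumptions; real throughput will vary):

```python
# Time to ship one source file to a transcoding agent.
file_mb = 4 * 1024          # a ~4 GB 1080p recording
gigabit_mb_s = 940 / 8      # ~940 Mbps usable on gigabit ethernet
wifi_mb_s = 200 / 8         # a decent 802.11n/ac link

print(f"gigabit: {file_mb / gigabit_mb_s:.0f} s")  # ~35 s
print(f"wifi:    {file_mb / wifi_mb_s:.0f} s")     # ~164 s
```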
It complicates the code base with features only a very small minority of users would use. I have a script that loads the original rip (from legally purchased media) to the server and makes it immediately available. It immediately transcodes the content (HandBrakeCLI) in the background to a format usable on all my Rokus, iPads, Chromecasts, PHT clients, and iPhones. Upon completion of the transcode, it replaces the original file with the more portable file. This is all done with a low-power AMD 605e processor on unRAID.
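Roughly the kind of script being described (the directory and preset here are made up; HandBrakeCLI must be on the PATH):

```python
# Watch a drop folder, transcode to a widely direct-playable MP4, and
# replace the original on success -- which is exactly what's objected
# to a few posts below.
import pathlib
import subprocess

INCOMING = pathlib.Path("/mnt/user/media/incoming")
PRESET = "Fast 1080p30"  # pick any preset all your clients direct play

for src in INCOMING.glob("*.mkv"):
    dst = src.with_suffix(".mp4")
    result = subprocess.run(
        ["HandBrakeCLI", "-i", str(src), "-o", str(dst), "--preset", PRESET])
    if result.returncode == 0 and dst.exists():
        src.unlink()  # swap in the portable version
```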
This avoids most future transcoding for local clients. If you want fast transcoding for multiple remote clients, you have to get more horsepower. Believing that older computers will be efficient at helping with realtime transcoding is unrealistic. They often have hardware-assisted decoders but do not have the power to handle decoding and encoding at anything close to realtime.
People often overstate the benefits of parallel computing unless they have experience with the specific advantages and disadvantages of that paradigm. Render farms run on identical high-end equipment backed by very expensive storage architectures. Heterogeneous projects (SETI@home) work because they run extremely simple calculations on huge data sets that are not time-sensitive.
It complicates the code base with features only a very small minority of users would use.
I think you under estimate the amount of use such a feature would see, but I do agree that, at this point, Plex shouldn't be the one developing this feature. That does not mean I agree with your general dismissal of distributed transcoding.
I have a script that loads the original rip (from legally purchased media) to the server and makes it immediately available.
Not necessarily for clients that require transcoding.
It immediately transcodes the content (HandBrakeCLI) in the background to a format usable on all my Rokus, iPads, Chromecasts, PHT clients, and iPhones. Upon completion of the transcode, it replaces the original file with the more portable file. This is all done with a low-power AMD 605e processor on unRAID.
Exactly what I'd like to avoid: transcoding everything, even when it isn't required, without taking remote, low-bandwidth clients into account. (And worse yet, throwing away the original.)
Add the use of subtitles, and the strong desire NOT to permanently burn them in during a pre-encode (sometimes I want them, sometimes I don't; some family members need them, some don't), and the usefulness of pre-encoding goes out the window for the vast majority of clients.
Believing that older computers will be efficient at helping with realtime transcoding is unrealistic.
Many PCs built in the last three years would be fine. Some better than others, but it's certainly not unrealistic to expect a great number of them to handle realtime transcoding of, at a minimum, a single 1080i/p video.
People often overstate the benefits of parallel computing unless they have experience with the specific advantages and disadvantages of that paradigm.
Distributed transcoding is a relatively common thing; parallel computing and video encoding are two peas in a pod, a perfect match. We're not talking about encoding uncompressed 4K video here (which would require higher-bandwidth interconnects); we're talking about distributing multiple transcode jobs over multiple PCs. I believe you're overstating the requirements and complexity of this request.
There are plenty of existing projects to leverage. All Plex would need to do is make the call to the transcode binary (a modified FFmpeg) easier to hook. (Currently we'd need to replace the binary every time PMS is updated; it would be better to have an advanced "transcoder path" option.) From that point, third-party developers could take the reins.
You can get blade servers very cheap on eBay with tons of i7-based Xeons... believe me, 8 blades totalling 16 hex-core Xeons will blow away pretty much any single box you can find.

If you have 8 boxes available, each running one instance of the transcoder, it will perform much better than one box running 8 instances of the transcoder... that's pretty simple scaling to me?

Obviously I am not suggesting trying to run one instance of the transcoder across 8 boxes; that would not really help and would just cause tons of overhead... this is more for scaling/capacity, and for the folks running Plex on NAS devices.