My personal setup here at home is that I have a QNAP NAS that holds my content and runs PMS. The NAS runs 24/7.
The NAS has a quad-core Atom CPU, which makes anything that needs transcoding pretty much useless to me.
Meanwhile I have several computers in my home that are on when in use. These computers all have the processing power needed for transcoding.
How about developing a "transcoding agent" that can be installed on other machines and provides PMS with a sort of delegated transcoding capability?
When no agents are available, PMS just does the work itself; otherwise it tries to delegate the work to an available agent (perhaps with a priority configuration).
This would greatly improve my experience with Plex, since it would let me quickly and easily jump over and watch something on my Apple TV, iPad, iPhone or whatever, which is currently something I avoid doing altogether.
It's an interesting idea for sure. In general, though, when watching at home, your iOS devices should be able to play the content with a simple and cheap remux (assuming it's H.264), which even the ARM Drobo can manage a few streams of. If you've got old AVI content, it's time to convert it, maybe :)
Generally H.264 in MKV containers. I don't know if it's doing a full transcode or what, but playing something from Plex on my iOS device generally needs to buffer every second or so.
Here's an example of the media info listed in Plex:
MEDIA
  Duration: 2:44:32
  Bitrate: 10464 kbps
  Width: 1920
  Height: 1080
  Aspect Ratio: 1.78
  Video Resolution: 1080p
  Container: MKV
  Video Frame Rate: 24p

FILE
  Accessible: Yes
  Exists: Yes
  Duration: 2:44:32
  File: /share/MD0_DATA/movies/Dark Knight Rises, The (2012) 1080P/The Dark Knight Rises (2012) 1080P (tt1345836).mkv
  Size: 12.03 GB
  Container: MKV

SUBTITLES
  Codec: SRT
  Language: English

SUBTITLES
  Codec: SRT
  Language: Dansk

VIDEO
  Codec: H264
  Bitrate: 8954 kbps
  Language: English
  Bit Depth: 8
  CABAC: 1
  Chroma Subsampling: 4:2:0
  Color Space: yuv
  Duration: 2:44:32
  Frame Rate: 23.976 fps
  Frame Rate Mode: cfr
  Has Scaling Matrix: 0
  Height: 1080
  Level: 4.1
  Profile: high
  Ref Frames: 4
  Scan Type: progressive
  Width: 1920

AUDIO
  Codec: DCA
  Channels: 5.1
  Bitrate: 1509 kbps
  Language: English
  Bit Depth: 24
  Bitrate Mode: CBR
  Duration: 2:44:32
  Sampling Rate: 48000 Hz

SUBTITLES
  Language: English
  Format: SRT
I don't really want to convert my media. It would be extremely time-consuming when we're talking about a collection of 300 movies and 1200 TV episodes in 720p/1080p.
I of course don't know the internals of how Plex works, but the agents themselves would be very simple.
It needs to listen on a port for incoming data from Plex: it receives the source stream and sends back a transcoded stream. Other than that, it should only need to send a message to Plex on startup and shutdown to register/unregister itself, plus a keep-alive signal once in a while.
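A rough sketch in Python of what such an agent's lifecycle might look like (every endpoint, port, and message format here is invented for illustration; PMS has no such API today):

```python
import socket
import threading
import time
import urllib.request

# Every endpoint, port, and message format below is invented for
# illustration -- PMS exposes no such agent API today.
PMS_HOST = "192.168.1.10"   # address of the Plex Media Server (example)
PMS_PORT = 32400            # the usual PMS web port
AGENT_PORT = 42100          # port this agent listens on (made up)

def notify_pms(action):
    """Tell PMS we exist (register), are alive (keepalive), or are leaving."""
    url = f"http://{PMS_HOST}:{PMS_PORT}/transcode/agents/{action}?port={AGENT_PORT}"
    urllib.request.urlopen(url, timeout=5)

def keepalive_loop(interval=30):
    """Ping PMS periodically so it keeps us in its pool of agents."""
    while True:
        time.sleep(interval)
        notify_pms("keepalive")

def transcode(chunk):
    """Placeholder: a real agent would pipe chunks through an encoder."""
    return chunk

def handle(conn):
    """Read the source stream from PMS and send the transcoded stream back."""
    with conn:
        while chunk := conn.recv(65536):
            conn.sendall(transcode(chunk))

def serve():
    with socket.create_server(("", AGENT_PORT)) as srv:
        while True:
            conn, _addr = srv.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    notify_pms("register")
    threading.Thread(target=keepalive_loop, daemon=True).start()
    try:
        serve()
    finally:
        notify_pms("unregister")
```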
I was thinking that this also opens up the possibility of multiple simultaneous transcoding sessions, which could speed up syncing a large amount of media by a factor of X (the number of agents you have available).
So the appeal isn't ONLY there for users running a NAS. It can also improve the user experience for impatient people. :)
I'm running Plex Media Server on my server. What I do now is transcode all my Blu-rays with HandBrake to a smaller file format so I can store a lot of movies. It would be awesome if this could be done automatically. Adding the ability to use extra transcoding machines is also interesting!
This would be absolutely amazing. I hate running my PMS on a power-sucking i7 but I don’t want to switch to a NAS because of the transcoding limitations. My GTV can handle direct play but my phone needs transcoding. An agent like this would be wonderful. I’d be able to use the NAS as a PMS and then boot up an agent only if I go mobile with Plex.
The constant buffering the OP refers to sounds as if he made full BD rips. These can run up to 40 Mbps and are likely the culprit. That NAS will not be able to serve such a file reliably all the time, even for another machine to transcode. Any activity on the NAS other than a file read is going to hit the bandwidth limit of the drives, bus, or network controller.
You can use a low-power NAS effectively, but you have to be willing to transcode the files down to something more reasonable (say, a 5-8 GB 1080P rip instead of 40 GB).
Here an agent would be of limited use because you would still have to pull the data off the NAS, transcode it on the secondary computer and send the transcoded portion back to the NAS. Since the massive IO is a large part of the problem, you would be making it worse by introducing the network. You would actually be better off running a secondary, high processing, low latency machine as the Plex Server. This would allow the transcoding and serving to stay on one machine while the NAS only served files.
I do not agree. If I'm not mistaken, when transcoding you do not need to feed the source at its playback bitrate; with very low bandwidth, the transcode will simply take longer. That being said, I can confirm that I currently store all my media files, including BD rips, on a NAS and play them without a glitch on my Mac Mini connected over gigabit ethernet.
On the other hand, running two separate PMS instances doesn't solve the problem: you still have some content on the NAS PMS that will be transcoded on the NAS, unless you only put content on the NAS that you know you'll never transcode, which, in my case, doesn't make sense.
Here an agent would be of limited use because you would still have to pull the data off the NAS, transcode it on the secondary computer and send the transcoded portion back to the NAS. Since the massive IO is a large part of the problem, you would be making it worse by introducing the network. You would actually be better off running a secondary, high processing, low latency machine as the Plex Server. This would allow the transcoding and serving to stay on one machine while the NAS only served files.
Provided the "agent" is connected to the NAS via gigabit, this shouldn't be a huge issue. The agent(s) could also be allowed to sleep, saving power. For larger installations, serving many remote clients, 10Gb or multiple 1Gb connections could be used.
The idea here is scalability -- and now that Plex supports sharing with multiple users, scalability is an issue for some, and will be a growing issue in general.
It would also be great for a single transcode job to be distributed over multiple agents, allowing a sync transcode to be completed in very short order. But this gets far more complicated to implement for many reasons, and is beyond the scope of this feature request.
Gigabit is irrelevant if the hard drive or bus speed is slower.
I am not arguing that a faster processor would not do a faster job transcoding. I am arguing that it is not efficient to transcode on a machine and then have to store the result back on the NAS:
content on NAS -> transcoded on other machine -> stored on NAS to stream to clients
I am suggesting that the simpler solution for this is to have the Plex server on the faster machine:
content on NAS -> transcoded and served on the faster Plex server
The much better solution is to transcode your files outside of Plex into several different formats (if you have any idea of what your clients can use with the bandwidth actually available to them) and just store them on the NAS. This can be easily scripted (I know, I have done it for the common format I use) and can run independently of the NAS. A NAS is best at serving content. My unRAID NAS with Plex installed now serves almost everything to my clients in a way that they can direct play. If I also added a very small pre-transcoded version for mobile/remote use, Plex would never have to transcode at all. This would use the resources much more effectively than trying to build Plex into a network of transcoding agents. If you are cost-sensitive, this requires the transcoding machine only when new content is added. And the NAS ends up doing what it does best: serving files.
With the number of users you are referencing, this sounds like you are using Plex well beyond its design. Let's not make Plex needlessly complicated when a simple solution exists.
I'm running Plex Media Server on my server. What I do now is transcode all my Blu-rays with HandBrake to a smaller file format so I can store a lot of movies. It would be awesome if this could be done automatically. Adding the ability to use extra transcoding machines is also interesting!
I wrote a script years ago that scans my unRAID NAS for any rips that have yet to be transcoded (into a Roku and iOS-friendly format) and does it in the background. The script could easily reside on a separate machine if the NAS didn't have the processing power. Since most of my files are direct-play-capable after this process, I rarely have a HandBrake transcode going at the same time as a Plex transcode. So I don't care how long they take.
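The shape of such a script is roughly this (a sketch, not the actual script; paths and the HandBrake preset name are examples):

```python
#!/usr/bin/env python3
"""Sketch of a scan-and-transcode script, not the poster's actual one.
Paths and the preset name are examples -- adjust for your own setup."""
import subprocess
from pathlib import Path

SOURCE = Path("/mnt/user/rips")    # raw rips on the NAS (example path)
DEST = Path("/mnt/user/movies")    # direct-play-friendly output

for src in SOURCE.rglob("*.mkv"):
    out = (DEST / src.relative_to(SOURCE)).with_suffix(".m4v")
    if out.exists():
        continue                   # this rip was already transcoded
    out.parent.mkdir(parents=True, exist_ok=True)
    # Use whatever HandBrake preset your clients can direct play.
    subprocess.run(
        ["HandBrakeCLI", "-i", str(src), "-o", str(out),
         "--preset", "Roku 1080p30 Surround"],
        check=True,
    )
```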
To lay out this configuration: it would solve multiple issues at once with regard to multi-server setups and coordinating them. You have one 'main' server. This server is required to be on for the others to function (though the others could also act as independent servers) and is the taskmaster for the rest. It keeps in its cache a complete listing of media across all devices, along with a transcoding limit per server. So if I have an Atom NAS as the main server and an i7 desktop as a secondary server, I can set the transcode limits to 1 (the Atom) and 3 (the i7). Let's say I have one transcode going currently and I try to start a second transcode of The Jungle Book. The main server checks the library of the i7, sees that it contains that movie, and redirects the client to connect to that secondary server. That secondary server then handles the transcode and play session from there on out.
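A toy sketch of that dispatch logic (server names, limits, and library data all invented for illustration):

```python
# Hypothetical dispatch logic for the "main server" described above.
# Server names, limits, and the library listing are all invented.
SERVERS = [
    {"name": "atom-nas",   "transcode_limit": 1, "active": 0},
    {"name": "i7-desktop", "transcode_limit": 3, "active": 0},
]

LIBRARY = {  # which servers hold which titles (example data)
    "The Jungle Book": {"atom-nas", "i7-desktop"},
}

def dispatch(title):
    """Pick the first server that holds the title and has a free
    transcode slot; return None if everyone is at their limit."""
    holders = LIBRARY.get(title, set())
    for server in SERVERS:
        if server["name"] in holders and server["active"] < server["transcode_limit"]:
            server["active"] += 1
            return server["name"]   # the client gets redirected here
    return None                     # blocked: the basic QoS described below

print(dispatch("The Jungle Book"))  # atom-nas (has a free slot)
print(dispatch("The Jungle Book"))  # i7-desktop (the NAS hit its limit of 1)
```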
This solves three current shortcomings of Plex all in one. First, it allows for multiple servers seamlessly; no longer would you have duplicate media folders when you navigate to plex/web, for example. Second, it allows for basic QoS in that you can limit the number of transcodes occurring on a server at the same time, which improves the viewing experience for current watchers. As an example of this second feature, if you have a limit of 3 and a 4th client tries connecting, it blocks the attempt, since if that 4th person started watching, it would cause stuttering for the other 3 watchers. Third, it solves the issue of low-power NASes as media servers, because they could become merely directors for the more power-hungry transcode servers. For this third part it could send a wake-on-LAN packet to wake them up and otherwise manage the servers underneath it.
In addition, this could tie into the QoS link I posted above, making for a more managed server solution that allows finer tuning and advanced configurations to better suit more people's setups at significantly reduced hardware cost. Instead of needing an i7 server, it can enlist other devices in the house to form a local compute farm.
I realize I am making this sound simpler than it would be to code, but a man can dream :)
Gigabit is irrelevant if the hard drive or bus speed is slower.
I am not arguing that a faster processor would not do a faster job transcoding. I am arguing that it is not efficient to transcode on a machine and then have to store the result back on the NAS:
content on NAS -> transcoded on other machine -> stored on NAS to stream to clients
I am suggesting that the simpler solution for this is to have the Plex server on the faster machine:
content on NAS -> transcoded and served on the faster Plex server
[...]
With the number of users you are referencing, this sounds like you are using Plex well beyond its design. Let's not make Plex needlessly complicated when a simple solution exists.
If the hard drives are not up to the task, and network speed is irrelevant, as you say, then it doesn't matter if it's being transcoded locally or by a remote agent, so why not a remote agent?
I already pre-transcode much of my library to multiple qualities, but it's something I'd like to stop doing. First, it wastes power pre-transcoding titles I may never watch remotely, and second, some targets require subtitles to be burned in, and I'd prefer not to pre-transcode with subtitles burned in. There are many more reasons from other users: underpowered NASes that work perfectly most of the time but sometimes could use a little help, etc.
As for using it "well beyond the design"... What's wrong with that? How does having advanced settings for advanced users unnecessarily complicate the design?
Plex was designed with an on-demand transcoder. It does not "unnecessarily complicate the design" to make that transcoder more flexible, and accessible, for a multitude of use cases. Might be a low priority, but that's for the Plex team to decide.
For this to work, the computer running the transcoding agent would need to be online all the time... Why not just run Plex Media Server on the PC in the first place, and add the media from a network mount / mapped network drive?
For this to work, the computer running the transcoding agent would need to be online all the time.
No, the transcode agent could sleep. The transcode load balancer could send a WOL packet to the agents, and wait for them to awake (modern systems should wake up in ~2-3 seconds). Shouldn't be very difficult to implement.
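For illustration, sending a magic packet really is simple (the MAC address is a placeholder):

```python
import socket

def wake(mac, broadcast="255.255.255.255", port=9):
    """Send a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by
    the target MAC repeated 16 times, as a UDP broadcast."""
    payload = bytes.fromhex(mac.replace(":", ""))
    packet = b"\xff" * 6 + payload * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

wake("aa:bb:cc:dd:ee:ff")  # placeholder MAC of a sleeping agent
```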
Why not just run Plex Media Server on the PC in the first place, and add the media from a network mount / mapped network drive?
- You save power by not having a powerful CPU/system running all the time, instead just having a low-power NAS.
- Your transcode agent(s) need be little more than Linux running on a USB stick and an i5/i7 mini PC (no HD), but you could also make use of other PCs around the house that usually sit idle most of the time anyway -- provided they're hard-wired via gigabit, support WOL, and the OS they're running wakes up in a reasonable amount of time.
- You also increase scalability, and potentially gain an option to push transcode quality for remote, bitrate-constrained transcodes even further than the last few PMS updates already did, by allowing even deeper RC lookahead, motion estimation, etc. Many of x264's features are still held way back, even with the "hurt my cpu" setting (an experimental setting that could use some more tweaks), in order to keep transcodes fast enough; see the sketch below.
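For example, something like the following could run on a dedicated agent (the x264 parameters are real, but PMS driving them this way is hypothetical):

```python
import subprocess

# The x264 parameters are real, but PMS exposing them per-agent like
# this is hypothetical. Deeper lookahead and motion search cost encode
# time that a dedicated agent can afford where a NAS cannot.
subprocess.run([
    "ffmpeg", "-i", "input.mkv",
    "-c:v", "libx264",
    "-preset", "slower",             # heavier than typical PMS defaults
    "-x264-params", "rc-lookahead=60:me=umh:subme=9:ref=8",
    "-c:a", "copy",                  # leave the audio untouched
    "output.mkv",
], check=True)
```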
Again, I disagree with most of the assertions. You would be better served by buying a better multi-core CPU that downshifts when not needed. This already works under the current design and is considerably less expensive than supporting multiple machines used on an as-needed basis.
It is both more complex and more expensive without any significant advantage in the actual processing of the files.
If the hard drives are not up to the task, and network speed is irrelevant, as you say, then it doesn't matter if it's being transcoded locally or by a remote agent, so why not a remote agent?
I already pre-transcode much of my library to multiple qualities, but it's something I'd like to stop doing. First, it wastes power pre-transcoding titles I may never watch remotely, and second, some targets require subtitles to be burned in, and I'd prefer not to pre-transcode with subtitles burned in. There are many more reasons from other users: underpowered NASes that work perfectly most of the time but sometimes could use a little help, etc.
As for using it "well beyond the design"... What's wrong with that? How does having advanced settings for advanced users unnecessarily complicate the design?
Plex was designed with an on-demand transcoder. It does not "unnecessarily complicate the design" to make that transcoder more flexible, and accessible, for a multitude of use cases. Might be a low priority, but that's for the Plex team to decide.
Actually, it does matter if the hard drives are a bottleneck: you'd be reading from and writing to them in order to buffer the remote server's output, instead of being able to handle it in flight.
WOL is not as functional as you think when it comes to this kind of setup. The latency of waking would only be acceptable if you overbuilt the computer considerably.
There is a significant difference between having advanced settings and requesting that a significant portion of Plex be rewritten to accommodate an extremely small group of users. Adding complexity to code makes it more unstable and prone to bugs. When approaching something like this, you have to weigh the potential benefits against the latency you'd introduce. Adding multiple on-demand remote transcoders very significantly complicates the messaging structure: the Plex Server is available when it needs transcoding, but any other machine is not necessarily so. This would require fairly extensive messaging and prioritization in order to accommodate multiple remote transcoders.
If the array is the bottleneck, the network doesn't matter. There really is no getting around this. Plex's transcoder doesn't handle the stream the way you think it does. It's basically just an ffmpeg instance that reads in, usually indirectly from a spindle, and then writes out to the transcode cache drive either the transcoded file for sync, or chunks to be picked up and streamed. That ffmpeg instance can run on a remote machine without issue, provided the network bandwidth is available to handle its I/O. Storage I/O will be the same in either case.
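To make that concrete, a segmenting transcode with stock ffmpeg looks roughly like this (Plex's real invocation differs; paths are examples):

```python
import subprocess

# Roughly the shape of a segmenting transcode with stock ffmpeg; Plex's
# actual invocation differs, and the paths here are examples. Whether
# this runs locally or on a remote agent, the source is read once and
# the segments are written once -- the storage I/O is the same.
subprocess.run([
    "ffmpeg",
    "-i", "/mnt/nas/movies/example.mkv",   # read from the NAS mount
    "-c:v", "libx264", "-b:v", "4000k",    # transcode the video down
    "-c:a", "aac", "-b:a", "160k",         # transcode the audio
    "-f", "hls",                           # chunked output for streaming
    "-hls_time", "4",                      # ~4-second segments
    "/tmp/transcode/session1/index.m3u8",  # transcode cache location
], check=True)
```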
WOL latency isn't as big of an issue as you think. "Overbuilding" isn't necessary.
Load balancing isn't as complicated as you think, nor is a load balancer that needs to wait for its nodes to wake. We are not talking about things being re-written -- just augmented.
A powerful multi-socket machine is far more expensive than a handful of cheap consumer PCs, and far less scalable if you don't ever intend to cluster it with another powerful multi-socket machine.
Plex is offered as a solution for people with underpowered NASes, and there appears to be a growing number of these NAS users who are disappointed with Plex transcode performance. Many chose a NAS out of power concerns and are not interested in having a more powerful PC running 24/7. An increasing number of them are asking for a solution that lets them keep PMS on their NAS and doesn't require re-encoding their entire library multiple times. Allowing other PCs they may already have to pick up the transcode slack for the short time it's needed seems a viable solution.
You appear to have a personal vendetta here. Has a compute cluster done you some horrible wrong in a dark alley?