If the array is the bottleneck, the network doesn't matter. There is no getting around this. Plex's transcoder doesn't handle the stream the way you think it does.
WOL latency isn't as big of an issue as you think.
A powerful multi-socket machine is far more expensive than many cheap consumer PCs, and far less scalable.
You seem to have some personal vendetta here. Have compute clusters done you some horrible wrong in a dark back alley?
No, I've actually used them instead of theorizing about them; it was my job for two years. Compute clusters are very power-intensive and are best suited to workloads where the output requires a great deal of computation on relatively little data: high computation, low data. Multiple power supplies across cheaper hardware will almost always pull more power than a single multicore box. Without more expensive configurations, the idea of WOL becomes a non-starter, because each new user would have to wait for the next available machine to boot.
It is hard to overcome the price/power/transcode capabilities of a single multicore AMD chip. You can spend much more for better performance. Unless you are talking about massive numbers of remote users, the idea of several cheap PCs will not scale in that way.
This just does not seem to be a good fit for the benefits of this kind of configuration. Even a very cheap four-core AMD chip (605e) can do a good job of serving multiple transcoded clients simultaneously. With a slightly better (but no more expensive) chip, you can do a significantly better job. I suggest you spend the effort on more direct issues:
transcode the files beforehand to direct play on your most used client setup
use enough disks in the NAS that you are not always hitting the same disks for different clients
put a separate, dedicated disk in the server itself (SSD if you have many clients) where the temp files are written, to alleviate the burden of having multiple clients pulling transcoded files simultaneously
put a cheap multicore chip (i.e., not a top-of-the-line i7) in the server
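On the first point above, pre-transcoding can be scripted around ffmpeg. This is a hedged sketch, not the poster's actual setup: the codec and quality values are illustrative placeholders for a profile most clients tend to direct-play.

```python
import shlex

def ffmpeg_cmd(src: str, dst: str, crf: int = 20) -> list:
    """Build an ffmpeg command for an H.264/AAC MP4 that most clients
    can direct-play. All encoder settings here are illustrative."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264", "-preset", "slow", "-crf", str(crf),
        "-c:a", "aac", "-b:a", "192k",
        "-movflags", "+faststart",  # put the index up front for streaming
        dst,
    ]

# Print the command for one file; run it with subprocess.run(cmd, check=True)
print(shlex.join(ffmpeg_cmd("movie.mkv", "movie.mp4")))
```

Looping that over a library folder gives you the "transcode beforehand" workflow with no extra hardware at all.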
This setup is not theoretical. It has worked nearly flawlessly for years with a very modest cost.
I have to agree with agregjones... Not that this feature isn't cool, but I think that as a use case it just makes more sense to put PMS on the PC directly instead of adding significant complexity and communication protocols to the transcode agent:
Concerning network issues: a remote transcode agent, or running PMS with its data on a network share, puts the same demand on your home network
Having a dedicated transcode agent that uses WoL would yield only marginal power savings. Modern PCs are very good at clocking down, idling at a mere 50 to 100 W, and a single powerful machine with a 4-core CPU will have much better power consumption than 2-3 older PCs running a transcode agent (even if those PCs sleep afterwards)
The load balancer and remote transcoding agent adds significant complexity to the code. The concepts are simple, sure... But this isn't trivial as far as a code base, especially if we consider the feature where multiple transcode agents are tried in sequence based on current load
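To the point that the concepts are simple even if the production code isn't: the "try agents in sequence based on current load" idea can be expressed in a few lines. This is purely hypothetical; Plex exposes no such agent API, and the record fields here are invented for illustration.

```python
from typing import Optional

# Hypothetical agent records; names, fields, and load values are invented.
agents = [
    {"name": "gaming-pc", "online": True,  "load": 0.35},
    {"name": "laptop",    "online": False, "load": 0.10},
    {"name": "old-i7",    "online": True,  "load": 0.90},
]

def pick_agent(agents: list, max_load: float = 0.8) -> Optional[dict]:
    """Return the least-loaded online agent under the threshold,
    or None to fall back to transcoding locally on the PMS host."""
    candidates = [a for a in agents if a["online"] and a["load"] < max_load]
    return min(candidates, key=lambda a: a["load"]) if candidates else None

print(pick_agent(agents)["name"])  # → gaming-pc
```

The hard engineering is everything around this loop: health checks, mid-stream failover, and shipping segments back reliably.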
(Aside: I doubt WoL will work flawlessly out of the box; people are going to encounter compatibility issues in the same way that some routers don't play nice with plug and play port forwarding / UPnP)
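For reference on that aside: the wake-up packet itself is trivial, and the flakiness people hit is in BIOS, NIC, and router support rather than the protocol. A minimal sketch of the standard magic packet (6 bytes of 0xFF followed by the target MAC repeated 16 times, broadcast over UDP):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """WOL magic packet: 6 bytes of 0xFF, then the 6-byte MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the packet on the LAN (UDP port 9 is conventional)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))

# send_wol("00:11:22:33:44:55")  # wake the sleeping transcode agent
```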
*shrug* anyways I don't mean for this to come off as pointed, I just wanted to point out that this is a viable alternative :)
A modern PC in S3 sleep (~5 watts) consumes considerably less power than the average idle PC (~50 to 90+ watts). A low power NAS and a sleeping PC consume much less than the average idle PMS server PC. It is even more pronounced when you consider the configuration is often a NAS and an idle PC.
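A back-of-envelope calculation on those figures, using an assumed duty cycle (the 70 W idle draw and 20 sleeping hours per day are my assumptions, not from the post):

```python
# Figures from the post: ~5 W in S3 sleep vs ~50-90+ W idle.
idle_w, sleep_w = 70, 5      # assume 70 W as a mid-range idle draw
asleep_hours_per_day = 20    # assume the box only needs ~4 awake hours/day

kwh_saved_per_year = (idle_w - sleep_w) * asleep_hours_per_day * 365 / 1000
print(round(kwh_saved_per_year, 1))  # → 474.5 kWh per year
```

At typical residential rates that is a real, if modest, line item on the power bill.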
Yes, the network is a weak link in this scenario, and a reasonable point of concern - especially the average home network.
Most of us are fully aware of the vertical alternative, as it's currently the only option. I imagine that's little consolation for those sitting with a shiny new NAS that doesn't do what they originally envisioned.
For me personally, it's less a power issue and more about scalability. I run a very robust PMS server, but I like the option of adding more nodes so I can crank up bandwidth-constrained transcode quality. With the x264/ffmpeg settings I prefer, the average Core i7 can barely handle two 2 Mbps 720p streams from a 1080p source.
Anyway, I would love to have this. My Synology DS412+ NAS seems to be doing fine with transcoding, but with 1080p it lags 0.5 s every 15 s. I can't really give hard data, as I am only doing occasional tests; I always watch movies/series from my local network. I will travel again for work soon, so I plan on using the streaming and syncing features more :)
Maybe I'm not understanding this request properly...so forgive me if I'm not...
But allowing a Transcoding agent seems to be no different than running another Plex computer when your NAS is not up to the Transcoding itself.
The logical solution would seem to be to run PMS on whatever machine you intend to run this transcoding agent on and merely map the drives from the NAS, bypassing its internal Plex server.
If I read it wrong and you're expecting the client side to do the transcoding, well, I don't see how that works, because the purpose of transcoding is to compensate for incompatibilities with the client device and to make the data less prone to network bottlenecking.
The logical solution would seem to be to run PMS on whatever machine you intend to run this transcoding agent on and merely map the drives from the NAS, bypassing its internal Plex server.
A key difference is the plurality: transcoding agents, allowing more than one for the few of us that would like it. The other important difference is that the agents could sleep in a very low power state (S3), only to be awakened when needed, while the PMS server on a lower-power device could stay in a much more responsive state. (It is possible to sleep Plex Media Servers, though I don't believe it's officially supported, and my experience with it has been less than optimal, especially for local access.)
The WOL thing seems a bit overkill to me; I just want to be able to use my gaming PC for transcoding, but only when it's turned on. My PMS runs on a decent computer, but getting a bunch of stuff onto my iPad for my vacation takes a long time.
Also, the LAN speed and disk speed shouldn't be an issue (maybe if you're connected over Wi-Fi), and it will probably still be faster than having a NAS do the transcoding. Converting a 1080p rip on an older i7 takes around 40 minutes in my setup; that's around 4 MB per second, or ~32 Mbit/s, which is not a problem even on newer Wi-Fi networks, and certainly not a problem for a hard disk (even with 2-4 transcoders running).
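Sanity-checking that arithmetic from the post's own figures (4 MB/s is the only number given; the source size is implied, not stated):

```python
# The post's figure: ~4 MB/s sustained while transcoding a 1080p rip in 40 min.
mb_per_s, minutes = 4, 40

mbit_per_s = mb_per_s * 8                         # 32 Mbit/s on the wire
implied_size_gb = mb_per_s * minutes * 60 / 1000  # ~9.6 GB source file
print(mbit_per_s, implied_size_gb)  # → 32 9.6
```

32 Mbit/s is comfortably within gigabit Ethernet and modern Wi-Fi, which supports the point that the network is rarely the bottleneck for a single local transcode.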
So just a small local agent telling the PMS that it's ready to help transcoding if needed, whenever the PC is turned on.
I'm using a low-powered Linux machine with a Celeron G540 for my PMS. It's capable of transcoding one 1080p stream, which is nice, but if another transcode comes along, it just can't cope. This machine runs all the Linux home server goodies you can think of. I also have an i7 iMac which whoops some serious computing ass. It would be ideal for running 2+ transcodes. It is, however, the worst machine one can think of for storing lots of data, and it misses out as a server since... well, Macs aren't made for that.
In my situation it's totally cost-ineffective to make the Celeron machine more powerful just for transcoding, or to make the Mac run PMS, since the data isn't there. This would be a totally viable case for these transcoding agents.
What if, instead of some crazy asleep/awake elastic mainframe-type setup, you could run Zencoder or encoding.com as a cloud-based extension, much like the cloud playback currently available to PlexPass? That would let you offload encoding entirely, save for on-the-fly transcoding, and allow you to transcode entire libraries with the push of a button.
Your NAS box is weak, yet it is a perfect file server. You wish it to do the transcoding work of an i3, but it can't.
Forget all this offloading-to-another-box malarkey. If you want capability at your fingertips, then you've got to upgrade!
Surely a 4th-gen i3, underclocked, with an SSD and all the drives attached to it in one box, would cure the transcoding AND be relatively cheaper to run than two boxes!
I'm running a 3rd-gen i3-3220. I've got 17 clients added to it, REMOTELY, and never have a single moan from it or any of them.
I accept that, like the fridge, boiler and clocks, the server is on all the time.
I feel better that the LED lighting everywhere helps lower the bill!
What if, instead of some crazy asleep/awake elastic mainframe-type setup, you could run Zencoder or encoding.com as a cloud-based extension, much like the cloud playback currently available to PlexPass? That would let you offload encoding entirely, save for on-the-fly transcoding, and allow you to transcode entire libraries with the push of a button.
But that would require you to push massive amounts of data to the cloud for transcoding, and it wouldn't solve the immediate issue of not having enough horsepower for real-time, on-the-fly transcoding.
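Putting numbers on that "massive amounts of data" objection, with assumed but typical figures (file size and upstream speed are my assumptions):

```python
# Assumed figures: a 10 GB 1080p file over a 5 Mbit/s residential upstream.
size_gb, upstream_mbit = 10, 5

upload_seconds = size_gb * 8000 / upstream_mbit   # 16,000 s
print(round(upload_seconds / 3600, 1))  # → 4.4 hours per file, upload alone
```

More than four hours of saturated upstream per movie, before the cloud service has encoded a single frame, is why cloud transcoding only makes sense on very fat pipes.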
Forget all this offloading-to-another-box malarkey. If you want capability at your fingertips, then you've got to upgrade!
Restructuring the transcoding as this thread requests would likely open the door to third party transcoding agents -- allowing one to create an agent for Intel Quick Sync (another request from NAS users) and the like, without having to wait for Plex.
May be malarkey to you, but I think it's just good design.
I'm running a 3rd-gen i3-3220. I've got 17 clients added to it, REMOTELY, and never have a single moan from it or any of them.
All 17, at the same time? Either the majority of your library doesn't need to transcode for these users, or I call BS.
Your Core i3 simply can't handle the "prefer quality" setting for much more than 2 clients, if transcoding HD to HD.
Restructuring the transcoding as this thread requests would likely open the door to third party transcoding agents -- allowing one to create an agent for Intel Quick Sync (another request from NAS users) and the like, without having to wait for Plex.
May be malarkey to you, but I think it's just good design.
It just seems like a lot of work that Plex wouldn't want to do. It has PMS and PHT; why would it want to set up slave PMTs [Plex Media Transcoders] for something PMS can already do itself? I mean, even XBMC [Kodi] just does direct play; it's not even interested in transcoding. The idea behind Plex is to keep it simple. This is another link in the chain to go wrong, IMO.
All 17, at the same time? Either the majority of your library doesn't need to transcode for these users, or I call BS.
Your Core i3 simply can't handle the "prefer quality" setting for much more than 2 clients, if transcoding HD to HD.
I never said at the same time, bud. The most I've witnessed was 7 streams, and 3 of them were direct play. 90% of my content is 720p, so the video doesn't normally need to be transcoded unless the client has set the quality lower. Of the video I have seen transcoded, I've seen more than 3 streams being transcoded with at least one being throttled, while my lowly 2820 NUC and I direct play over PHT.
Here's my most recent shot
Anyways,
The point I was making was that I purchased the kit I needed to do the job I required. Why waste money buying additional machines to run alongside the machine I already have, for a job that only needs one machine?
It just seems like a lot of work that Plex wouldn't want to do. It has PMS and PHT; why would it want to set up slave PMTs [Plex Media Transcoders] for something PMS can already do itself? I mean, even XBMC [Kodi] just does direct play; it's not even interested in transcoding. The idea behind Plex is to keep it simple. This is another link in the chain to go wrong, IMO.
The point I was making was that I purchased the kit I needed to do the job I required. Why waste money buying additional machines to run alongside the machine I already have, for a job that only needs one machine?
Many people already have additional machines. Laptops, desktops, etc. There is no reason they couldn't run a transcoding agent in the background.
It really isn't that complex. Those that don't want to run the transcoder on another, or even multiple, machine(s) wouldn't have to, but it would open up far more possibilities for those that do.
90% of my content is 720p and therefore the video doesn't normally need to be transcoded
And much of my library is 1080p, often at very high bitrates, some of them 1080i and 1080p mpeg2 that certainly need to be transcoded. Different folks, different strokes.
Yes, I could (and did in the past, before the quality issues with the transcoder were finally resolved) pre-transcode with my own scripts, but that defeats one of the main strengths of Plex.
I like the idea. My current server is only really capable of streaming to a few devices at the same time; the idea that I could turn on my gaming rig and have it help out as a transcoding agent sounds very cool.
But then again, AMD's AM1 and some Intel Celeron CPUs have TDPs reaching Atom territory; my AMD 5350 system draws ~50 W, which is going to be similar to your quad-core NAS.
Technically this is already doable (and in a sense already done) when a shared library is used over DLNA, though it depends on whether it would be more cost-effective to transcode locally and push the raw file over the net, or to transcode remotely and serve the transcoded stream by proxy over DLNA... I wonder...
Technically this is already doable (and in a sense already done) when a shared library is used over DLNA, though it depends on whether it would be more cost-effective to transcode locally and push the raw file over the net, or to transcode remotely and serve the transcoded stream by proxy over DLNA... I wonder...
You would need a very fat pipe to make cloud transcoding worth the effort. Unfortunately, that's not a reality for the vast majority of users today, but those on Google Fiber and the like could get some use out of it. Most others would hit the soft bandwidth limits imposed by their ISPs relatively quickly, if they even had a fast enough upstream to send the full-sized files out to the remote transcoding service in the first place.
You would need a very fat pipe to make cloud transcoding worth the effort. Unfortunately, that's not a reality for the vast majority of users today, but those on Google Fiber and the like could get some use out of it. Most others would hit the soft bandwidth limits imposed by their ISPs relatively quickly, if they even had a fast enough upstream to send the full-sized files out to the remote transcoding service in the first place.
I'm not talking about the cloud. If you have a PMS locally and your friend has one at their house that is shared with you, it shows up when browsing over DLNA on your server.