This sounds like a lot of effort for not a lot of return.
Personally, if you have a NAS, and a computer powerful enough to do the heavy lifting, why not mount the NAS from the server and transcode/sync from there when needed?
At what point do you consider how much you would earn if you paid yourself for the hours spent trying to squeeze every ounce out of older hardware, and turn around to consider a simpler setup instead?
It would be interesting to see support for distributed multi-server rendering, however; something along the lines of XGrid (though that's an admittedly bad example), rather than "run this server for indexing, this one for transcoding".
Would love this feature. I'm still using Subsonic to stream media to about 35 users because of how the Plex transcoder works; I would benefit greatly from this kind of load balancing.
This would be a wonderful addition to the app. Server filers are not meant to handle that kind of load. Having my beastly gaming rig do the transcoding would be much better.
I remember the days of doing distributed compiling with things like DISTCC, and my suggestion would be to develop a micro transcoding daemon with only the following features:
Listen on a port, receive transcoding requests and video data, and
Spit back transcoded media
Nothing more. Have some other machine be responsible for fetching the media from whatever local disk or network share, and pumping it to the available transcoders.
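A minimal sketch of such a daemon, assuming Python, with the real ffmpeg command stubbed out as `cat` (the actual ffmpeg flags are an assumption and would need choosing for real use):

```python
import socketserver
import subprocess

# Stand-in for a real transcode command. A real daemon would use something
# like ["ffmpeg", "-i", "pipe:0", "-c:v", "libx264", "-f", "mpegts", "pipe:1"]
# (hypothetical flags, not verified here); "cat" just echoes the bytes back.
TRANSCODE_CMD = ["cat"]

class TranscodeHandler(socketserver.StreamRequestHandler):
    """Receive raw media bytes, pipe them through the transcoder,
    and send the result straight back on the same connection."""

    def handle(self):
        # The client signals end-of-input by half-closing its side.
        data = self.rfile.read()
        proc = subprocess.run(TRANSCODE_CMD, input=data,
                              stdout=subprocess.PIPE, check=True)
        self.wfile.write(proc.stdout)

def serve(port=9999):
    """Run the daemon until interrupted; the central PMS is given this port."""
    with socketserver.TCPServer(("0.0.0.0", port), TranscodeHandler) as srv:
        srv.serve_forever()
```

A real deployment would stream rather than buffer the whole file in memory, but the protocol really can stay this small: bytes in, transcoded bytes out.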
Then you can supply the central PMS with a list of IPs and ports where it can go look for transcoding daemons. I guess wake-on-LAN is pretty easy to do these days, one can borrow code from etherwake.
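Wake-on-LAN really is simple: the magic packet is just six 0xFF bytes followed by the target MAC address repeated sixteen times, sent as a UDP broadcast. A sketch (the broadcast address and port are the conventional defaults):

```python
import socket

def magic_packet(mac):
    """Build a Wake-on-LAN magic packet: six 0xFF bytes, then the
    6-byte MAC address repeated sixteen times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake_on_lan(mac, broadcast="255.255.255.255"):
    """Broadcast the magic packet over UDP (port 9 by convention)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, 9))
```

The NIC listens for that byte pattern even while the machine is powered down, so the PMS could wake a transcoder box before handing it work.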
Now if you really wanna go Pro on this, you'll want to be able to send chunks of videos for transcoding, so that you can split a video in say 100 parts, then send a bunch of parts to each transcoder, get the transcoded chunks back, and then collect and assemble everything together. This would immediately bring auto load balancing, as faster transcoders would request more chunks to transcode.
This might be something you'll want to pitch to the ffmpeg team, though.
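The auto load balancing falls out naturally if transcoders pull chunks instead of being assigned them: each machine grabs the next chunk the moment it finishes the last one, so faster boxes simply end up doing more. A toy simulation of that pull model (worker names and speeds are made up):

```python
import queue
import threading
import time

def transcode_chunks(num_chunks, worker_speeds):
    """Simulate pull-based chunk transcoding. `worker_speeds` maps a
    worker name to its per-chunk time; returns which chunks each worker
    handled, so the caller could reassemble them in order."""
    chunks = queue.Queue()
    for i in range(num_chunks):
        chunks.put(i)

    done = {name: [] for name in worker_speeds}
    lock = threading.Lock()

    def worker(name, delay):
        while True:
            try:
                chunk = chunks.get_nowait()  # pull the next free chunk
            except queue.Empty:
                return
            time.sleep(delay)  # stand-in for the actual transcode work
            with lock:
                done[name].append(chunk)

    threads = [threading.Thread(target=worker, args=(name, delay))
               for name, delay in worker_speeds.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done
```

Because nobody pre-assigns chunks, a slow machine never becomes the bottleneck; it just contributes fewer pieces.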
There are already distributed encoders available. Ripbot is a very popular one. Instead of going through all that hassle with Plex, just use an encoder designed for that task.
I'm gonna go ahead and throw my vote in for this feature. My Core 2 takes a bit of time to transcode an entire season to sync to my phone/iPad, and I don't want to upgrade the Plex box or move it to another machine because of power considerations. As others have said, for rare tasks that require "heavy lifting", the transcoding could be offloaded to a more powerful machine if available. I envisioned an agent that checks in with the Plex server, basically announcing its availability to take transcode jobs. If one is available, great; if not, the Plex server takes on the task itself.
Just my .02
Also, falling in line with the above request, if Plex could use the GPU, it would also speed up the transcoding.
I think people are overstating what is needed for transcoding. I run Plex on my unRAID box with an AMD Athlon II X4 605e. That is a cheap, low-power quad-core CPU. It handles transcoding without issue for Roku clients. There is no unnecessary network overhead because the NAS is the server; only the lighter transcoded streams are on the network, when necessary. Often in the background, the server is transcoding new material from the original MKV files (MakeMKV) to MP4s (Roku-friendly). While it will not be setting any speed records, live transcoding for Plex has never been an issue.
The only reason to go with a much stronger CPU would be if you were doing most of your transcoding ahead of time. Unless you put a high priority on watching a movie (already transcoded) a few hours earlier on the day you bought it, there's no reason. I store the original MKV off the disc for immediate availability. A batch job transcodes it to MP4 overnight. For a new movie, there is no live transcoding unless I watch it the night I load it. After that, it can be direct played on my clients. For older material, the transcoding time is irrelevant because it is being streamed live.
As for sync transcoding, it would largely depend on how much you use it. I recently queued up six movies for a weekend trip on medium quality. They were transcoded and synced to an iPad in about 4 hours (while I slept). I cannot justify anything higher than medium quality on sync for mobile devices. You may feel differently.
That much more powerful CPU will only be useful in some very specific cases. You will end up with more headaches and a higher power bill because of it. Meanwhile, my lightly powered 605e will continue to churn away with lower cost, lower temperatures and a much longer, more stable lifespan.
The idea of letting my server sleep or shutting it down regularly is completely out of the question for me. Build it so it uses no more power than it has to and can run stably. The convenience of Plex is vastly hampered by waiting a few minutes for the server to boot, spin up, etc.
I would love to have this feature. My older Mac Mini works great for direct play but isn't so quick at transcoding for sync. A distributed model would be welcome so I don't have to "plan ahead" or prioritize sync queues for material. Like others have stated, a more powerful computer, e.g., gaming rig, could temporarily do the heavy lifting to speed things up.
Perhaps I'm not understanding the question fully, but it seems to me this functionality already exists (at least, it does the way I'm using Plex).
My setup:
PMS running on my NAS that feeds my various streaming devices throughout the house via direct play.
When I need to transcode anything of significant size, I open a separate instance of PMS installed on my desktop PC. PMS on my desktop PC points to the media folders on the NAS. On my iPad I will see two instances of PMS (i.e. two "Movies" libraries, two "TV Shows" libraries, etc.). When I select media to sync, I do so from the PMS linked to my desktop PC, which is where all the transcoding takes place, leaving my NAS processor free.
Yes, there is a tremendous amount of network traffic from the NAS to my desktop PC. I have a gigabit network, so it doesn't cause a problem. Doing this over WiFi might not be a good idea...
Perhaps I'm not understanding the question fully, but it seems to me this functionality already exists (at least, it does the way I'm using Plex).
This is asking for a way to do this where a concurrent write from both servers does not have a good chance of corrupting the DB, which is the case with your current setup.
If you don't mind, could you expand on this slightly? When you say a "concurrent write from both servers", I'm confused. The PMS running on the PC only writes transcoded data to the PC hard disk. No data is written back to the NAS. Is there a possibility of corrupting the PMS DB(s) in that setup?
If you don't mind, could you expand on this slightly? When you say a "concurrent write from both servers", I'm confused. The PMS running on the PC only writes transcoded data to the PC hard disk. No data is written back to the NAS. Is there a possibility of corrupting the PMS DB(s) in that setup?
Ok, then you have 2 libraries. This request is about having 1 library; i.e., you always run from your NAS, and when your NAS realises it is outgunned, it automatically offloads work to your PC, and your other PC, and another PC, etc., without you having to do anything.
I would enjoy seeing this because we sometimes have five 1080p encodes running at once, and everybody gets buffering and Sync basically refuses to run. I know my server is due for an upgrade, but this would still be awesome to have for future upgrades.
The underlying issue is not having multiple servers but having a way to sync all the DBs easily and seamlessly. At some point, it would be cool to have the PMS DB hosted in the cloud, or an option to host it on storage services like Dropbox, to let all PMS instances consume the same DB, or at least a common part of it.
And for those who legitimately think it's an overrated feature: I feel bad about having both a Mac Mini and a NAS up and running at the same time. I need the NAS for its storage simplicity, but it is not powerful enough for transcoding or Plex Sync (which is one of the reasons I do not use it that much; transcodes take too much time).
Now, it IS a nice-to-have feature but I believe the Plex team has more important priorities ;)
I have only just stumbled across it, but I think I will be attempting to implement it. I'll start with an old i5 Ubuntu laptop I have lying around as a proof of concept, and if I get it working then I'll invest in an Intel NUC or two as a transcode workhorse.
It would be nice to cluster PMS and allow secondary servers to handle transcoding. I don't care about network throughput; I'm all gigabit, and anyone with a serious PMS should be too.
I would say that for syncing, individual encoder speed isn't really all that important. If you are just syncing one video, then today's performance is probably just fine. It's when you want to sync an entire series and you have a queue. By allowing other PMS instances on the network (that are opted in) to encode, you can parallelize the task. It would not be uncommon for many PMS users to have a number of computers on the network, probably a desktop or two wired up. These could run PMS as a cluster agent that isn't configured to advertise any libraries (it's in stealth mode!) but is available for transcoding of sync items.
I'm not suggesting chunking videos, just creating a sync-transcode queue and allowing other PMS instances to read that list and elect to transcode an item for the "master" PMS.
I think that PMS is evolving from a little media server for a family into a multi-site media warehouse for extended friends and family. Transcoding is the weak point right now (followed by file I/O), so addressing this with some sort of "cluster" would be ideal.
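One way the election could work without chunking, sketched here with invented names: the master publishes a sync-transcode queue, and each agent atomically claims whole items, so no two instances ever end up transcoding the same one:

```python
import threading

class SyncQueue:
    """Master-side queue of whole sync items. Agents call claim() and
    either get an item nobody else has, or None when the queue is empty."""

    def __init__(self, items):
        self._pending = list(items)
        self._claimed = {}  # item -> name of the agent that claimed it
        self._lock = threading.Lock()

    def claim(self, agent):
        """Atomically hand the next pending item to `agent`, or None."""
        with self._lock:
            if not self._pending:
                return None
            item = self._pending.pop(0)
            self._claimed[item] = agent
            return item

    def claimed_by(self):
        """Snapshot of which agent claimed which item."""
        with self._lock:
            return dict(self._claimed)
```

The lock is what makes the "election" safe: claiming is a single atomic step, so the master never needs to arbitrate between agents after the fact.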
@jackandjohn said:
Personally, if you have a NAS, and a computer powerful enough to do the heavy lifting, why not mount the NAS from the server and transcode/sync from there when needed?
I tried that. All TV episodes became separate TV series.
The idea behind external encoding is mostly power consumption; i.e., I don't want to get out of my bed during the night or on a Saturday morning, stomp over that freezing floor to turn on a computer, and then go back. A 24/7 computer sucks a lot of power, but a Banana Pi is silent and practically invisible in the energy bills. Every device connected via LAN can play full-HD movies, but when I try to view anything via WiFi I get a message that my server is not powerful enough. I see a couple of solutions: a Raspberry Pi Plex cluster, or unlocking transcoding for ARM devices if they are capable of it, e.g. Banana Pi M3, Cubieboard 4, etc.