But at what cost? Complaining that pre-transcoding takes too much storage (and could be separated across multiple boxes already) falls a bit flat when you suggest buying an array of blade servers. I cannot picture a situation in which additional storage would be more expensive than 8 blades and 128 cores.
See my aforementioned reference to subtitles as just one example of where a transcode cluster could be very useful, and where pre-transcoding will not fill the need. Sure it would be great for Plex to support native subtitles on all platforms, but that's not going to happen any time soon, and is actually far more complex to implement than distributed transcoding.
As for an "array of blade servers", that only speaks to the scalability of the approach.
Also, you must recall that the source file exists in one place. Having 8 read threads at different places is a concurrency issue and will not result in the best performance.
I think it has been stated multiple times that only one thread needs to access a single source file, since it's assumed any single node in the cluster would be able to handle transcoding in real time. The only reason you'd distribute a single source over multiple nodes is if you wanted to transcode that single file faster than real time, for a sync job (a transcode that will be synced rather than streamed). That would be an optional configuration.
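To illustrate the sync-job case, splitting one source across nodes is mostly a matter of dividing the runtime into time segments, one per node. This is a minimal sketch of that idea; the function name and segment scheme are my own illustration, not anything Plex implements:

```python
# Sketch: divide a single source into equal time segments so each
# cluster node can transcode one segment in parallel (sync jobs only).
# plan_segments is an illustrative name, not a Plex internal.

def plan_segments(duration_s: float, nodes: int) -> list[tuple[float, float]]:
    """Return (start, end) offsets in seconds, one segment per node."""
    seg = duration_s / nodes
    return [(i * seg, min((i + 1) * seg, duration_s)) for i in range(nodes)]

# A 2-hour movie split across 4 nodes -> four 30-minute segments.
print(plan_segments(7200, 4))
```

Each node then reads only its own byte range of the file, so there is still no need for eight contending read threads on one source unless you explicitly opt into this faster-than-real-time mode.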
If you're talking about bandwidth between the NAS, transcoding nodes, Plex Media Server, and clients, then sure, it'll put a little more strain on those links if nothing is reconfigured -- but again, we're not talking about uncompressed video here. Gigabit would be plenty for the average user, and there are certainly options for larger installations.
This does not take into account any IO. When you realistically talk about a 10-15 GB source (conservatively) over a gigabit network with spinning disk drives at each end, you have to have significant memory to keep the pipeline moving efficiently. Since there is still only one server during this time, all work still has to be sent back to the server to be transmitted to the remote client.
This is not very different from many Plex installations today. The PMS pulls from a separate NAS, transcodes, and delivers to a separate client. In this case, we add one more step: the transcoding node pulls from the NAS, transcodes, and delivers to the PMS, which in turn delivers to the client.
Again, this doesn't need to happen any faster than real time. We can take 2 hours to transcode that 20 GB source for a 2-hour movie, processing a small chunk at a time, as needed (after initial buffering), just as Plex's transcoder does it now. Easily doable without saturating a gigabit network.
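The arithmetic backs this up. Streaming a 20 GB source at real-time pace over its 2-hour runtime works out to a tiny fraction of a gigabit link:

```python
# Back-of-the-envelope check: reading a 20 GB source spread evenly
# over a 2-hour runtime is far below gigabit (1000 Mbit/s) capacity.

size_bits = 20 * 1000**3 * 8      # 20 GB (decimal) in bits
runtime_s = 2 * 3600              # 2-hour runtime in seconds
rate_mbps = size_bits / runtime_s / 1e6
print(f"{rate_mbps:.1f} Mbit/s")  # ~22.2 Mbit/s, roughly 2% of gigabit
```

Even with the source traversing the network twice (NAS to node, node to PMS), the real-time case stays well under 5% of a gigabit link.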
It would appear that the audience to gain much from this would be very small. I have a NAS installation of Plex.
Are you saying you don't have a desktop or laptop sitting around that could act as a transcoding agent, or simply that you wouldn't bother using one, even if the feature were available right now in Plex?
The audience is actually much larger than you think it is.
If someone wants to run down the theoretical benefits
You're over-thinking this one. This is not a very complex task, and the benefits are obvious. The most complex aspect is how to keep un-throttled sync transcodes from saturating the network -- but again, if your media isn't on the same machine as your Plex Server, that issue already exists for you today.
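And even that aspect is a well-understood problem: cap the rate at which a sync transcode is allowed to pull source data, leaving headroom for live streams. A minimal sketch of such a read throttle, assuming a simple token-bucket-style cap (the class name and limits are illustrative, not Plex features):

```python
import time

# Sketch: cap how fast an un-throttled sync transcode may read source
# data so streaming clients keep priority. Illustrative only.

class ReadThrottle:
    def __init__(self, limit_bytes_per_s: float):
        self.limit = limit_bytes_per_s
        self.allowance = limit_bytes_per_s   # bytes currently permitted
        self.last = time.monotonic()

    def wait_for(self, nbytes: int) -> None:
        """Block until nbytes may be read without exceeding the cap."""
        now = time.monotonic()
        # Refill the allowance for time elapsed, up to one second's worth.
        self.allowance = min(self.limit,
                             self.allowance + (now - self.last) * self.limit)
        self.last = now
        if nbytes > self.allowance:
            time.sleep((nbytes - self.allowance) / self.limit)
            self.allowance = 0.0
        else:
            self.allowance -= nbytes

# e.g. cap sync reads at 50 MB/s; call wait_for() before each chunk read.
throttle = ReadThrottle(50 * 1000**2)
throttle.wait_for(64 * 1024)
```

The point isn't this particular implementation; it's that network-friendly sync transcodes are a solved rate-limiting problem, not a reason to dismiss the feature.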