After talking with Chris C and doing some of my own research, he verified that the client is what controls the stream "get" from the server. The client controls the amount of video data requested (either by data size or by timeline, I'm not sure which) and the server responds accordingly, regardless of whether there is more transcoded data or not.
I propose to allow the clients to choose the type of streaming...
1) As is:
Clients snag data from the server as currently implemented, allowing slower machines to transcode in chunks and keep the CPU available for other processes.
2) Custom:
Clients can snag/stream as much as they set or can handle, allowing high-powered dedicated servers to transcode the entire file at the start of the stream and the client to request larger chunks, eliminating any chance of buffering due to network transmission.
Reason:
The reason for this is that as this tech gets bigger and grows outside of the home (for example, sharing with friends from your home network, or a dedicated server sitting in a data center), there may come a time when the current client request results in too long a delay from the server and ends in buffering. It's purely a networking issue. The option would also eliminate the more "spiky" network transfer and could result in a more normalized stream from the server. It could also allow the client to set a predetermined "buffer" time: let the stream buffer on the client before playback begins, keeping the data transfer rate down for slower connections (while keeping the resolution/quality high).
Nope, it doesn't. It just controls how much of the video's timeline the transcoder will transcode before it goes into sleep mode. So it transcodes chunks at a time... but the client is requesting specific amounts of time/data from the server regardless of what is or isn't already transcoded and available.
So people with PCs or machines that can't transcode as well could stick with the current setup, having the client only grab small chunks at a time. For people with powerful servers or dedicated PCs that can transcode an entire video quickly, the client could grab a much larger chunk or a continuous stream.
I'm assuming the scenario you're talking about is one where the initial stream is started, but available bandwidth lowers between the server and the client at some point after that. The client keeps playing while not knowing that it's not going to receive the next part of the stream as quickly as it did before. It waits too long to request the next chunk and thus has to buffer. If I understand, you're proposing that the chunks be bigger/the video be fully pretranscoded so that the client doesn't have to make as many requests or so it can play longer while it continues to buffer the next portion.
In my understanding of video streaming, the file being fully transcoded at best makes no difference and at worst makes the problem worse. Let me explain.
The video stream itself is "chunked" by having bits/sec that fill the client's video buffer (http://mewiki.project357.com/wiki/X264_Encoding_Suggestions#VBV_Encoding offers a good explanation). Once the buffer is full, the network has to be able to continue filling it at roughly the rate it's emptying. If it can't, you get the buffering screen. Filling buffers where bandwidth is not a concern would actually be relatively easy with intraframe encoded videos (think a constant stream of single JPEGs), because they could just be stacked in, one after the other. However, intraframe videos require considerably more bandwidth and larger files overall and are impractical for streaming.
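The fill-vs-drain dynamic described above can be sketched as a toy simulation. All the numbers here are invented for illustration; real players track buffer fullness in far more sophisticated ways:

```python
# Toy model of a client video buffer: the network fills it at
# `net_kbps`, playback drains it at the stream's `video_kbps`.
# If the buffer ever empties, the player shows the buffering screen.

def simulate_buffer(net_kbps, video_kbps, buffer_kbit, seconds):
    """Return the second at which playback stalls, or None if it never does."""
    level = buffer_kbit  # start playback with a full buffer
    for t in range(seconds):
        level += net_kbps                # network fills the buffer
        level -= video_kbps              # playback drains it
        level = min(level, buffer_kbit)  # the buffer can't overfill
        if level <= 0:
            return t                     # buffer ran dry: rebuffering
    return None

# Plenty of bandwidth: the stream never stalls.
print(simulate_buffer(net_kbps=6000, video_kbps=4000, buffer_kbit=8000, seconds=60))
# Bandwidth below the stream bitrate: stalls once the buffer empties.
print(simulate_buffer(net_kbps=3000, video_kbps=4000, buffer_kbit=8000, seconds=60))
```

The point the toy model makes: whether the file on disk is chunked or fully transcoded changes nothing here, because the stall depends only on the fill rate versus the drain rate.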
Most videos, then, use interframe encoding where some frames are encoded using commonalities with other frames in the stream (this is a major oversimplification of how the system works). This and other compression schemes lead to much smaller file sizes and data rates that are actually manageable on most networks. The problem for streaming is that certain video frames can't be decoded until the frames they depend on are also in the buffer. So encoders use a VBV to target a buffer of a specific size, trying to keep "complete chunks" of standalone video information within the amount of space allocated by a client's buffer.
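To make the dependency point concrete, here is a deliberately oversimplified model (matching the oversimplification above): I-frames stand alone, and each P-frame references the frame immediately before it, so a frame is only decodable once its whole reference chain has arrived:

```python
# Simplified interframe model: "I" frames decode on their own,
# "P" frames need the previous frame to already be decodable.

frames = ["I", "P", "P", "P", "I", "P", "P"]

def decodable(frames, arrived):
    """Indices of frames that can be decoded, given which have arrived in the buffer."""
    out = []
    for i, kind in enumerate(frames):
        if i not in arrived:
            continue
        if kind == "I":
            out.append(i)
        elif kind == "P" and (i - 1) in out:
            out.append(i)  # its reference chain is intact
    return out

# Frames 0-2 have arrived: all three decode.
print(decodable(frames, {0, 1, 2}))   # [0, 1, 2]
# Frame 1 is missing: frame 2's reference chain is broken, so only frame 0 decodes.
print(decodable(frames, {0, 2, 3}))   # [0]
```

Real codecs have B-frames, multiple reference frames, and open GOPs, so this is only a mental model for why the buffer must hold complete self-contained chunks.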
Whether a video is completely transcoded or just chunked, if its stream was encoded expecting a larger buffer than is available on the client, you're going to have buffering issues at some point. We don't have this issue on Plex because our devs know their clients' buffer sizes. This hopefully is a reasonable explanation for why fully transcoding at best makes no difference. The stream is the stream, chunked files on the disk or not.
Now, where it might actually make it worse is a scenario where the available bandwidth between server and client decreases after the stream has started. In this case, you have a pretranscoded video file whose internal "chunks" will fit nicely in the buffer, but the network can't handle filling the buffer at the correct speed anymore, so the buffer empties before the next complete "chunk" can make it to the client. The bitrate of the video needs to change to account for the extra time it takes to get to the client. This is called adaptive bitrate streaming and it's what Netflix (et al.) uses to create continuous streams over changing bandwidth. There are various ways of doing it, but you definitely can't do it by pretranscoding a single version of an entire video. To my knowledge, this is not implemented in Plex, so chunked video files will have roughly the same bitrates as a fully pretranscoded video. With this in mind, chunking actually offers a significant advantage because it doesn't take up as much disk space with temporary files.
If Plex DID want to implement adaptive streaming, theoretically, every time the server goes to transcode the next chunked video file, it could check the available bandwidth between client and server, and adjust the bitrate of that chunk accordingly. In this case, the smaller the chunk, the more often the server is adjusting for bandwidth changes, and the better continuous streaming performance overall. There are a lot of other considerations beyond the scope of this post for that to work, but in that case, chunking your temp files actually makes more sense.
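The per-chunk idea above could look something like this sketch. The bitrate ladder, the safety margin, and the bandwidth numbers are all made up; this is not how Plex works, just an illustration of the decision the server would make before transcoding each chunk:

```python
# Hypothetical bitrate ladder (kbps) the server could transcode chunks at.
LADDER = [1500, 3000, 5000, 8000]

def pick_bitrate(measured_kbps, safety_margin=0.8):
    """Pick the highest ladder rung at or below the measured bandwidth,
    discounted by a safety margin; fall back to the lowest rung."""
    budget = measured_kbps * safety_margin
    candidates = [b for b in LADDER if b <= budget]
    return candidates[-1] if candidates else LADDER[0]

# Bandwidth holds steady, then drops mid-stream:
# later chunks get transcoded at a lower bitrate instead of stalling.
for bandwidth in [10000, 9500, 4000, 2500]:
    print(bandwidth, "->", pick_bitrate(bandwidth))
```

Note that the adaptation can only happen at chunk boundaries, which is exactly why smaller chunks react to bandwidth changes faster, as argued above.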
In summary (sorry for the essay), you want the chunked files to be larger. This doesn't make much difference because the stream must be encoded to match the buffer of the client. To actually achieve what you want, you would have to increase the buffer size of the client, which is a software AND hardware consideration on the client side. At the same time, larger chunk sizes would mean more video transcoded for an amount of available bandwidth that may or may not still be available AND more hard drive space taken up by temporary files.
If I have completely misunderstood video streaming or made assumptions here that are false, I apologize and please let me know. Hopefully this clarifies more than confuses the topic.
The internal H.264 frame decoder is not a consideration here; the client's video stack and the MPEG-TS system handle that for Plex quite nicely :) Besides that, though, the wall-o-text crit is correct about the rest of the system.