Any ideas on settings for the transcoder to normalize the transfer? Make it more of a fluid transfer rather than a spiky one?
Thinking specifically of the "segmented transcoder timeout" and the "transcoder default throttle buffer".
Would increasing the throttle buffer to a very large value allow it to write to the temp file for a longer amount of time, or would it transcode that amount and then append the temp file with one large chunk rather than appending live? What about the transcode segments... would a larger number give a larger buffer in the temp file before the transcoder gets taken out of sloth mode?
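To make sure I'm picturing the throttling correctly, here's a toy model of how I assume it behaves (not Plex's actual code; the segment length, buffer size, and encode speed are numbers I made up). The idea is that the transcoder sprints well ahead of the play head, goes idle once the buffer is full, and only wakes up in short bursts after that:

```python
# Toy model of a throttled, segmented transcoder (assumption, not Plex code).
import time

SEGMENT_SECONDS = 5      # hypothetical duration of one transcoded segment
THROTTLE_BUFFER = 60     # seconds of media to keep ahead of the play head
ENCODE_SPEED = 8.0       # transcoder runs ~8x realtime when unthrottled

def transcode(total_media_seconds: float) -> None:
    produced = 0.0                        # seconds of media written to the temp file
    start = time.monotonic()
    while produced < total_media_seconds:
        played = time.monotonic() - start         # client consumes in realtime
        if produced - played < THROTTLE_BUFFER:
            # Burst: encode a segment much faster than realtime and append it.
            time.sleep(SEGMENT_SECONDS / ENCODE_SPEED)
            produced += SEGMENT_SECONDS
            print(f"wrote a segment, {produced - played:5.1f}s buffered ahead")
        else:
            # "Sloth mode": sit idle until the buffer drains below the threshold.
            time.sleep(0.5)

transcode(10 * 60)   # pretend the file is ten minutes long
```

If this picture is right, raising the throttle buffer only makes the initial sprint longer; it doesn't smooth out the on/off pattern afterwards, which is why I'm asking about the settings.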
Hi Corey, sorry to bring back an old thread, but I was wondering if you ever made any headway on this?
I understand the issue you brought up (unlike, it seems, other people in this thread) and I have noticed the same thing with my server. Like you, I am running an enterprise grade dedicated server at a datacenter with plenty of bandwidth.
I have methodically tracked down some buffering problems that exist only on a handful of remote clients and I'm thoroughly convinced the bursting data transmission issue you brought up is the cause.
It seems to be related to the quality of their routed connection to the data center. I came to this conclusion because, when running a traceroute or ping to my PMS server from these affected client locations, I noticed they have more latency than I would like, but more importantly there is inconsistency between the latency of the initial packets and the subsequent data transfer. I am not well versed in networking, but it seems like getting data initially flowing from my PMS server to these clients is difficult, yet once the initial connection has been made the data flows "freely." I have replicated this behavior from the client location to other services as well, like Amazon Video and Netflix; it seems to be a characteristic of their internet connection, as it is not particular to just my server.
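In case anyone wants to reproduce the measurement, this is roughly the check I ran from the affected client locations (a quick sketch that just shells out to the system ping and compares the first reply against the spread of the later ones; the hostname is a placeholder):

```python
# Rough sketch: compare the first ping reply against the spread of later ones.
# Assumes a Unix-style `ping` whose output contains "time=XX ms" per reply.
import re
import statistics
import subprocess

HOST = "my-pms-server.example.com"   # placeholder, not my real hostname

out = subprocess.run(
    ["ping", "-c", "20", HOST],
    capture_output=True, text=True,
).stdout

times = [float(t) for t in re.findall(r"time=([\d.]+)", out)]
first, rest = times[0], times[1:]
print(f"first reply  : {first:.1f} ms")
print(f"later replies: mean {statistics.mean(rest):.1f} ms, "
      f"stdev {statistics.stdev(rest):.1f} ms")
```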
If I monitor the bandwidth usage on the server side while transcoding to these clients, I see the bursts of data transfer that you experienced. My theory is that, because of the initial latency problem, these bursts cause the client to exhaust its data stream and the playback stuttering occurs. If the data came through normalized, as you said, would I still have the buffering problem?
I found that the throughput of the PMS-to-client connection is plenty high. If I transfer a large file from my PMS server to the same client through another means (like sftp), I see a fluid transfer on both the server's bandwidth monitoring tools and the client side, one that sustains and far exceeds the bandwidth needed to transfer the transcoded file via Plex. Monitoring the client-side connection to Netflix demonstrates the same thing: their internet connection is plenty "fast" enough to stream.
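For reference, the "server bandwidth monitoring" I mention is nothing fancy; here's roughly what I use (a Linux-only sketch that samples the interface's transmitted-bytes counter once a second; "eth0" is a placeholder for your actual interface). Running it during a transcoded Plex stream versus during an sftp copy makes the bursty-vs-fluid difference very obvious:

```python
# Quick sketch: print outbound Mbit/s once per second (Linux /proc/net/dev).
import time

IFACE = "eth0"   # placeholder, substitute the server's real interface name

def tx_bytes(iface: str) -> int:
    with open("/proc/net/dev") as f:
        for line in f:
            name, _, counters = line.partition(":")
            if name.strip() == iface:
                return int(counters.split()[8])   # 9th counter = bytes sent
    raise ValueError(f"interface {iface!r} not found")

prev = tx_bytes(IFACE)
while True:
    time.sleep(1)
    cur = tx_bytes(IFACE)
    print(f"{(cur - prev) * 8 / 1e6:8.2f} Mbit/s")
    prev = cur
```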
Just to be sure, I checked CPU usage, disk I/O, etc. while a transcoded Plex transfer was occurring, and the data transfer spikes are not caused by a lack of server hardware capability: the stream is barely using 2% CPU, memory usage is low, and disk I/O is very reasonable.
Anyway, if you made any headway or have any ideas, I would greatly appreciate it. I have run up against the same misunderstandings that you did in this thread; people don't seem to understand my/our issue.
I thought of re-encoding my media so it could be direct streamed to these affected clients, but I really don't have the space, nor do I want to go through the trouble, when it seems that the usual suspects (lack of bandwidth/throughput or CPU) are not what's causing their buffering problem. Rather, it looks like Plex's method of segmented transcoding and data transfer to clients, combined with the poor "network consistency" from their location to internet services, is the cause.
Thanks for your help and for bringing this issue/oddity to light,