Buffer too small in remote clients?

Server Version#: 1.41.8
Player Version#: latest iOS, Samsung

I have spent a few weeks troubleshooting why two remote users on Samsung apps (via their home ISPs), and occasionally I myself on iOS (via cellular), get terrible buffering from my server at all but the lowest bit rates.

It took me a while to even narrow down where the problem was, but I think I finally understand what's going on, have a workaround, and want to bring the issue to the attention of the Plex staff and community.

To be brief, my appropriately configured and remotely available server is on a symmetric 1 Gbps fiber connection, and my remote clients are on asymmetric 500/20 Mbps cable connections. I can run iperf3 from within their networks and get ~300 Mbps from the Plex box on my network. Based on this, I would think they should be able to direct stream a 10 Mbps movie, but frequently they can't.

Watching network activity when they start a stream, there's a quick spike, then it settles down to an inappropriately low rate and they buffer every few seconds:

If they rewind or restart the stream, it sometimes stops buffering and plays completely smoothly, with intermittent large network spikes:

Both of these are when playing a 10 Mbps file via direct play. The first is unwatchable. The second is as it should be.

Logs are relatively unrevealing to me:
yakfod_plex_examplelog.txt (560.3 KB)
-0237 start stream and buffer repeatedly with network activity like first picture
-0240 rewind to time 0, instant smooth stream with network activity like second picture

The iOS app occasionally exhibits the same symptoms when used over cellular.

Starting at the Plex box and working my way out, I've systematically and scientifically tested, swapped, or replaced hardware and software at every hop I can access between my Plex box and my remote streamers' TVs. The only thing I found is that while the remote users' connections to their ISPs are fine, they have wifi mesh networks/repeaters that introduce some latency and packet loss.

It seems to me that the Samsung and new iOS clients are just very sensitive to this latency and packet loss due to the way they negotiate with the server: they stream small chunks, wait for the ACK, and don't seem to have a very large buffer, if any.

To test this I imaged LibreELEC on a spare Raspberry Pi 5, loaded PlexMod for Kodi on it, set the cache to 128 MB and the cache read rate to 7x, connected it to my streamer's same ■■■■■■ wifi, and poof: problem 100% solved. They can direct stream 50+ Mbps content without issue now.
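For reference, the cache settings above correspond to Kodi's standard advancedsettings.xml (a sketch based on the Kodi wiki's documented `<cache>` elements; PlexMod also exposes cache options in its own settings UI, so the exact values and file path may differ on your build):

```
<!-- ~/.kodi/userdata/advancedsettings.xml -->
<advancedsettings>
  <cache>
    <buffermode>1</buffermode>         <!-- 1 = buffer all filesystems, incl. network -->
    <memorysize>134217728</memorysize> <!-- cache size in bytes; 128 MB -->
    <readfactor>7</readfactor>         <!-- fill the cache at up to 7x the stream bitrate -->
  </cache>
</advancedsettings>
```

With `readfactor` set well above 1x, the client races ahead of playback whenever the network allows, which is exactly the burst-then-idle pattern in my second network graph.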

Tether the same Kodi box to my phone's cellular connection and, again, it works great at times when the new iOS client doesn't perform well over cellular.

So, for anyone having similar issues, where remote clients should be able to stream based on available bandwidth but can't, and network activity looks suspiciously bandwidth-limited: try increasing the client buffer/cache if you can.

My question to Plex staff: why are the official clients failing in the rather common scenario of poorly configured or congested wifi, particularly when other streaming apps (Netflix, Prime) on those same Samsung TVs work fine? Can the cache be increased, or can we have an advanced option to increase the cache in these clients? Is it possible this is a bug?

Thanks for taking the time to read. Happy to discuss more if interested.

iperf3 server on the Plex box on my network, iperf3 client on my phone on the remote network: 300 Mbps transfer from server to phone, or approximately 60% of their rated downlink.

Tracert between a client on their network and the Plex box shows all hops < 50 ms, except one or two hops in their network between wifi repeaters, which are variable but can be 100-250 ms. Speed tests to Ookla from their network show 0.1-0.5% packet loss on wifi and zero when wired. They are typical users and notice no problems with their service.

I did not know about BBR; I made the change on the Plex box and noticed no difference.
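For anyone else who wants to try BBR, it's a two-line sysctl change on Linux (assuming kernel 4.9 or newer; pairing it with the fq qdisc is the usual recommendation):

```
# /etc/sysctl.d/99-bbr.conf  (apply with: sudo sysctl --system)
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

# verify afterwards with:
#   sysctl net.ipv4.tcp_congestion_control   -> should print "= bbr"
```

BBR models bandwidth and RTT instead of backing off on every loss, so in theory it should help on lossy wifi last hops; in my case it made no visible difference.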

Clearly not everyone is having this problem, but I am rather convinced it is intrinsic to the current versions of Plex. Some negotiation between the client (at least iOS and Samsung) and the server at the start of a stream results in either normal playback or impaired data streaming. The impaired case manifests as buffering every few seconds, and is solved by forcing the PlexMod for Kodi client to fill a modestly robust 128 MB buffer.

Streaming to these same clients on their same suboptimal networks worked fine when I was running server 1.32.7 on a Synology NAS. Unfortunately I decided to upgrade to modern times (to use the new mobile clients more than anything else) and put 1.41.2-8 on a Beelink N100 (which Chuck uses without issues), and it has had this issue ever since. The only reason I haven't rolled back to 1.32 on the Beelink is that I'd really like the ability to use the mobile apps and hardware-transcoded burned-in subs.

Don’t take this as me pointing fingers. I appreciate the awesome software and help being provided. I can make and give Kodi boxes to my streamers for now, but ideally won’t need to long term!

Is that iperf3 forward or reverse speed?

Distro & version?

ps: You call my name and, like a :dog: , I’ll know it. :slight_smile:

@yakfod

I looked at your account on Plex.tv. Bad news:

Published No

Remote Access is not working.
This explains the 2 Mbps limit (Plex Pass users are given 2 Mbps for INDIRECT connections).

The port forwarding (firewall) configuration in your modem/router needs fixing.


Nope. Your logs indicate the playback is a transcode. Additionally, the client explicitly requested not to direct play.
The logs show multi-second delays before segments are given to the client, so it seems the transcode is not keeping up with the playback.


So much attention. Thank you for all your help. Let me explain:

Server: 1.41.8-9834, bare-metal Ubuntu Server 24.04 with kernel 6.11
Clients: iOS 2025.18.0, most up-to-date Samsung (I can get the exact version later if needed)

[Edited to add] iperf3 was forward. The client on my phone on the remote network initiated the transfer from the iperf3 server running on the Plex box on my network.

Currently not published because, in an effort to troubleshoot this, I made a plex subdomain and put Plex behind a reverse proxy (configured not to buffer). It didn’t make any difference, but I liked the setup, so I left it. The Plex server is fully accessible at https://plex.yakfod.net, which is published to you through the Plex server settings. I can undo this if it will help you troubleshoot.
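For anyone curious, "configured not to buffer" looks roughly like this in nginx (a sketch only; the upstream address here is illustrative, not my actual Plex box IP):

```
# nginx site config sketch: pass Plex through without response buffering
server {
    listen 443 ssl;
    server_name plex.yakfod.net;

    location / {
        proxy_pass http://192.168.1.100:32400;  # Plex box (address is illustrative)
        proxy_buffering off;                    # stream responses straight through
        proxy_request_buffering off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade; # websockets for remote control etc.
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The point of `proxy_buffering off` is to rule out the proxy itself introducing the stop-and-go delivery pattern.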

I apologize that I was unclear about the examples provided. The two pics of network activity are a 10 Mbps file being direct played to the iOS client on cellular.

The log is not from that exact time but from another instance of streaming media to my phone via cellular, where it buffered every few seconds and network activity looked like pic 1 (time 0237); then, after rewinding, it played fine and the network activity looked like pic 2 (time 0240). It might not have been the same file, and apparently I was transcoding rather than direct playing.

Why would it have to wait for the transcoder? The N100 is a beast; I typically see it hardware transcode at ~5-7x to 720p and throttle within a few seconds. I have it set to throttle at 120 seconds of readahead, I think.

Again, I genuinely appreciate all of your help and input here. I am willing to do whatever you ask to sort this out!

It’s fully accessible via a subdomain and reverse proxy. The subdomain is set in the server settings to be published by you. I set this up while troubleshooting this issue; it didn’t make a difference, but I liked not having open ports, so I left it. The same problem occurred when remote access was turned on, both with UPnP and when I manually port forwarded. I can revert to this if you (understandably) don’t trust my setup.

Will Plex use the relay even if it is specifically disabled in the server settings?

The log and the pics are not of the exact same instance of the problem (which happens both with direct play and transcoding). Sorry for the confusion.

Why would it be waiting, when the N100 hardware transcodes much faster than 1x? I typically see ~5x, but obviously it depends on the media and what it is being transcoded to.

Would more logs be helpful?

Latency can have a big impact when transcoding, and three wifi repeaters are not helping. Have them connect to the main router and see if that helps. Also try playing something that doesn’t need a transcode and see if you get the same issues. If these help, then it’s the latency.

The total response time for a segment includes the time waiting for the segment to be available and the time it takes to send it to the client. I assumed from reading your comment that the latter was not the dominant factor. Looking at the logs further, it’s clear it was not waiting on segments to be available. Sorry for believing you instead of distrusting you and relying only on the logs.

As MovieFan points out, latency can have a big impact, especially if transcode segments are grabbed in separate TCP sessions. After the initial startup, a direct play is a single TCP connection, allowing the window to grow large enough that latency becomes a non-factor in the transfer speed. 100-250 ms is pretty high, but more concerning is the variability, as that indicates either congestion or interference. Realistically, you should try as direct a connection as possible, see what happens there, and then add more variables to see their impact.
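To put rough numbers on why the window matters: a TCP connection can never move more than one window of data per round trip, so throughput is capped at window size divided by RTT regardless of link speed. A quick sketch with illustrative figures (a 64 KiB window that never grew, at the 200 ms RTT seen on the worst hop):

```shell
# Bandwidth-delay product: throughput ceiling = window / RTT
# 64 KiB window at 200 ms RTT -> ~2.6 Mbps, no matter how fast the link is
awk 'BEGIN {
    window_bytes = 64 * 1024   # un-grown TCP window (illustrative)
    rtt_seconds  = 0.200       # worst-hop RTT from the traceroute
    printf "ceiling: %.1f Mbps\n", window_bytes * 8 / rtt_seconds / 1e6
}'
```

That ceiling is in the same ballpark as the "suspiciously bandwidth-limited" rates described above, which is why a session whose window grows (or a client with a big cache) behaves so differently from one that stays small.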

Appreciate the input.

The same thing happens when direct playing or transcoding. Besides these users, it also happens when streaming via cellular to the iOS app on my phone.

One user has one repeater. The other has a mesh network with three nodes. While certainly introducing some latency and, in the case of the repeater, some packet loss, I would say this is a pretty typical, albeit suboptimal, home situation. It very likely is impairing streaming to some degree. But at the same time, these same users with the same networks can stream from all the major streaming services, and used to be able to stream Plex from me before I updated [moved the 1.32 server from Docker on a Synology NAS to 1.41.8 bare metal on an N100 mini PC].

I find it interesting that they can still stream from me using PlexMod for Kodi, which works great for both transcodes and high-bitrate direct streams, or Infuse on mobile, which only direct plays. It’s just the Samsung and iOS Plex apps that seem to be problematic on their networks or cellular.

I’m not ruling out something stupid on my end, I just don’t know what I don’t know.

If you can provide server and client logs when direct playing and showing this behavior, that might reveal something. Right now it’s just all guessing.

This is from today: I tried to play a 10.7 Mbps 1080p H.264 file remotely to the iOS client via cellular. At the time, a speedtest to Ookla showed 200 Mbps down, and iperf3 from the Plex box to a client on my phone was also 200 Mbps.

1520 - started the stream; it buffered every few seconds, network activity spiked, then seemed limited to 11 Mbps
1524 - hit the back-10s button a few times until it hit time 0; the stream immediately played smoothly without buffering and network activity changed to intermittent large spikes
1526 - stopped playback

Notes:
*192.168.1.6 is my reverse proxy
*1520 local time for me is 1920 UTC
*The only edits to the logs are to trim before 1520 and redact my name
*The exact same issue occurs when the reverse proxy is removed and remote access is active, either via UPnP or port forward

Pic 1: 1520-1524 after starting the stream


Pic 2: 1524-1526 after skipping backward to time zero

Logs:
Plex Media Server.log (448.3 KB)
log.txt (8.3 KB)
com.plexapp.plex 2025-06-26–12-33-58-144.log (48.9 KB)
player_com.plexapp.plex 2025-06-26–12-33-58-421.log (46.0 KB)

Quick update with more testing.

I spun up 1.32.7 on the N100 box, since I know that used to work on my NAS, made a new test library with zero migration, and found an old iPhone with the old Plex app. Guess what: same problems. Soooo, as you all probably already knew, this does not appear to be directly related to either the Plex server or the new clients. It looks like it’s me.

Either this is some weird incompatibility between Ubuntu Server 24.04 and the N100, or it’s my network hardware, of which the only things I haven’t ruled out are my firewall and ONT. Either way, it’s frustratingly sneaky because I have no other issues.

When I get some more time, I plan to bring 1.41.8 up on my NAS and see what happens. If it works, I’ll blame the N100 box and try another OS. Any recommendations for a tried-and-true OS for Plex on the N100?

If it doesn’t work, I guess it’s time to update/replace my firewall - which is the one thing I really don’t want to do, so of course it’s going to be that.

Appreciate all your help. Sorry for pestering all of you while I ride the struggle bus. I’ll get this working eventually. Will keep you updated for the sake of future googlers riding that same bus. Homelabbing is such a great hobby.

It looks like it’s my network hardware. I brought up 1.32.7 on my NAS, which I know used to work, and it now has the same issue.

Will close the loop after I figure this out. Again, thanks for the help. Sorry for wasting your time on what looks like a non-plex issue.

To close the loop: all my problems were indeed symptoms of a hardware failure on my end and had nothing to do with Plex Media Server or the clients.

I tried resetting and then re-imaging my firewall with no change, then tried replacing it with an underpowered OPNsense VM. As soon as I did, Plex worked fine. It would appear that a lightning strike that surged through the coax months ago (before I had fiber) did not spare my firewall as I had thought.

I don’t fully understand what died in the firewall such that everything except plex seemingly ran well, but it’s 10 years old and I think I’m past caring at this point.

Thanks again for all your patience and help. Now I get to shop for new firewall hardware!

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.