Constant buffering

Server Version#: 1.25.5.5492
Player Version#: 7.27

Recently I have noticed that my clients are buffering a ton, even though there have not really been any changes on the server side. My clients get a message that their connection to the server isn't fast enough, even on the same LAN over a gigabit connection. So I'm not sure what the issue is, but I don't think it's network related.

I do see a lot of “WARN - SLOW QUERY: It took 300.000000 ms to retrieve 1 items.” messages, but I don’t know why that would be. The Linux machine has a WD Black that, when I have tested it, can read at more than 3 GB/s.

Would the slow query be the reason for the constant buffering? How can I debug this?

Plex Media Server Logs_2022-02-03_01-08-43.zip (2.9 MB)

The web client has an issue at the moment with excessive buffering. I have not heard of it being fixed yet.

This tends to be on the Apple TV client, which I use most often, and has been going on for at least a couple of weeks.

@ChuckPa could you please take a look? It is so bad it’s making the server borderline impossible to use. I haven’t been able to figure out any issue on my own, so any advice would be very much appreciated :pray:

  1. Breathe :slight_smile: I was going to make my evening rounds.

  2. The host/container config regarding the media is a little confusing; please detail:
    – Debian host?
    – Docker container?
    – Mac-like /Volumes local file mount points?

  3. Playing media and running off the end indicates the analysis was not up to date; re-running analysis usually fixes it (a sketch follows this list).
    – Either new media was played before its initial analysis completed
    -or-
    – the media was updated (replaced with a different file of lesser bit rate / run length / smaller size).
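
As a rough sketch of that re-analysis using the bundled scanner (the container name matches the compose file later in the thread; section id 1 is a placeholder, --list prints the real ids):

docker exec -it plexmediaserver /bin/sh -c \
  'export LD_LIBRARY_PATH=/usr/lib/plexmediaserver/lib; \
   "/usr/lib/plexmediaserver/Plex Media Scanner" --list; \
   "/usr/lib/plexmediaserver/Plex Media Scanner" --analyze --section 1'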

Feb 03, 2022 00:55:00.888 [0x7f59f0dd4b38] DEBUG - Activity: updated activity 2c7c50ce-26bd-4731-bcef-cee49b3dde56 - completed 66.4% - Media download by cupp.sandra@yahoo.com
Feb 03, 2022 00:55:00.935 [0x7f59f0dd4b38] DEBUG - Using X-Forwarded-For: 2600:1700:2f70:a30f:9de1:b829:6966:93a0 as remote address
Feb 03, 2022 00:55:00.935 [0x7f59e514ab38] DEBUG - Request: [172.20.0.1:39670 (WAN)] GET /statistics/bandwidth?timespan=6 (22 live) GZIP Signed-in Token (Treverr)
Feb 03, 2022 00:55:00.942 [0x7f59f0dd4b38] DEBUG - Completed: [172.20.0.1:39670] 200 GET /statistics/bandwidth?timespan=6 (22 live) GZIP 6ms 5193 bytes (pipelined: 550)
Feb 03, 2022 00:55:00.964 [0x7f59f0dd4b38] DEBUG - Using X-Forwarded-For: 2607:1c80:54:1:545a:af2e:ff72:c8e6 as remote address
Feb 03, 2022 00:55:00.964 [0x7f59e52bfb38] DEBUG - Request: [172.20.0.1:39670 (WAN)] GET /video/:/transcode/universal/session/1DDE600A-AC56-49B0-9199-A29F8FBFF679/base/00987.ts (22 live) GZIP Signed-in
Feb 03, 2022 00:55:00.964 [0x7f59e52bfb38] DEBUG - [Transcode/1DDE600A-AC56-49B0-9199-A29F8FBFF679] Asked for segment 987 from session.
Feb 03, 2022 00:55:00.964 [0x7f59e52bfb38] WARN - [Transcode/1DDE600A-AC56-49B0-9199-A29F8FBFF679] Transcode runner appears to have died.
Feb 03, 2022 00:55:01.035 [0x7f59f0dd4b38] DEBUG - Activity: updated activity a532ad30-c49e-48a2-a823-65c6b66b0688 - completed 95.0% - Media download by cupp.sandra@yahoo.com
Feb 03, 2022 00:55:01.064 [0x7f59e52bfb38] DEBUG - [Transcode/1DDE600A-AC56-49B0-9199-A29F8FBFF679] Sending back blank segment for 987, we overestimated the number of segments.
Feb 03, 2022 00:55:01.064 [0x7f59e52bfb38] DEBUG - [Transcode/1DDE600A-AC56-49B0-9199-A29F8FBFF679] Returning segment 987 from session
Feb 03, 2022 00:55:01.064 [0x7f59e52bfb38] DEBUG - Content-Length of /usr/lib/plexmediaserver/Resources/empty.ts is 188 (of total: 188).
Feb 03, 2022 00:55:01.064 [0x7f59e5e83b38] DEBUG - [Transcode/1DDE600A-AC56-49B0-9199-A29F8FBFF679] Streaming Resource: Estimated bandwidth for 4B1DA50E-17C8-4006-8446-23B7D8CBFDFD to be 8kbps over 2.64s
  1. Slow query -
    – fragmented DB
    – Busy DB (remember, media analysis / scanning is a hard hitter on the DB, and access is single-threaded)
    – When was it last defragmented (optimized)?
    – Physically, how large is com.plexapp.plugins.library.db, as well as its -wal and -shm?
    – How much media, in total, is indexed (item count)? (A quick way to gather these numbers is sketched below.)
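
If it helps, here's a quick way to pull those numbers from inside the container, assuming the stock pms-docker layout under /config (Plex ships its own SQLite build, so using stock sqlite3 read-only here is an assumption):

DB="/config/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
ls -lh "$DB"/com.plexapp.plugins.library.db*      # main DB plus its -wal / -shm siblings
sqlite3 -readonly "$DB/com.plexapp.plugins.library.db" \
  "SELECT count(*) FROM metadata_items;"          # rough indexed-item count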

Thank you!

I moved this over from a Mac a while back and just kept the same mount points when I reconfigured it. This is running in the PMS Docker container.

version: "3.8"
services:
  plexmediaserver:
    image: plexinc/pms-docker:plexpass
    container_name: plexmediaserver
    restart: always
    ports:
      - "32400:32400"
    volumes:
      - /Volumes/All Media:/Volumes/All Media
      - ./data/plexmediaserver:/config
      - /dev/shm:/transcode
    environment:
      - TZ=America/Los_Angeles
      - PLEX_CLAIM=claim-xxx
      - NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
      - NVIDIA_VISIBLE_DEVICES=all
      - CUDA_DRIVER_CAPABILITIES=compute,video,utility
    runtime: nvidia
    pid: host

These files have been around for a while, nothing too new. I think the analysis definitely should have been up to date.

I optimized it today and I have it set to do that as part of maintenance. FWIW.
I checked and there wasn’t any other analysis / scanning activity that I saw. I can reproduce this again pretty frequently with more logs if that helps.

(Outside the docker container)
-rwxrwxr-x 1 trever trever 339M Feb 2 19:11 com.plexapp.plugins.library.db

(Inside the docker container)
-rwxrwxr-x 1 plex plex 339M Feb  3 03:23 com.plexapp.plugins.library.db

Thanks again!

The DB size is fine. A size less than 500 MB is normal and best.

This leads me to believe you have an I/O bottleneck somewhere.

First step is to verify there is I/O wait.

In a new terminal window (sized to fit all devices):

sudo watch -n 1 iostat -m

Now perform a playback

You want to look at these values:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.08    0.00    3.06    2.49    0.00   94.37

This is my Xeon E5 NAS performing a RAID5 → RAID6 volume reshape, so it’s clearly I/O bound.

You should not have wait times this high. Fractions of 1% are best.

%user will scale with processing load, but remember it is reported per CPU core (100% per core); see what’s bottlenecking. Are you jamming up on a single-threaded operation?

%system - overhead getting your media and managing the I/O. Needs to be low.

%iowait - This will likely be high … which is why the SLOW QUERY (disk I/O jammed up).
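
If %iowait stays high, the extended per-device stats narrow it to a disk (await is the average latency per I/O in ms, %util is how saturated the device is):

iostat -xm 1    # watch r_await / w_await and %util for the device holding the DB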

Afterthought —

Where is ./data? Normally that’s a full path. Is this on the root file system?

Hmm, yeah, I did notice that looked high…

Every 1.0s: iostat -m                                                                                                                    server: Wed Feb  2 20:15:41 2022

Linux 5.13.0-27-generic (server)        02/02/2022      _x86_64_        (24 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.83    0.47    2.33   37.10    0.00   57.27

Device             tps    MB_read/s    MB_wrtn/s    MB_dscd/s    MB_read    MB_wrtn    MB_dscd
loop0             0.00         0.00         0.00         0.00          0          0          0
loop1             0.02         0.00         0.00         0.00         10          0          0
loop10            0.00         0.00         0.00         0.00          1          0          0
loop11            0.00         0.00         0.00         0.00          0          0          0
loop12            0.00         0.00         0.00         0.00          1          0          0
loop13            0.00         0.00         0.00         0.00          1          0          0
loop14            0.00         0.00         0.00         0.00          1          0          0
loop15            0.26         0.01         0.00         0.00        149          0          0
loop16            0.02         0.00         0.00         0.00          5          0          0
loop17            0.00         0.00         0.00         0.00          1          0          0
loop18            0.00         0.00         0.00         0.00          0          0          0
loop2             0.00         0.00         0.00         0.00          0          0          0
loop3             0.00         0.00         0.00         0.00          0          0          0
loop4             0.00         0.00         0.00         0.00          0          0          0
loop5             0.00         0.00         0.00         0.00          0          0          0
loop6             0.00         0.00         0.00         0.00          0          0          0
loop7             0.00         0.00         0.00         0.00          0          0          0
loop8             0.00         0.00         0.00         0.00          1          0          0
loop9             0.00         0.00         0.00         0.00          0          0          0
nvme0n1         290.34         1.50         2.71         0.00      24323      43788          0
sda               0.08         0.00         0.00         0.00          8          0          0
sdb             117.35        15.79         3.13         0.00     255265      50554          0
sdc               6.83         0.69         0.00         0.00      11137          2          0
sdd              42.98         4.95         3.39         0.00      79958      54726          0
sde              20.74         2.80         0.00         0.00      45319          1          0
sdf               2.81         0.25         0.00         0.00       4053          1          0
sdg              10.28         0.06         0.75         0.00        935      12103          0
sdh               0.21         0.01         0.00         0.00         84          0          0

The slow query comes from the DB location, correct? That’s on nvme0n1, which is a WD Black NVMe drive, so I am surprised that’s what would be jamming it up, but maybe there is something underlying causing this.

It is on the root drive:

/docker/plexmediaserver/data
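
For reference, this confirms on the host which block device backs that path:

df -h /docker/plexmediaserver/data    # the Filesystem column should name the nvme0n1 partition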

That is totally unacceptable (as you well know)

sudo dd if=/dev/nvme0n1 of=/dev/null bs=4M count=20000

That should suck it in at 2.7 GB/s or better.

[chuck@lizum ~.2000]$ sudo dd if=/dev/nvme0n1 of=/dev/null bs=4M count=20000
20000+0 records in
20000+0 records out
83886080000 bytes (84 GB, 78 GiB) copied, 33.962 s, 2.5 GB/s
[chuck@lizum ~.2001]$ 
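
If page-cache effects are a concern, the same test can bypass the cache (iflag=direct is GNU dd):

sudo dd if=/dev/nvme0n1 of=/dev/null bs=4M count=20000 iflag=direct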


I have an idea. I think you’ll kick yourself. Let me see your results.

Here ya go:

20000+0 records in
20000+0 records out
83886080000 bytes (84 GB, 78 GiB) copied, 55.4866 s, 1.5 GB/s

Be patient… Do NOT panic.

sudo systemctl start fstrim.service

It’s going to trim the SSD.

It might feel like it’s locking up… Watch the disk I/O light.

It’s going to release all the dirty file system pages back to the SSD free list so it can freely allocate versus copy/write.
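
The same thing can also be run by hand with verbose output, and the weekly timer checked (these are the standard util-linux / systemd units on Ubuntu):

sudo fstrim -av                       # trim every mounted filesystem that supports discard
systemctl list-timers fstrim.timer    # confirm the weekly trim is actually scheduled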

Ok done! You know what… Would that also explain why I have missing GBs of storage space… Like 500 GB I couldn’t account for…?

WHAT? When was fsck last run?

Look in /lost+found.

Do you see a bunch of files?
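
If you're not sure when it last ran, ext4 records it in the superblock; something like this (the partition name is an assumption, substitute your root device):

sudo tune2fs -l /dev/nvme0n1p2 | grep -iE 'last checked|mount count'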

No, there is nothing in there, but for whatever reason I can’t account for a large amount of SSD space (across 2 SSDs, from when I moved from a Samsung SATA drive to this NVMe one).

For ref: disk usage - Missing ~900GB of hard drive space - Ask Ubuntu
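
That thread points at ext4's reserved blocks: by default 5% of each filesystem is held back for root, which alone can hide hundreds of GB across large drives. Something like this shows (and can shrink) the reserve; the partition names are assumptions:

sudo tune2fs -l /dev/nvme0n1p2 | grep -E 'Reserved block count|Block size'
# reserved blocks x block size = space df counts against you
# sudo tune2fs -m 1 /dev/sdb1   # optionally shrink the reserve on a data-only filesystem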

That’s a good question - I might need to do so. It's probably been a while, at least manually.

Well,

  1. Boot the rescue USB.
  2. Take it to shell mode.
  3. fsck all the partitions (unmounted, of course); a sketch follows below.
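
Something like this from the rescue shell (device names are examples only; check lsblk first):

lsblk -f                              # identify each ext4 partition and confirm it is not mounted
sudo fsck.ext4 -f /dev/nvme0n1p2      # root filesystem
sudo fsck.ext4 -f /dev/sdb1           # repeat for each media disk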

Before doing this, what does /etc/fstab look like?

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sdg2 during installation
UUID=33dc7666-f7ff-42ee-8f8b-71fae9db1d72 /               ext4    errors=remount-ro 0       1

# /boot/efi was on /dev/sdg1 during installation
UUID=0425-70B4  /boot/efi       vfat    umask=0077      0       1
/swapfile                                 none            swap    sw              0       0

# Mount Drives
# <file system>        <mount point>  <type>    <options>             <dump>  <pass>
UUID=e97685d9-d13d-416e-ba8b-25ebf7afce23 /Volumes/Media\0404		ext4	defaults,nofail 0 0
UUID=f062d7fd-37f1-4d73-8c0a-9d116d360c56 /Volumes/Media\0403		ext4	defaults,nofail 0 0
UUID=31c264c2-ef31-45e0-a28b-0739571f86f8 /Volumes/Media\0402 		ext4 	defaults,nofail 0 0
UUID=2fdf80cd-851e-42fe-a4a4-b08f5000f501 /Volumes/Media 		ext4 	defaults,nofail 0 0
UUID=25886c7a-8df1-4cc3-be72-3201caad840d /Volumes/Storage    		ext4 	defaults,nofail 0 0
UUID=53c6805e-5d4f-4c3a-9054-dcfd1ee82345 /Volumes/Temporary		ext4	defaults,nofail 0 0
#UUID=8A52BB4D52BB3CAF			  /Volumes/Windows		ntfs	defaults,nofail 0 0
UUID=49186bbe-e0b2-4a5d-b4bb-2a88cda524a5 /Volumes/Data			ext4	defaults,nofail 0 0
/Volumes/Media* /Volumes/All\040Media/ fuse.mergerfs use_ino,cache.files=off,dropcacheonclose=true,allow_other,category.create=mfs,moveonenospc=true,minfreespace=25G,fsname=/dev/allmedia 0 0