1.23.2.4656 update makes PMS inaccessible until restart

Optimizing the database on a 2.7 GB/sec NVMe SSD should be quick.

That said, mine doesn’t even take 15 seconds to optimize, so let’s measure yours:

Time the entire sequence as accurately as you can:

  1. Empty Trash
  2. Clean Bundles
  3. Optimize database

Download the logs ZIP file and attach when done.

Used a stopwatch and it took around 20 seconds, including all the clicks and prompts, for those three tasks. The database optimize itself took maybe 5 seconds. Here is the logs ZIP:

Logs.zip (3.1 MB)

@kevindd992002

How big is the DB file?

What does ls -la show for the Databases directory?

What I’m seeing looks like it hasn’t been optimized. (7 AM local time?)

Where is the DB file? I was under the impression that the database is spread across different folders under the Plex data directory, or am I wrong?

This is what I have for the Plex directory:

root@nuc:~# ls -la /home/plex/Library/Application\ Support/Plex\ Media\ Server/
total 52
drwxrwxr-x 11 plex plex 4096 Jul  7 02:06  .
drwxrwxr-x  3 plex plex 4096 Mar 16  2020  ..
drwxrwxr-x  6 plex plex 4096 Jul  7 14:06  Cache
drwxr-xr-x  5 plex plex 4096 Jun 15 06:25  Codecs
drwxrwxr-x 80 plex plex 4096 Jul  6 06:25 'Crash Reports'
drwxrwxr-x  2 plex plex 4096 Mar 20  2020  Diagnostics
drwxrwxr-x  3 plex plex 4096 Jul  7 12:30  Logs
drwxrwxr-x  3 plex plex 4096 Mar 16  2020  Media
drwxrwxr-x  5 plex plex 4096 Jun 25 01:17  Metadata
-rw-r--r--  1 plex plex    5 Jul  7 02:05  plexmediaserver.pid
drwxrwxr-x  3 plex plex 4096 Mar 16  2020  Plug-ins
drwxrwxr-x  7 plex plex 4096 Mar 16  2020 'Plug-in Support'
-rw-r--r--  1 plex plex 1626 Jul  7 02:06  Preferences.xml

I looked for a Databases folder and this is what I found:

root@nuc:~# ls -la /home/plex/Library/Application\ Support/Plex\ Media\ Server/Plug-in\ Support
total 28
drwxrwxr-x  7 plex plex 4096 Mar 16  2020  .
drwxrwxr-x 11 plex plex 4096 Jul  7 02:06  ..
drwxrwxr-x 19 plex plex 4096 Dec  9  2020  Caches
drwxrwxr-x 19 plex plex 4096 Dec  9  2020  Data
drwxrwxr-x  2 plex plex 4096 Jul  7 02:05  Databases
drwxrwxr-x 10 plex plex 4096 Mar 16  2020 'Metadata Combination'
drwxrwxr-x  2 plex plex 4096 Mar 16  2020  Preferences
root@nuc:~# ls -la /home/plex/Library/Application\ Support/Plex\ Media\ Server/Plug-in\ Support/Databases/
total 1930068
drwxrwxr-x 2 plex plex      4096 Jul  7 02:05 .
drwxrwxr-x 7 plex plex      4096 Mar 16  2020 ..
-rw-rw-r-- 1 plex plex 223813632 Jul  6 06:25 com.plexapp.plugins.library.blobs.db
-rw-r--r-- 1 plex plex 223516672 Jun 27 01:00 com.plexapp.plugins.library.blobs.db-2021-06-27
-rw-r--r-- 1 plex plex 223568896 Jun 30 01:03 com.plexapp.plugins.library.blobs.db-2021-06-30
-rw-r--r-- 1 plex plex 223700992 Jul  3 01:01 com.plexapp.plugins.library.blobs.db-2021-07-03
-rw-r--r-- 1 plex plex 223813632 Jul  6 01:03 com.plexapp.plugins.library.blobs.db-2021-07-06
-rw-rw-r-- 1 plex plex     32768 Jul  7 12:29 com.plexapp.plugins.library.blobs.db-shm
-rw-rw-r-- 1 plex plex     58720 Jul  7 12:29 com.plexapp.plugins.library.blobs.db-wal
-rw-rw-r-- 1 plex plex 171055104 Jul  7 14:06 com.plexapp.plugins.library.db
-rw-r--r-- 1 plex plex 171531264 Jun 27 01:00 com.plexapp.plugins.library.db-2021-06-27
-rw-r--r-- 1 plex plex 172189696 Jun 30 01:03 com.plexapp.plugins.library.db-2021-06-30
-rw-r--r-- 1 plex plex 170891264 Jul  3 01:01 com.plexapp.plugins.library.db-2021-07-03
-rw-r--r-- 1 plex plex 170938368 Jul  6 01:03 com.plexapp.plugins.library.db-2021-07-06
-rw-rw-r-- 1 plex plex     32768 Jul  7 14:42 com.plexapp.plugins.library.db-shm
-rw-rw-r-- 1 plex plex   1202088 Jul  7 14:42 com.plexapp.plugins.library.db-wal
root@nuc:~# du -sh /home/plex/Library/Application\ Support/Plex\ Media\ Server/Plug-in\ Support/Databases/
1.9G    /home/plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases/

Kevin,
The file com.plexapp.plugins.library.db is the actual database. The other ‘blobs’ database holds supporting info.

I was concerned the WAL and SHM files had grown in size, but these sizes are normal for a running system. They are SQLite’s working files (write-ahead log and shared-memory index); SQLite folds the WAL back into the DB file during a normal shutdown.
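As a quick illustration (a throwaway demo database, not the Plex DB itself): SQLite in WAL mode keeps -wal and -shm side files next to the database while connections are open, and checkpoints the WAL back into the main file on a clean close.

```shell
# Hypothetical demo in a temp dir; requires the sqlite3 CLI.
d=$(mktemp -d) && cd "$d"
sqlite3 demo.db <<'EOF'
PRAGMA journal_mode=WAL;
CREATE TABLE t(x INTEGER);
INSERT INTO t VALUES (1);
EOF
# The sqlite3 shell exited cleanly, so the WAL was checkpointed and the
# -wal/-shm side files were removed; only demo.db remains:
ls demo.db*
```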

Given your DB is 171 MB, on that i7 NUC, there is NO WAY you should be getting this crappy DB performance unless it’s how you’re storing the metadata. You have a 10,000-Passmark CPU.

Is it on the internal SSD or on an external USB drive?

Exactly. This NUC 10 is a beast and I’m surprised too.

Everything is on the internal NVMe SSD except /mnt/storage, which is an internal 2.5" SATA SSD containing only music files.

What are the NUC firmware settings for performance?

Did you update the firmware when you got it?

Might it be set to Economy mode?

I had to tweak my performance settings on the Hades Canyon after a firmware update.

How about a quick performance benchmark?

  • sudo apt install sysbench
  • sysbench cpu --threads=8 run

Now that NUC should clean this machine’s clock :wink:

[root@lizum chuck]# sysbench cpu --threads=8 run
sysbench 1.0.20 (using system LuaJIT 2.1.0-beta3)

Running the test with following options:
Number of threads: 8
Initializing random number generator from current time


Prime numbers limit: 10000

Initializing worker threads...

Threads started!

CPU speed:
    events per second:  8356.28

General statistics:
    total time:                          10.0010s
    total number of events:              83581

Latency (ms):
         min:                                    0.81
         avg:                                    0.96
         max:                                   41.12
         95th percentile:                        0.97
         sum:                                79978.97

Threads fairness:
    events (avg/stddev):           10447.6250/81.47
    execution time (avg/stddev):   9.9974/0.00

[root@lizum chuck]# 

I constantly update this NUC’s firmware. My last update was probably two months ago, so it’s fairly recent even if a newer version is out (I have to check). I’m remote from the NUC so I can’t check the BIOS/UEFI settings right now, but I always make it a point to use “performance” mode settings. I’ll verify when I’m next there, the weekend after next.

However, here are my bench results:

root@nuc:~# sysbench cpu --threads=8 run
sysbench 1.0.18 (using system LuaJIT 2.1.0-beta3)

Running the test with following options:
Number of threads: 8
Initializing random number generator from current time


Prime numbers limit: 10000

Initializing worker threads...

Threads started!

CPU speed:
    events per second:  7806.54

General statistics:
    total time:                          10.0009s
    total number of events:              78083

Latency (ms):
         min:                                    0.79
         avg:                                    1.02
         max:                                    1.42
         95th percentile:                        1.21
         sum:                                79987.62

Threads fairness:
    events (avg/stddev):           9760.3750/1330.43
    execution time (avg/stddev):   9.9985/0.00

It looks like yours is still faster :slight_smile: Are those results from a NUC as well?

Yes,

NUC8-i7-HVK (i7-8809G CPU)
:slight_smile:

I would start by checking your firmware/BIOS settings. It doesn’t feel right.

Argh. Do you know which specific BIOS settings I should be checking?

Also, the performance difference we see isn’t big enough to explain the SLOW query issue on my PMS, is it?

You’re benchmarking about 5% slower even though your CPU scores roughly 25% higher composite, so unless you’ve got the clock spun up, that 1.1 GHz base clock, plus any latency before it boosts, is going to bite you. My machine’s base clock is higher, so I get the jump sooner.

Still that doesn’t justify the nasty results.

When’s the last time you ran fstrim.service? :thinking:

Linux should be doing it – BUT – if it’s not, your I/O is going to tank.

If you’ve never run it before :scream:

It will appear to hang while it trims the SSD. This is NORMAL.

Lastly, you’ll probably want to enable the schedule (the service itself is static, so you enable the timer):
systemctl enable fstrim.timer
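Putting the steps above together, a sketch assuming a systemd-based distro like this Debian/Ubuntu NUC (unit names are the standard ones shipped by util-linux/systemd):

```shell
# Does each drive actually support discard? (nonzero DISC-GRAN/DISC-MAX = yes)
lsblk --discard

# Is the periodic trim already scheduled?
systemctl status fstrim.timer

# One-off manual trim of all mounted filesystems that support it
# (may appear to hang on a never-trimmed SSD -- that's normal):
sudo fstrim -av

# Keep it running on a schedule from now on:
sudo systemctl enable --now fstrim.timer
```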


Here are the spec differences between our CPUs:

NUC10 (i7-10710U)

NUC8 (i7-8809G)

So yours has fewer cores/threads but a higher base frequency, while mine has more cores/threads and a higher max turbo frequency. I ran the bench again with 12 threads and here’s what I got:

root@nuc:~# sysbench cpu --threads=12 run
sysbench 1.0.18 (using system LuaJIT 2.1.0-beta3)

Running the test with following options:
Number of threads: 12
Initializing random number generator from current time


Prime numbers limit: 10000

Initializing worker threads...

Threads started!

CPU speed:
    events per second:  9515.01

General statistics:
    total time:                          10.0010s
    total number of events:              95173

Latency (ms):
         min:                                    1.11
         avg:                                    1.26
         max:                                   25.83
         95th percentile:                        1.30
         sum:                               119986.52

Threads fairness:
    events (avg/stddev):           7931.0833/36.96
    execution time (avg/stddev):   9.9989/0.00

Does that look about right to you?

As for fstrim.service, I have never run it. I thought Linux automatically runs trim like Windows does :confused: I also checked the status and it looks like it’s disabled:

root@nuc:~# systemctl status fstrim.service
● fstrim.service - Discard unused blocks on filesystems from /etc/fstab
   Loaded: loaded (/lib/systemd/system/fstrim.service; static; vendor preset: en
   Active: inactive (dead)
     Docs: man:fstrim(8)

So for Linux installs, TRIM is something you need to enable manually? Should I simply enable the service, and the TRIM process will automatically take care of the SSD?

Your distro should have it enabled by default, but apparently not.

Manually enable it so you know the timer is running.
Now run it manually to force a trim.

After that, trim will take care of itself.

I’ve found that if I ever BEAT on the SSD really, really hard (like loading a full Plex system), it’s worth trimming it afterward.

Ok, I’ll read up on this first and make sure TRIM runs on the SSDs I have. After running TRIM, it’s just a matter of checking the Plex logs to see if they still show those slow-query warnings, correct?
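One way to check afterwards is to grep the PMS logs (the path is from earlier in this thread). The warning line below is a synthetic stand-in, not copied from real logs, so adjust the pattern to whatever your logs actually contain:

```shell
# Synthetic stand-in for a PMS log file; the real one lives under
# /home/plex/Library/Application Support/Plex Media Server/Logs/
log=$(mktemp)
printf 'Jul 08 01:30:00 [Req#42] SLOW QUERY: took 200ms\n' > "$log"

# Count matching warning lines (zero after the trim would be the goal):
grep -ci "slow query" "$log"
```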

Yes, that’s the first step.

I would shut Plex down before running the trim, so it gets everything:

  1. Stop Plex
  2. Trim
  3. Start Plex

It should haul.

For reference:

[chuck@lizum ~.509]$ sudo systemctl status fstrim.service
● fstrim.service - Discard unused blocks on filesystems from /etc/fstab
     Loaded: loaded (/usr/lib/systemd/system/fstrim.service; static)
     Active: inactive (dead) since Tue 2021-07-06 12:06:52 EDT; 1 day 1h ago
TriggeredBy: ● fstrim.timer
       Docs: man:fstrim(8)
   Main PID: 590067 (code=exited, status=0/SUCCESS)
        CPU: 2.652s

Jul 06 12:04:34 lizum systemd[1]: Starting Discard unused blocks on filesystems from /etc/fstab...
Jul 06 12:06:52 lizum fstrim[590067]: /backup: 144.4 GiB (155081256960 bytes) trimmed on /dev/nvme1n1p1
Jul 06 12:06:52 lizum fstrim[590067]: /home: 14.7 GiB (15745425408 bytes) trimmed on /dev/nvme0n1p4
Jul 06 12:06:52 lizum fstrim[590067]: /boot/efi: 455.1 MiB (477224960 bytes) trimmed on /dev/nvme0n1p1
Jul 06 12:06:52 lizum fstrim[590067]: /: 97.3 GiB (104520609792 bytes) trimmed on /dev/nvme0n1p2
Jul 06 12:06:52 lizum systemd[1]: fstrim.service: Succeeded.
Jul 06 12:06:52 lizum systemd[1]: Finished Discard unused blocks on filesystems from /etc/fstab.
Jul 06 12:06:52 lizum systemd[1]: fstrim.service: Consumed 2.652s CPU time.
[chuck@lizum ~.510]$

Based on the Debian docs, the recommended way is to just enable fstrim.timer for the weekly trim schedule, which is what I did. Instead of waiting for the schedule to run though, should I run fstrim --fstab directly, or just start fstrim.service, which will stop on its own after the trim run?

This is what I got:

root@nuc:~# systemctl status fstrim.service
● fstrim.service - Discard unused blocks on filesystems from /etc/fstab
   Loaded: loaded (/lib/systemd/system/fstrim.service; static; vendor preset: enabled)
   Active: inactive (dead) since Thu 2021-07-08 01:23:45 PST; 44s ago
     Docs: man:fstrim(8)
  Process: 2433 ExecStart=/sbin/fstrim -Av (code=exited, status=0/SUCCESS)
 Main PID: 2433 (code=exited, status=0/SUCCESS)

Jul 08 01:22:45 nuc systemd[1]: Starting Discard unused blocks on filesystems from /etc/fstab...
Jul 08 01:23:45 nuc fstrim[2433]: /mnt/storage: 904.9 GiB (971654193152 bytes) trimmed on /dev/disk/by-id/ata-S
Jul 08 01:23:45 nuc fstrim[2433]: /var: 4.1 GiB (4347596800 bytes) trimmed on /dev/nvme0n1p3
Jul 08 01:23:45 nuc fstrim[2433]: /tmp: 1.8 GiB (1928740864 bytes) trimmed on /dev/nvme0n1p5
Jul 08 01:23:45 nuc fstrim[2433]: /home: 178.7 GiB (191834804224 bytes) trimmed on /dev/nvme0n1p6
Jul 08 01:23:45 nuc fstrim[2433]: /boot/efi: 493.7 MiB (517681152 bytes) trimmed on /dev/nvme0n1p1
Jul 08 01:23:45 nuc fstrim[2433]: /: 18.9 GiB (20252884992 bytes) trimmed on /dev/nvme0n1p2
Jul 08 01:23:45 nuc systemd[1]: fstrim.service: Succeeded.
Jul 08 01:23:45 nuc systemd[1]: Started Discard unused blocks on filesystems from /etc/fstab.

It does look like it’s set to run fstrim -Av but I’m not sure if it’s still running at this point.