Used a stopwatch, and with all the clicks and prompts it took around 20 seconds for those three tasks. The Optimize Database step alone took about 5 seconds. Here is the Logs ZIP:
Kevin,
The file com.plexapp.plugins.library.db is the actual database. The other ‘blobs’ database is for supporting info.
I was concerned the WAL and SHM files had grown in size, but those sizes are normal for a running system. They are cache files; SQLite3 folds them back into the DB file during a normal shutdown.
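If you ever want to see that fold-back happen yourself, here’s a minimal sketch, assuming the default .deb package path for the Plex databases (adjust for your install, and stop Plex Media Server first so nothing has the DB open):
cd "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
sqlite3 com.plexapp.plugins.library.db "PRAGMA wal_checkpoint(TRUNCATE);"   # fold the WAL back into the main DB file
sqlite3 com.plexapp.plugins.library.db "PRAGMA integrity_check;"            # optional sanity check afterwards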
Given your DB is only 171 MB and it’s on that i7 NUC, there is NO WAY you should be getting such crappy DB performance unless it’s how you’re storing the metadata. You have a 10,000-Passmark CPU.
Is it on an internal SSD or on an external USB drive?
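If you’re not sure, something like this will tell you (assuming the default /var/lib/plexmediaserver data location):
df -h /var/lib/plexmediaserver            # which device/filesystem the Plex data is on
lsblk -o NAME,TRAN,MODEL,SIZE,MOUNTPOINT  # the TRAN column shows usb vs sata/nvme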
[root@lizum chuck]# sysbench cpu --threads=8 run
sysbench 1.0.20 (using system LuaJIT 2.1.0-beta3)
Running the test with following options:
Number of threads: 8
Initializing random number generator from current time
Prime numbers limit: 10000
Initializing worker threads...
Threads started!
CPU speed:
events per second: 8356.28
General statistics:
total time: 10.0010s
total number of events: 83581
Latency (ms):
min: 0.81
avg: 0.96
max: 41.12
95th percentile: 0.97
sum: 79978.97
Threads fairness:
events (avg/stddev): 10447.6250/81.47
execution time (avg/stddev): 9.9974/0.00
[root@lizum chuck]#
I update this NUC’s firmware regularly; my last update was probably two months ago, so it’s fairly recent unless something newer has come out since (I have to check). I’m remote from the NUC right now, so I can’t get into the BIOS/UEFI settings, but I always make it a point to use the “performance” settings. I’ll verify when I’m back there the weekend after next.
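In the meantime I can at least pull the firmware version from the running system, something like this (assuming dmidecode is installed; needs root):
dmidecode -s bios-version       # installed firmware version
dmidecode -s bios-release-date  # when that firmware was released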
However, here are my bench results:
root@nuc:~# sysbench cpu --threads=8 run
sysbench 1.0.18 (using system LuaJIT 2.1.0-beta3)
Running the test with following options:
Number of threads: 8
Initializing random number generator from current time
Prime numbers limit: 10000
Initializing worker threads...
Threads started!
CPU speed:
events per second: 7806.54
General statistics:
total time: 10.0009s
total number of events: 78083
Latency (ms):
min: 0.79
avg: 1.02
max: 1.42
95th percentile: 1.21
sum: 79987.62
Threads fairness:
events (avg/stddev): 9760.3750/1330.43
execution time (avg/stddev): 9.9985/0.00
It looks like yours is still faster. Are those results from a NUC as well?
It looks like you’re running about 7% slower even though your CPU is roughly 25% faster on the composite score, so unless you’ve got the clocks spun up, that 1.1 GHz base clock, plus any latency before it boosts, is going to bite you. My machine’s base clock is higher, so I get the jump.
Still, that doesn’t justify the nasty results.
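One thing worth checking while you’re at it (a sketch; paths assume the usual cpufreq sysfs layout) is whether the governor is actually letting the cores ramp up:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # powersave vs performance/schedutil
grep "cpu MHz" /proc/cpuinfo                                # current per-core clocks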
When’s the last time you ran fstrim.service?
Linux should be doing it – BUT – if it’s not, your I/O is going to tank.
If you’ve never run it before, it will appear to hang while it trims the SSD. This is NORMAL.
Lastly, you’ll probably want to enable it: systemctl enable fstrim.service
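Roughly like this (a sketch; on some distros the service unit is static and the timer is what you actually enable):
fstrim -av                            # one-off trim of every mounted filesystem that supports it; may take a while
systemctl enable --now fstrim.timer   # schedule periodic trims (weekly by default)
systemctl list-timers fstrim.timer    # confirm when the next run is due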
So yours has fewer cores/threads but higher base frequency. Mine has more cores/threads and higher max turbo frequency. I ran the bench again with 12 threads and here’s what I got:
root@nuc:~# sysbench cpu --threads=12 run
sysbench 1.0.18 (using system LuaJIT 2.1.0-beta3)
Running the test with following options:
Number of threads: 12
Initializing random number generator from current time
Prime numbers limit: 10000
Initializing worker threads...
Threads started!
CPU speed:
events per second: 9515.01
General statistics:
total time: 10.0010s
total number of events: 95173
Latency (ms):
min: 1.11
avg: 1.26
max: 25.83
95th percentile: 1.30
sum: 119986.52
Threads fairness:
events (avg/stddev): 7931.0833/36.96
execution time (avg/stddev): 9.9989/0.00
Does that look about right to you?
As for fstrim.service, I have never run it. I thought Linux automatically runs TRIM the way Windows does. I also checked the status and it shows inactive:
root@nuc:~# systemctl status fstrim.service
● fstrim.service - Discard unused blocks on filesystems from /etc/fstab
Loaded: loaded (/lib/systemd/system/fstrim.service; static; vendor preset: en
Active: inactive (dead)
Docs: man:fstrim(8)
So for Linux installs, TRIM is something you need to enable manually? Should I simply enable the service so that TRIM automatically takes care of the SSD?
OK, I’ll read up on this first and make sure TRIM runs on the SSDs I have. After running TRIM, it’s just a matter of checking the Plex logs to see if they still show those slow query warnings, correct?
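I’m assuming I can just grep for it, something like this (the log path assumes the standard .deb install, and the exact warning wording may differ):
grep -i "slow" "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Logs/Plex Media Server.log" | tail -20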
Based on the Debian docs, the recommended way is to just enable fstrim.timer, which sets up the weekly TRIM schedule, so that’s what I did. Instead of waiting for the schedule to run, though, should I do fstrim --fstab, or just start fstrim.service and let it stop on its own after the trim run?
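For reference, these are the two options I’m weighing (as I understand them; corrections welcome):
fstrim --fstab --verbose        # trim everything listed in /etc/fstab once, right now
systemctl start fstrim.service  # same one-off via the oneshot unit; it goes back to inactive when done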