Not sure if for 3 you meant ‘metadata_item_views’ or not; if so, the count for that is 92067. Interestingly, when I try to perform a manual VACUUM on the SQLite database, I get the following:
Error: no such collation sequence: naturalsort
Perhaps there’s an error in the DB that’s been preventing the automated optimization from being successful?
[chuck@lizum Databases.236]$ sudo sqlite3 com.plexapp.plugins.library.db
SQLite version 3.22.0 2018-01-22 18:45:57
Enter ".help" for usage hints.
sqlite> select count(*) from metadata_item_views;
92067
sqlite>
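Poking around a bit more: naturalsort doesn’t appear to be a built-in SQLite collation, so presumably Plex registers it at runtime and the stock sqlite3 CLI simply doesn’t have it, which would explain the VACUUM error. If it helps, this lists whatever in the schema references that collation; the query itself is generic SQLite, nothing Plex-specific:

# list the schema objects (mostly indexes) defined with the naturalsort collation
sqlite3 com.plexapp.plugins.library.db "SELECT type, name, sql FROM sqlite_master WHERE sql LIKE '%naturalsort%';"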
If you’d like, you can also copy the DB (Plex not running, of course) and export / reimport it. That will ensure all the records are properly packed.
The general procedure is to:
Manually drop the naturalsort-related indexes.
.dump the DB to a .sql file ( echo .dump | sqlite3 com.plexapp.plugins.library.db > file.sql ).
Import the .sql into a fresh DB file and swap it into place.
Verify this, and make sure the resultant file perms are the same. (The full sequence is sketched below.)
Also, if you do this to a test copy first, you can see how much compaction really occurs. You’ll be able to directly measure how much space was lost to SQLite3 overhead from additions/deletions, or left behind because it couldn’t optimize.
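A rough sketch of that sequence (work on copies with Plex stopped; the index name below is just a placeholder, drop whichever indexes actually reference naturalsort per the query earlier in the thread):

# work on a copy, with Plex stopped
cp com.plexapp.plugins.library.db test.db

# drop the naturalsort-related indexes (placeholder name; use the real ones)
sqlite3 test.db "DROP INDEX IF EXISTS index_reported_by_the_query_above;"

# dump to SQL, then rebuild a fresh, fully packed database from it
echo .dump | sqlite3 test.db > file.sql
sqlite3 rebuilt.db < file.sql

# compare sizes to see how much overhead the rebuild reclaimed
ls -l com.plexapp.plugins.library.db rebuilt.db

# before putting a rebuilt DB into place, match the original ownership and perms
chown --reference=com.plexapp.plugins.library.db rebuilt.db
chmod --reference=com.plexapp.plugins.library.db rebuilt.db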
…and it was non-responsive to the UI / showing offline again this morning. Should we be concerned that the WAL file is nearly the size of the database itself?
SQLite3 will search the WAL / SHM files before actually accessing the main DB, which is a good reason for committing everything permanently to the DB with the .dump and purging them afterward.
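If you only want to fold the WAL back into the main file without doing the full dump/reimport, SQLite’s own checkpoint pragma can do that too; this is generic SQLite rather than anything Plex-specific, and it needs Plex stopped so nothing is holding the DB open:

# see how big the sidecar files have grown
ls -lh com.plexapp.plugins.library.db*

# merge the WAL contents into the main DB and truncate the WAL
sqlite3 com.plexapp.plugins.library.db "PRAGMA wal_checkpoint(TRUNCATE);"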
Ah, so that listing with the SHM and huge WAL was from this morning after having let Plex run overnight, after I’d done the .dump, wipe, and reimport of the .sql file… The files don’t appear to get removed when I stop the Docker container, though…
If PMS is sent a signal 15 (SIGTERM), it will stop nicely and clean those up.
If it’s aborted harshly (SIGKILL), they will remain.
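If the container runtime is the one doing the harsh kill, lengthening its stop grace period usually avoids that; a sketch assuming the container is named plex:

# docker stop sends SIGTERM first and only escalates to SIGKILL after the timeout,
# so give PMS more time to close the database cleanly
docker stop -t 120 plex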
I know folks do it, but I see no benefit to Docker on Linux when a native Linux install is just as portable, especially between different distros or even different Linux-based NAS boxes.
Retagging for Docker. It’s one thing I can’t get my head around and am unable to support.
Some folks try to run Plex in Docker on Windows. Total nightmare.
I’m having the same issues as the other people in this thread.
My library is pretty small compared to some others here: fewer than 20,000 items, mostly MP3s.
The server is a Xeon E3-1220L at 2.2 GHz (2 cores, 4 threads) with 12 GB RAM, running Ubuntu 18.04.2.
I’ve had this problem for some weeks now, and updating didn’t help…
From what I see, this happens when the scanner is running and looking for new items. That took only a few seconds in earlier versions; now it runs forever and locks a CPU thread at 100% the whole time.
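For reference, this is roughly how I watch which thread is spinning (a generic ps invocation on a native Linux install; adjust the filter if your processes are named differently):

# show the busiest threads, filtered down to Plex processes
ps -eLo pid,tid,pcpu,stat,comm --sort=-pcpu | grep -i plex | head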
Is there any working solution for this?
After reading that thread I’ve come to the conclusion that
a) no one is willing to fix it and
b) the only way to solve it is going back to 1.14?
My server is currently running most of the time, but I have disabled automatic scans for new media completely. The "Held transaction for too long (…/Statistics/StatisticsMedia.cpp:29)" warning still occurs all the time.
The only way of getting rid of these warnings is to downgrade to 1.14; I’ve tried every version since.
I’m currently on 1.17.0.1841 and the logs are full of these. It doesn’t seem to affect my performance, but I do get a few “slow query” entries here and there, and have since 1.14. @ChuckPa said in another thread that these lock warnings aren’t an issue, but that you might want to post your logs.
(I’m running Plex on Windows. Be careful: if you’re not running Docker, you might need a permit from the Queen of England and the Pope to post in a thread @ChuckPa has loldockered.)
It does help us (staff) to keep everything in its place. Referencing another thread in a “Like this: …link here.” fashion really helps as it shows the scope of the issue (platform specific or not).
As for backing down to 1.14 to make it go away: I don’t believe those statistics were in the 1.14.x builds. They were introduced, AFAIK, in 1.15.x.
Since they relate to database contention, two things come immediately to mind which should be investigated:
Is a lot of media being scanned, updated, or added when it occurs?
Is the system under load with transcoding, and are media operations (to a lesser extent) occurring?
If we can pinpoint the root cause and recreate it repeatedly, maybe we can help engineering get to the lines of code which need adjusting.
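A quick way to quantify it from your side (the file name below is the main PMS log; its location varies by platform, so adjust the path):

# how many of these warnings are in the current log?
grep -c "Held transaction for too long" "Plex Media Server.log"

# and when do they happen? Compare the timestamps against scan/transcode activity.
grep "Held transaction for too long" "Plex Media Server.log" | tail -n 20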
In my case it’s just a steady stream of these warnings. I went from 1.14 to 1.17 today and right from the start I get these with no scan running and no media being updated or added. No activity from any clients, pretty much no load at all. Tonight, when family members stream and there is ‘some’ load (I’ve got a P2000), I still get the same steady stream of warnings every minute, with the occasional burst of 5-6 warnings within a few seconds. It doesn’t seem to make the performance any worse; they’re just annoying. (A few versions back I thought this affected performance, but it turned out to be the srt->ass bug.) 1.17 seems to be working OK-ish, except for all these warnings of course.