Held transaction for too long (../Statistics/StatisticsManager.cpp:253)

  1. 130382
  2. 114849
  3. Error: no such table: metadata_items_views
  4. 11046
  5. 130470

Not sure if for 3 you meant ‘metadata_item_views’ or not; if so, the count for that is 92067. Interestingly, when I try to perform a manual VACUUM on the SQLite database, I get the following:

Error: no such collation sequence: naturalsort

Perhaps there’s an error in the DB that’s been preventing the automated optimization from being successful?

It might be.
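
‘naturalsort’ is a custom collation that Plex’s own SQLite build registers at runtime, so the stock sqlite3 shell can’t rebuild indexes that use it during a VACUUM. A minimal sketch (assuming the stock shell and the standard DB filename) to list the indexes involved — the same ones that need dropping before the export / reimport described below:

sudo sqlite3 com.plexapp.plugins.library.db "SELECT name FROM sqlite_master WHERE type='index' AND sql LIKE '%naturalsort%';"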

[chuck@lizum Databases.236]$ sudo sqlite3 com.plexapp.plugins.library.db
SQLite version 3.22.0 2018-01-22 18:45:57
Enter ".help" for usage hints.
sqlite> select count(*) metadata_items_views;
1
sqlite> 
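
Note that the query above has no FROM clause, so SQLite just returns a single row with ‘metadata_items_views’ as the column alias; the 1 isn’t a row count. A corrected form, assuming the singular table name mentioned above, would be:

sqlite> select count(*) from metadata_item_views;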

If you’d like, you can also copy the DB (Plex not running, of course)
and export / reimport it. That will ensure all records in each table are properly packed.

The general procedure is to:

  1. Manually drop the naturalsort related indexes
  2. .dump the DB to a .sql file ( echo .dump | sqlite3 com.plexapp.plugins.library.db > file.sql )
  3. Rename the db
  4. sudo sqlite3 com.plexapp.plugins.library.db < file.sql
  5. sudo chown plex:plex com.plexapp.plugins.library.db

(Verify this… make sure the resultant file permissions are the same.)
Also, if you do this to a test copy… you can see how much compaction really occurs. You’ll be able to directly measure how much space was lost to SQLite3 overhead from additions / deletions, or tied up because it couldn’t optimize.
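
Put together as a single shell sketch (assuming the Databases directory above, PMS stopped, and the naturalsort indexes already dropped; the exact index names vary by schema version, so check sqlite_master first):

# export everything to plain SQL
echo .dump | sqlite3 com.plexapp.plugins.library.db > file.sql
# keep the original for comparison
sudo mv com.plexapp.plugins.library.db com.plexapp.plugins.library.db.original
# rebuild a fresh, packed DB from the dump
sudo sqlite3 com.plexapp.plugins.library.db < file.sql
# match the original ownership
sudo chown plex:plex com.plexapp.plugins.library.db
# compare sizes to see how much compaction occurred
ls -l com.plexapp.plugins.library.db*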

https://support.plex.tv/articles/201100678-repair-a-corrupt-database/

-rw-r--r-- 1   1000  1000 661524480 Jul  9 23:51 com.plexapp.plugins.library.db
-rw-r--r-- 1   1000  1000 730649600 Jul  9 23:46 com.plexapp.plugins.library.db.original

I’ll fire it up and see how it’s going come morning…

…and it was non-responsive to the UI / showing offline again this morning. Should we be concerned that the WAL file is nearly the size of the database itself?

-rw-r--r-- 1 1000 1000 672100352 Jul 10 08:28 com.plexapp.plugins.library.db
-rw-r--r-- 1 1000 1000     32768 Jul 10 08:28 com.plexapp.plugins.library.db-shm
-rw-r--r-- 1 1000 1000 669479432 Jul 10 08:28 com.plexapp.plugins.library.db-wal

Dumping everything to a .SQL file eliminates the need for the .WAL and .SHM.

My apologies for forgetting to tell you it’s OK to remove them after you’ve exported (.dump).

.wal and .shm hold the pent-up transactions within SQLite3 itself. If these are this size:

  1. a lot of ungraceful shutdowns must have occurred - these are in-operation transaction files
  2. they are normally removed by the SQLite3 library itself as PMS shuts down (signal 15)

Addendum to the above:

SQLite3 will check the WAL / SHM files before reading from the main DB, which is a good reason to commit everything permanently to the DB with the .dump and purge them afterward.
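
If the goal is only to fold the WAL contents back into the main DB without a full dump / reimport, SQLite can also be told to checkpoint explicitly. A sketch, assuming PMS is stopped first:

sudo sqlite3 com.plexapp.plugins.library.db "PRAGMA wal_checkpoint(TRUNCATE);"
# TRUNCATE mode writes the pending pages into the main DB and resets the -wal file to zero length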

Ah, so, that listing with the shm and the huge wal was from this morning, after letting Plex run overnight, after I’d done the .dump, wipe, and reimport of the .sql file… The files don’t appear to get removed when I stop the Docker container, though…

Ah. Docker = :imp: lol

If PMS is sent a signal 15 (SIGTERM), it will stop nicely.
If it’s aborted harshly (killed outright), the WAL / SHM files will remain.
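
With Docker specifically, ‘docker stop’ sends SIGTERM first and only escalates to SIGKILL after a timeout (10 seconds by default), so giving PMS more time to shut down cleanly should let SQLite checkpoint and clean up the -wal / -shm files. A sketch, assuming the container is simply named ‘plex’:

docker stop -t 120 plex   # SIGTERM, then wait up to 120 s before resorting to SIGKILL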

I know folks do it, but I see no benefit to Docker on Linux when the native Linux install is already portable, especially between different distros or even different Linux-based NAS boxes.

Retagging for Docker. It’s one thing I can’t get my head around and am unable to support.

Some folks try to run Plex on Windows docker. Total nightmare.

Sorry.

I’m seeing several of these errors every second (StatisticsManager.cpp / StatisticsMedia.cpp).

The service seems to be running ok, but has been slow to start up playing a show recently.

I ended up resolving this by increasing the cache size of the SQLite DB; mine was set to -2000 for some reason.
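
For reference, the relevant SQLite pragmas (a sketch only; the exact pragma used above isn’t shown, and the 20000 below is a made-up example value):

sqlite> PRAGMA cache_size;                  -- per-connection; negative means KiB, so -2000 is roughly 2 MB
sqlite> PRAGMA default_cache_size;          -- deprecated, but stored in the DB file itself
sqlite> PRAGMA default_cache_size = 20000;  -- value is in pages; persists across restarts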


I’ve seen this solution somewhere else, along with the argument that “default_cache_size” is deprecated. Does this really still work?

But won’t the value in default_cache_size be lost after a restart?

No, it appears to persist after a reboot.

Hi. Same issue here on a Synology 918+ with the Plex DB on an SSD.
I can see 25% CPU load for hours.

I’m having the same issues as the other people in this thread.
My library is pretty small compared to some others here (< 20,000 items, mostly MP3).
The server is a Xeon E3-1220L at 2.2 GHz (2 cores, 4 threads) with 12 GB RAM, running Ubuntu 18.04.2.
I’ve had this problem for some weeks now, and updating didn’t help…
From what I see, this happens when the scanner is running and looking for new items. That took a few seconds in earlier versions; now it runs forever and locks a CPU thread at 100% the whole time.
Is there any working solution for this?

The last version that worked for me is 1.14. All versions after it have a bug with a DB deadlock.
See Held transaction for too long (../Statistics/StatisticsManager.cpp:29)


After reading that thread I come to the conclusion that
a) no one is willing to fix it, and
b) the only way to solve it is going back to 1.14?
My server is currently running most of the time, but I have disabled automatic scans for new media completely. The “Held transaction for too long (…/Statistics/StatisticsMedia.cpp:29)” still occurs all the time.

The only way of getting rid of these warnings is to downgrade to 1.14; I’ve tried every version since.
I’m currently on 1.17.0.1841 and the logs are full of these, but it does not seem to affect my performance. I do get a few “slow query” warnings here and there, and have since 1.14. @ChuckPa said in another thread that these lockdowns are not an issue, but you might want to post your logs.

(I’m running Plex on Windows. Be careful: if you’re not running Docker you might need a permit from the Queen of England and the Pope to post in a thread @ChuckPa has loldockered.)

I try not to be evil :cry:

:rofl:

It does help us (staff) keep everything in its place. Referencing another thread in a “Like this: …link here” fashion really helps, as it shows the scope of the issue (platform-specific or not).

As for backing down to 1.14 to make it go away: I don’t believe those statistics were in the 1.14.x builds. They were introduced, AFAIK, in 1.15.x.

Since they relate to database contention, two things come immediately to mind which should be investigated:

  1. Is a lot of media being scanned, updated, or added when it occurs?
  2. Is the system under load from transcoding, and are media operations (to a lesser extent) occurring?

If we can pinpoint the root cause and recreate repeatedly, maybe we can help engineering get to the lines of code which need adjusting.
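
One low-tech way to correlate them: grep the server log around the times the warnings appear and see whether a scan or heavy activity was running. A sketch, assuming the standard log filename (its location varies by platform):

grep -c "Held transaction for too long" "Plex Media Server.log"           # how many
grep "Held transaction for too long" "Plex Media Server.log" | tail -5    # when the latest ones happened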

In my case it’s just a steady stream of these warnings. I went from 1.14 to 1.17 today and right from the start I get these with no scan running and no media being updated or added. No activity from any clients. Pretty much no load at all. Tonight, when family members stream and there is ‘some’ load (I’ve got a P2000), I still get the same steady stream of warnings every minute, with the occasional burst of 5-6 warnings within a few seconds. It doesn’t seem to make the performance any worse; they are just annoying. (A few versions back I thought this affected performance, but it turned out to be the srt->ass bug.) 1.17 seems to be working OK-ish, except for all these warnings of course.