Database Cache Size Behavior Question

Plex Media Server 1.30.1.6483 has added DatabaseCacheSize, a lovely holiday treat.

Can Plex folks explain the internal behavior of this setting?

Previously, Plex’s SQLite cache_size was 2000 pages, and the page_size of Plex’s template database is 1024 bytes. The end result was a SQLite cache of about 2 MB.

This new setting defaults to 40MB, a significant increase. Yay!

Internally, is Plex setting this in megabytes (negative SQLite PRAGMA cache_size value) or is it still using a multiplier based on the page_size?

I ask because I’ve converted my databases to use a larger page_size.
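For reference, the two forms of PRAGMA cache_size behave like this: a positive value is a page count (so the byte size depends on page_size), while a negative value is a size in KiB regardless of page_size. A minimal illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA page_size = 1024")

# A positive cache_size is a page count: 2000 pages * 1024 B = ~2 MB.
conn.execute("PRAGMA cache_size = 2000")
print(conn.execute("PRAGMA cache_size").fetchone()[0])   # 2000

# A negative cache_size is in KiB, independent of page_size:
# -40000 asks for ~40 MB no matter what the page size is.
conn.execute("PRAGMA cache_size = -40000")
print(conn.execute("PRAGMA cache_size").fetchone()[0])   # -40000
```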

Either way, I would encourage Plex to adjust the page_size of the template DB file as well. It makes a huge performance difference. The SQLite default was raised from 1024 to 4096 a long time ago. I think Plex’s template file was simply created a very long time ago.

Thanks!


Close but missing a key detail. That’s the cache size per connection and there are 20 of those. So no change in the default.

It reads the page_size and adjusts the cache_size accordingly.
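A hypothetical sketch of that arithmetic (the function name and the division across 20 connections are assumptions pieced together from this thread, not Plex’s actual code): reading page_size lets the total byte budget be converted into a per-connection page count.

```python
def per_connection_cache_pages(cache_mb: int, page_size: int,
                               connections: int = 20) -> int:
    """Hypothetical sketch: convert a DatabaseCacheSize setting (MB)
    into a positive per-connection PRAGMA cache_size (pages), scaled
    by the database's actual page_size."""
    per_conn_bytes = (cache_mb * 1024 * 1024) // connections
    return per_conn_bytes // page_size

# 40 MB across 20 connections is 2 MB each: 2048 pages at 1 KiB pages,
# or 512 pages at 4 KiB pages.
print(per_connection_cache_pages(40, 1024))   # 2048
print(per_connection_cache_pages(40, 4096))   # 512
```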

Not going to happen. If you properly changed the page_size, you know the steps involved and that there are a few stages where an abrupt termination of the app (crash, kernel panic, power loss, etc.) will completely destroy the database. That is an unacceptable risk, and given the following, one not worth taking.
In our internal experiments, using a page_size of 4096 actually results in worse performance (when the cache size was adjusted to be the same number of bytes). Likely the performance change you were seeing is not the result of the change in page size but rather the related change in the size of the cache.


Thanks for the clarification, it is greatly appreciated.

When SQLite changed the default page_size from 1024 to 4096, they also changed the default cache_size from 2000 (pages) to -2000 (kilobytes), fixing it at the same 2MB size.

So it’s surprising to me that you saw worse performance. The benefit of fewer I/O operations and better alignment with filesystem and hardware block sizes should be substantial, even without a larger cache.


Is Plex dividing DatabaseCacheSize (40MB default) by 20 connections, and setting cache_size to a negative value, to ensure that it’s a specific number of bytes?

Or is it now setting cache_size to a positive value, based on the assumed page_size of 1024?


I wasn’t suggesting that Plex migrate the page_size of existing deployments! Only the template DB for new deployments.


The Plex support doc on repairing a corrupt database, and @ChuckPa’s script for checking/rebuilding a database, both produce new database files with the SQLite default page_size of 4096.

I mention that to make sure expectations are aligned.

While I still doubt that page_size of 1024 is slower than 4096, if you have evidence that shows 1024 is faster … then the support document and @ChuckPa’s script aren’t in alignment with that.
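As context for why the support doc and script end up with 4096-byte pages: on an existing SQLite file, a new page_size only takes effect once the whole file is rewritten (e.g. via VACUUM), which is the risky window mentioned earlier. A minimal sketch (file path is illustrative):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "example.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit, so VACUUM can run

conn.execute("PRAGMA page_size = 1024")   # effective: the file has no pages yet
conn.execute("CREATE TABLE media (id INTEGER PRIMARY KEY, title TEXT)")
print(conn.execute("PRAGMA page_size").fetchone()[0])   # 1024

# Once pages exist, a new page_size stays pending until VACUUM
# rewrites the entire file at the new size:
conn.execute("PRAGMA page_size = 4096")
conn.execute("VACUUM")
print(conn.execute("PRAGMA page_size").fetchone()[0])   # 4096
```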


Oh!

Again, thank you for the detailed information.

Can you share any more information about how Plex uses multiple database connections?

Within a Plex process, how are transactions distributed across the connection pool?

Is the first available connection used?
Or are transactions distributed round-robin across those connections?


Some folks are already setting DatabaseCacheSize to very large values, like 1GB, which doesn’t seem productive to me.

But they aren’t seeing an equally large increase in memory use.

If the first available connection is used, and the last connections are actually rarely used, that might explain why they haven’t seen such large increases in memory consumption.
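Purely as a sketch of that "first available" hypothesis (this is invented for illustration, not Plex’s implementation; the class name and sizes are assumptions), a LIFO pool hands back the most recently released connection, so under light load the tail of the pool is rarely touched and its caches never fill:

```python
import queue
import sqlite3

class FirstAvailablePool:
    # Hypothetical illustration: a LIFO queue means acquire() returns
    # the most recently released connection, so the last connections in
    # a 20-deep pool may almost never be used and their page caches
    # stay empty even with a very large cache_size configured.
    def __init__(self, path: str, size: int = 20, cache_kib: int = 2048):
        self._pool: queue.LifoQueue = queue.LifoQueue()
        for _ in range(size):
            conn = sqlite3.connect(path, check_same_thread=False)
            conn.execute(f"PRAGMA cache_size = -{cache_kib}")
            self._pool.put(conn)

    def acquire(self) -> sqlite3.Connection:
        return self._pool.get()

    def release(self, conn: sqlite3.Connection) -> None:
        self._pool.put(conn)

pool = FirstAvailablePool(":memory:", size=3)
first = pool.acquire()
pool.release(first)
# The same connection comes straight back; the other two stay cold.
print(pool.acquire() is first)   # True
```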

Using the sqlite speedtest1 benchmark utility, I see a big improvement with page_size 4096 even when cache_size is reduced.

I know this won’t align exactly with everything Plex does, I just wanted to share because your comments are very different from my experience in other environments.

/usr/local/src/sqlite $ rm testfile.db ; ./speedtest1 --size 200 --journal WAL --pagesize 1024 --cachesize 2000 testfile.db
-- Speedtest1 for SQLite 3.33.0 2020-08-14 13:23:32 fca8dc8b578f215a969cd8993363
 100 - 100000 INSERTs into table with no index.....................    0.344s
 110 - 100000 ordered INSERTS with one index/PK....................    0.477s
 120 - 100000 unordered INSERTS with one index/PK..................    0.534s
 130 - 25 SELECTS, numeric BETWEEN, unindexed......................    0.830s
 140 - 10 SELECTS, LIKE, unindexed.................................    0.615s
 142 - 10 SELECTS w/ORDER BY, unindexed............................    1.006s
 145 - 10 SELECTS w/ORDER BY and LIMIT, unindexed..................    0.545s
 150 - CREATE INDEX five times.....................................    1.388s
 160 - 20000 SELECTS, numeric BETWEEN, indexed.....................    0.506s
 161 - 20000 SELECTS, numeric BETWEEN, PK..........................    0.515s
 170 - 20000 SELECTS, text BETWEEN, indexed........................    1.813s
 180 - 100000 INSERTS with three indexes...........................    3.031s
 190 - DELETE and REFILL one table.................................    3.100s
 200 - VACUUM......................................................    3.356s
 210 - ALTER TABLE ADD COLUMN, and query...........................    0.035s
 230 - 20000 UPDATES, numeric BETWEEN, indexed.....................    0.860s
 240 - 100000 UPDATES of individual rows...........................    1.808s
 250 - One big UPDATE of the whole 100000-row table................    0.274s
 260 - Query added column after filling............................    0.040s
 270 - 20000 DELETEs, numeric BETWEEN, indexed.....................    2.874s
 280 - 100000 DELETEs of individual rows...........................    2.711s
 290 - Refill two 100000-row tables using REPLACE..................    4.313s
 300 - Refill a 100000-row table using (b&1)==(a&1)................    2.139s
 310 - 20000 four-ways joins.......................................    3.756s
 320 - subquery in result set......................................    7.260s
 400 - 140000 REPLACE ops on an IPK................................    2.020s
 410 - 140000 SELECTS on an IPK....................................    1.573s
 500 - 140000 REPLACE on TEXT PK...................................    1.509s
 510 - 140000 SELECTS on a TEXT PK.................................    1.621s
 520 - 140000 SELECT DISTINCT......................................    0.596s
 980 - PRAGMA integrity_check......................................    3.598s
 990 - ANALYZE.....................................................    0.342s
       TOTAL.......................................................   55.389s

Vs:

/usr/local/src/sqlite $ rm testfile.db ; ./speedtest1 --size 200 --journal WAL --pagesize 4096 --cachesize 500 testfile.db
-- Speedtest1 for SQLite 3.33.0 2020-08-14 13:23:32 fca8dc8b578f215a969cd8993363
 100 - 100000 INSERTs into table with no index.....................    0.207s
 110 - 100000 ordered INSERTS with one index/PK....................    0.297s
 120 - 100000 unordered INSERTS with one index/PK..................    0.370s
 130 - 25 SELECTS, numeric BETWEEN, unindexed......................    0.371s
 140 - 10 SELECTS, LIKE, unindexed.................................    0.412s
 142 - 10 SELECTS w/ORDER BY, unindexed............................    0.723s
 145 - 10 SELECTS w/ORDER BY and LIMIT, unindexed..................    0.343s
 150 - CREATE INDEX five times.....................................    0.649s
 160 - 20000 SELECTS, numeric BETWEEN, indexed.....................    0.406s
 161 - 20000 SELECTS, numeric BETWEEN, PK..........................    0.414s
 170 - 20000 SELECTS, text BETWEEN, indexed........................    2.008s
 180 - 100000 INSERTS with three indexes...........................    2.118s
 190 - DELETE and REFILL one table.................................    2.161s
 200 - VACUUM......................................................    1.107s
 210 - ALTER TABLE ADD COLUMN, and query...........................    0.016s
 230 - 20000 UPDATES, numeric BETWEEN, indexed.....................    0.562s
 240 - 100000 UPDATES of individual rows...........................    1.615s
 250 - One big UPDATE of the whole 100000-row table................    0.097s
 260 - Query added column after filling............................    0.017s
 270 - 20000 DELETEs, numeric BETWEEN, indexed.....................    2.562s
 280 - 100000 DELETEs of individual rows...........................    2.371s
 290 - Refill two 100000-row tables using REPLACE..................    2.851s
 300 - Refill a 100000-row table using (b&1)==(a&1)................    1.176s
 310 - 20000 four-ways joins.......................................    3.940s
 320 - subquery in result set......................................    3.175s
 400 - 140000 REPLACE ops on an IPK................................    1.978s
 410 - 140000 SELECTS on an IPK....................................    1.732s
 500 - 140000 REPLACE on TEXT PK...................................    1.469s
 510 - 140000 SELECTS on a TEXT PK.................................    1.597s
 520 - 140000 SELECT DISTINCT......................................    0.551s
 980 - PRAGMA integrity_check......................................    3.359s
 990 - ANALYZE.....................................................    0.193s
       TOTAL.......................................................   40.847s

You’ve made me regret replying in this topic. I’m not going to answer the myriad of questions you’ve asked because it’s just far too many and a lot are very complicated to answer.

Benchmarks are extremely artificial which is why the performance testing was done inside PMS itself. This was done with both a small test database and a large production database (hundreds of MB) and the performance was better with a 1k page size.


And a few more for comparison.

Again I acknowledge that this isn’t the same as being inside Plex. 🙂

This is all speedtest1, new files each time, varying the page_size and cache_size only.

# Variants of this command:
rm testfile.db ; ./speedtest1 --size 200 --journal WAL --pagesize 1024 --cachesize 2000 testfile.db | grep TOTAL
1024, 2000 = 2048000
       TOTAL.......................................................   55.506s
1024, 4000 = 4096000
       TOTAL.......................................................   37.873s
1024, 8000 = 8192000
       TOTAL.......................................................   28.760s
1024, 16000 = 16384000
       TOTAL.......................................................   25.225s
1024, 32000 = 32768000
       TOTAL.......................................................   23.631s


4096, 500 = 2048000
       TOTAL.......................................................   40.840s
4096, 1000 = 4096000
       TOTAL.......................................................   26.579s
4096, 2000 = 8192000
       TOTAL.......................................................   19.361s
4096, 4000 = 16384000
       TOTAL.......................................................   16.785s
4096, 8000 = 32768000
       TOTAL.......................................................   16.000s


8192, 250 = 2048000
       TOTAL.......................................................   43.045s
8192, 500 = 4096000
       TOTAL.......................................................   27.014s
8192, 1000 = 8192000
       TOTAL.......................................................   18.549s
8192, 2000 = 16384000
       TOTAL.......................................................   15.743s
8192, 4000 = 32768000
       TOTAL.......................................................   14.958s


16384, 125 = 2048000
       TOTAL.......................................................   50.152s
16384, 250 = 4096000
       TOTAL.......................................................   30.167s
16384, 500 = 8192000
       TOTAL.......................................................   19.542s
16384, 1000 = 16384000
       TOTAL.......................................................   15.579s
16384, 2000 = 32768000
       TOTAL.......................................................   14.589s


32768, 62 = 2031616
       TOTAL.......................................................   68.670s
32768, 125 = 4096000
       TOTAL.......................................................   39.802s
32768, 250 = 8192000
       TOTAL.......................................................   22.690s
32768, 500 = 16384000
       TOTAL.......................................................   16.824s
32768, 1000 = 32768000
       TOTAL.......................................................   15.124s
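The "page_size, pages = bytes" pairings in the runs above are just page_size × cache_size; a quick check of the arithmetic:

```python
def total_cache_bytes(page_size: int, cache_pages: int) -> int:
    # total cache in bytes for a positive PRAGMA cache_size value
    return page_size * cache_pages

print(total_cache_bytes(1024, 2000))   # 2048000
print(total_cache_bytes(4096, 500))    # 2048000
print(total_cache_bytes(32768, 62))    # 2031616 (62 pages falls just short of 2 MB)
```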

I observe the same catastrophe you mentioned. Very large page_size values, with constrained total cache_size, behave very poorly indeed.

But moving from a page_size of 1024 to 4096, while keeping the same total cache_size in bytes, is a big win, at least in synthetic tests.

I see your last comment. Please know how much I appreciate you sharing the information you’ve shared.

I’ve looked at the referenced link; however, I’m unsure where to find the @ChuckPa script mentioned in @Volts’ post.

Also, is a 501 MB .db considered large? I am running PMS Version 1.30.1.6562. Am I likely to benefit from “Added an advanced preference for DB cache size”? If so, any suggestions as to cache size? I’m on Ubuntu 18.04 with 64 GB of memory; typically, only 18 GB is in use.

@rkv

A 500 MB com.plexapp.plugins.library.db is not large. It’s average.

We’ve not found any reason to adjust page_size to anything OTHER THAN 4096.

On BSD and Linux, the default memory page size is 4096 bytes so it’s the most efficient.

If you’re looking for the script/tool itself, you’ll find it here: GitHub - ChuckPa/PlexDBRepair: Database repair utility for Plex Media Server databases


Thanks. It worked like a charm. PMS definitely seems a bit more snappy after database rebuild and with much larger cache.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.