Database size increasing significantly above the ~100MB expectation

Assuming Plug-ins = Channels, I have none.
If Plug-ins = outside sources like monitoring, I can do that.

Plug-ins = “Channels”. Channels is the old name because they originally were “streaming media channels”. Now they’re used to augment PMS features. The name is changing … slowly. :slight_smile:

I’d like to increase your log file retention a bit. It will help with debugging.

Stop PMS
Edit Plex Media Server/Preferences.xml
Add LogNumFiles="10" inside the single <Preferences ... /> element, just before the closing />
Start PMS
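
On a typical Linux install that looks roughly like this; the paths, editor, and service name below are assumptions for the stock package, so adjust them for your setup:

sudo systemctl stop plexmediaserver
cd "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server"
sudo cp Preferences.xml Preferences.xml.bak   # keep a backup of the file
sudo nano Preferences.xml                     # add LogNumFiles="10" inside the <Preferences ... /> element
sudo systemctl start plexmediaserver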

Also please enable Debug logging. We need to see what this is:

Mar 02, 2018 06:25:45.200 [0x7f77283ff700] ERROR - Soci Exception handled: sqlite3_statement_backend::prepare: near ",": syntax error for SQL: select upper(substr((select, 1, 1)) as c,count(distinct metadata_items.id) from metadata_items left join metadata_item_settings on metadata_item_settings.guid=metadata_items.guid and metadata_item_settings.account_id= 1 where metadata_items.library_section_id=? and metadata_items.library_section_id in (3) and metadata_items.metadata_type=2 group by c order by c

Log retention increased and Debug re-enabled. Also, fingers crossed.

Just scrolled through the “Movies” library slowly, so that all posters loaded before moving on to the next few rows, and made the DB grow by 2MB… trivial… but that’s one pass through it. Attached tgz.

Not borked logs -,-

No Soci error trap in that set.

Make it grow more / do whatever you do.

Run grep -i soci *.log and look for the SQL error like the one you previously found.
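
For example, from the server’s Logs directory (path assumed for a stock Linux install):

cd "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Logs"
grep -i soci *.log   # any hit should include the failing SQL statement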

Yeah, I still have that running. Repetitions of the same steps as before don’t cause it to grow more. I’m browsing the library from my Shield, my phone, and the webUI. It’s also late here, so not much foot traffic on the server.

Clear their caches. They’re running on the cached images now and not hitting the DB.

Hallelujah, you were right. It triggered the exception. Running a watch du -h on com.plexapp.plugins.library.db during the mass mark-all-unwatched caused the exception and a gain of 7MB, but marking them all watched did absolutely nothing for database growth. That should help you find WHEN it happens. Attached is the tgz of the logs with 2 occurrences of the Soci exception.
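
For reference, the watch was roughly this one-liner; the database path is the stock Linux location, so adjust it if yours differs:

watch -n 2 'du -h "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db"'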

Great. Logs attached to bug report.

If you added 7MB for one hit, what punched it up to 10 times its original size? Must have been a lot of state-setting through the API.

While it shouldn’t grow much, if at all, this is a lead-in to the bigger issue.

If you can make it jump by 50 MB, that’s substantial.

So we have the file/folder structure counts:

Top Level:

nick@san2:/mnt$ ls
lost+found  movies  movies4k  tv

Directories:

nick@san2:/mnt$ find . -type d | wc -l
4032

Files:

nick@san2:/mnt$ find . -type f | wc -l
21838

Extra eyes on the issue point to something hitting the backend at an insane rate.

Mar 09, 2018 23:58:22.157 [0x7fa014bff700] DEBUG - Completed: [10.0.10.3:52058] 200 GET /:/scrobble?identifier=com.plexapp.plugins.library&key=20282 (40 live) TLS GZIP 527ms 268 bytes (pipelined: 93)
Mar 09, 2018 23:58:22.310 [0x7fa0143fe700] DEBUG - Completed: [10.0.10.3:52059] 200 GET /:/scrobble?identifier=com.plexapp.plugins.library&key=20370 (40 live) TLS GZIP 107ms 268 bytes (pipelined: 94)
Mar 09, 2018 23:58:23.085 [0x7fa0143fe700] DEBUG - Completed: [10.0.10.3:52095] 200 GET /:/scrobble?identifier=com.plexapp.plugins.library&key=18547 (40 live) TLS GZIP 728ms 268 bytes (pipelined: 34)
Mar 09, 2018 23:58:23.264 [0x7fa0143fe700] DEBUG - Completed: [10.0.10.3:52058] 200 GET /:/scrobble?identifier=com.plexapp.plugins.library&key=20623 (40 live) TLS GZIP 132ms 268 bytes (pipelined: 94)
Mar 09, 2018 23:58:23.575 [0x7fa0143fe700] DEBUG - Completed: [10.0.10.3:52059] 200 GET /:/scrobble?identifier=com.plexapp.plugins.library&key=20389 (40 live) TLS GZIP 264ms 268 bytes (pipelined: 95)
Mar 09, 2018 23:58:24.073 [0x7fa014bff700] DEBUG - Completed: [10.0.10.3:52095] 200 GET /:/scrobble?identifier=com.plexapp.plugins.library&key=18263 (40 live) TLS GZIP 448ms 268 bytes (pipelined: 35)
Mar 09, 2018 23:58:24.783 [0x7fa0143fe700] DEBUG - Completed: [10.0.10.3:52058] 200 GET /:/scrobble?identifier=com.plexapp.plugins.library&key=20646 (42 live) TLS GZIP 663ms 268 bytes (pipelined: 95)
Mar 09, 2018 23:58:24.906 [0x7fa014bff700] DEBUG - Completed: [10.0.10.3:52059] 200 GET /:/scrobble?identifier=com.plexapp.plugins.library&key=20757 (42 live) TLS GZIP 76ms 268 bytes (pipelined: 96)
Mar 09, 2018 23:58:25.103 [0x7fa014bff700] DEBUG - Completed: [10.0.10.3:52095] 200 GET /:/scrobble?identifier=com.plexapp.plugins.library&key=20432 (41 live) TLS GZIP 149ms 268 bytes (pipelined: 36)
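
A quick way to quantify that rate from the Logs directory (same assumed path as before):

cd "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Logs"
grep -c 'GET /:/scrobble' "Plex Media Server.log"   # count the scrobble hits in the current log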

The syntax error in the logs, while a regression, is sort-filter based. A separate issue has been created for that.

The magic incantation which recreates the explosive growth still eludes identification.
We can only ask that you keep your thinking hat on and see what it is.

Tautulli?

You did disable/remove plug-ins as part of this effort. What is hammering the backend above?
If that element is removed and one of the other, now-inactive, elements/plug-ins is brought back into play, can the process of elimination advance?

10.0.10.3 is my desktop where I marked everything watched/unwatched from the webUI. I had it sorted by unwatched so that when I clicked through all the shows I wouldn’t click the couple I haven’t finished yet.

And if you remember, we decided to keep the monitoring on, as we wanted to keep the environment as close to normal as possible when trying to invoke the error.

Ok, all good.

As I remember (correct me if I’m wrong), we minimized everything down as a first step. Since that didn’t produce the expected repeat failure, I have to ask: is the configuration and operation of everything now as it was prior to the explosive growth?

The sequence I thought we did here was:

  1. stop
  2. export DB
  3. re-import into fresh
  4. observe massive size reduction

Correct?
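
For clarity, a rough command-line sketch of steps 2–3 (the file names are placeholders and it assumes a stock sqlite3 binary with PMS stopped, not necessarily what we actually ran):

cd "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
sqlite3 com.plexapp.plugins.library.db ".dump" > library_dump.sql   # export to an ASCII SQL file
sqlite3 fresh.db < library_dump.sql                                 # re-import into a fresh database
ls -lh com.plexapp.plugins.library.db fresh.db                      # compare the sizes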

If so, since you know DBs better than I, is it possible for SQLite to leave uncompleted tasks ‘floating’ in the file and for those temporary spaces not to get exported to the ASCII file during a .dump operation?

It’s actually doing MORE than it was initially, as I added Trakt so that I would have a plug-in to add strain to the server. An export/import did nothing to change the DB size, because the size was in a valid table: the metadata table. This isn’t a shadow-table or hidden-property issue (I don’t believe, at least), because the way we massively reduced the size was by clearing out all but 10 of the valid metadata table rows, then vacuuming to reclaim the empty “air pocket” (for lack of a better description) on disk.
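
In other words, the earlier cleanup was roughly along these lines (a sketch from memory; the table is metadata_item_views as in the queries below, and these are illustrative commands, not the exact ones we ran):

# with PMS stopped, from the Databases directory
sqlite3 com.plexapp.plugins.library.db <<'SQL'
-- keep only 10 rows of the view-history table, then reclaim the freed pages on disk
DELETE FROM metadata_item_views
 WHERE rowid NOT IN (SELECT rowid FROM metadata_item_views LIMIT 10);
VACUUM;
SQL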

You removed all but 10 entries from metadata_item_views, right?

That’s what I do not understand (not a developer).

Of those INSERT statements seen, how many of them, if any, were inserting duplicate / already-entered entries rather than updating existing values?

This is where my thinking is at: the server decides it must INSERT (new item) rather than UPDATE (existing) because, if I read the logic correctly, you should have at most one entry per media item (file) in your library.

Right. Since I still have a “bad” database backup, I can probably do a COUNT where column X is duplicated and see whether it finds duplicates and what is different/unique in each row?

(Tonight of course. SXSW continues haha)

To paste the prior information as stated:

sqlite> SELECT thumb_url, COUNT(*) c FROM metadata_item_views GROUP BY thumb_url HAVING c > 1;
metadata://seasons/9/episodes/7/thumbs/com.plexapp.agents.thetvdb_c4964525fd60a77af0147b7e93f84e9d011d7b45|8
metadata://seasons/9/episodes/8/thumbs/com.plexapp.agents.thetvdb_02a13383ecb94bdae17a10b32f7e2ce696228a5b|8
metadata://seasons/9/episodes/8/thumbs/com.plexapp.agents.thetvdb_34ab3385de9dfde9566311b87865664593289c08|8
metadata://seasons/9/episodes/8/thumbs/com.plexapp.agents.thetvdb_35b56b83fe352dccba324a6646e96aed55345ae8|8
metadata://seasons/9/episodes/8/thumbs/com.plexapp.agents.thetvdb_9074f6e008ed2f8c6cf7258977cedbd2cdd51b6c|8
metadata://seasons/9/episodes/8/thumbs/com.plexapp.agents.thetvdb_b22b530333ea51c0cd59bd52a38493c6afba52a7|8
metadata://seasons/9/episodes/8/thumbs/com.plexapp.agents.thetvdb_e4c71a26110f7b48a0fbc92cb39f585e6f269123|8
metadata://seasons/9/episodes/8/thumbs/com.plexapp.agents.thetvdb_f76802516c7e80fc62888a475f83fed8d4aefb64|8
metadata://seasons/9/episodes/9/thumbs/com.plexapp.agents.thetvdb_2ea7e8aec0d75e5844ada26667090cdd2c207d6e|8
metadata://seasons/9/episodes/9/thumbs/com.plexapp.agents.thetvdb_31521e466c39d5a98fd17b130d8ee740ace6b311|8

(that goes on for more than 1000 lines)

This really isn’t as helpful as I had originally thought, as the above is the only unique column of the table other than the key. This tells us there are duplicates but not WHY. We already knew there were duplicates.
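
One way to dig at the WHY (just a sketch; everything beyond the table name and the first thumb_url from the list above is assumed) is to pull every column of a single duplicated group and compare the rows side by side:

sqlite3 com.plexapp.plugins.library.db <<'SQL'
.mode line
-- show every column of one duplicated group so the differing values stand out
SELECT rowid, *
  FROM metadata_item_views
 WHERE thumb_url = 'metadata://seasons/9/episodes/7/thumbs/com.plexapp.agents.thetvdb_c4964525fd60a77af0147b7e93f84e9d011d7b45';
SQL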