DB locks and wait times

@Ryushin

I recommend creating a fresh server (Preserve the DB)

After creation, migrate the watch history.

I’ve not seen anyone with as many difficulties as you have, so I wonder if there is a corruption so deep that we’ll never find it. Possible?

With the amount of time I have invested in my Plex layout, creating a fresh install is not very feasible.

Thing is, I never had this kind of problem until the update came out. Leaving the “Scan library automatically” function off and just having it run periodically every 15 minutes seems to be stable.

Did anything show up in the logs or DB? I can PM you the raw database and logs if you want. I would have to turn the Scan Automatically feature back on, let it freak out, then PM you the files.

I’ve tried migrating watch history before, from one old DB to a brand-new fresh install, following your (Plex tutorials) instructions, and was SOL.
I had to resort to a Trakt import via a poorly maintained Trakt plugin.

@Ryushin

I might have something to help resolve this.

This is a sqlite3 query on a DB to see if it has the multiple-IVA record problem.

  1. Inspect
[chuck@lizum plexdb.2007]$ sqlite3 com.plexapp.plugins.library.db 
SQLite version 3.31.1 2020-01-27 19:55:54
Enter ".help" for usage hints.
sqlite> select count(metadata_item_id),metadata_item_id from metadata_relations group by metadata_item_id order by count(metadata_item_id) desc limit 20
   ...> ;
761863|57236
524292|56545
54|58237
51|56604
51|57096
47|59056
34|55911
33|57576
31|56926
31|57314
30|22739
30|57175
30|57590
30|57890
30|58039
28|56713
28|57832
27|318
27|56934
26|57660
sqlite>

Notice items 57236 and 56545 have the crazy counts. These are the problem movies.

  2. For each item which has a high count, query it directly
sqlite> select metadata_item_id, related_metadata_item_id, created_at from metadata_relations Where (metadata_item_id = '57236' ) order by created_at desc limit 20;
57236|1083694|2021-11-14 03:14:01
57236|1083695|2021-11-14 03:14:01
57236|1083696|2021-11-14 03:14:01
57236|1083697|2021-11-14 03:14:01
57236|1083698|2021-11-14 03:14:01
57236|1083699|2021-11-14 03:14:01
57236|1083700|2021-11-14 03:14:01
57236|1083701|2021-11-14 03:14:01
57236|1083702|2021-11-14 03:14:01
57236|1083703|2021-11-14 03:14:01
57236|1083704|2021-11-14 03:14:01
57236|1083705|2021-11-14 03:14:01
57236|1083706|2021-11-14 03:14:01
57236|1083707|2021-11-14 03:14:01
57236|1083708|2021-11-14 03:14:01
57236|1083709|2021-11-14 03:14:01
57236|1083710|2021-11-14 03:14:01
57236|1083711|2021-11-14 03:14:01
57236|1083712|2021-11-14 03:14:01
57236|1083713|2021-11-14 03:14:01
sqlite> select metadata_item_id, related_metadata_item_id, created_at from metadata_relations Where (metadata_item_id = '56545' ) order by created_at desc limit 20;
56545|1345875|2021-11-23 02:45:04
56545|1345876|2021-11-23 02:45:04
56545|1345877|2021-11-23 02:45:04
56545|1345878|2021-11-23 02:45:04
56545|1345879|2021-11-23 02:45:04
56545|1345880|2021-11-23 02:45:04
56545|1345881|2021-11-23 02:45:04
56545|1345882|2021-11-23 02:45:04
56545|1345883|2021-11-23 02:45:04
56545|1345884|2021-11-23 02:45:04
56545|1345885|2021-11-23 02:45:04
56545|1345886|2021-11-23 02:45:04
56545|1345887|2021-11-23 02:45:04
56545|1345888|2021-11-23 02:45:04
56545|1345889|2021-11-23 02:45:04
56545|1345890|2021-11-23 02:45:04
56545|1345891|2021-11-23 02:45:04
56545|1345892|2021-11-23 02:45:04
56545|1345893|2021-11-23 02:45:04
56545|1345894|2021-11-23 02:45:04
sqlite> 

This confirms the records were added multiple times during the same butler run.

If possible, try to identify which PMS version was installed at the time. We’re still working our way backwards looking for the individual change which caused it.
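The diagnostic queries above can be exercised end-to-end outside of Plex. Here is a minimal sketch using Python’s stdlib sqlite3 module against a throwaway in-memory table; the table mimics only the metadata_relations columns used here, and all IDs and counts are made up for illustration:

```python
import sqlite3

# Throwaway in-memory DB mimicking only the columns the query touches.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE metadata_relations ("
    "metadata_item_id INTEGER, related_metadata_item_id INTEGER, created_at TEXT)"
)

# One healthy item (3 relations) and one "ballooned" item (500 relations,
# all inserted with the same timestamp, matching the same-butler-run symptom).
conn.executemany(
    "INSERT INTO metadata_relations VALUES (?, ?, ?)",
    [(100, 10 + n, "2021-11-14 03:14:01") for n in range(3)]
    + [(200, 1000 + n, "2021-11-23 02:45:04") for n in range(500)],
)

# The same diagnostic query from the post: the ballooned item sorts first.
rows = conn.execute(
    "SELECT count(metadata_item_id), metadata_item_id FROM metadata_relations "
    "GROUP BY metadata_item_id ORDER BY count(metadata_item_id) DESC LIMIT 20"
).fetchall()
print(rows)  # item 200 appears first with its count of 500
```

Any item whose count is wildly out of proportion to its real number of extras is a candidate for the follow-up per-item query.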

  3. Cleanup

Put the following into a file and run it using Plex SQLite (you must use Plex’s SQLite tool, not stock sqlite3):

BEGIN TRANSACTION;

DELETE FROM metadata_items
WHERE guid LIKE '%api.internetvideoarchive%'
    AND id NOT IN (
        SELECT related_metadata_item_id 
        FROM metadata_relations
    )
;

DELETE FROM media_streams
WHERE media_item_id IN (
    SELECT media_streams.media_item_id 
    FROM media_streams
    INNER JOIN media_items ON media_streams.media_item_id=media_items.id
    WHERE media_items.metadata_item_id NOT IN (
        SELECT id 
        FROM metadata_items
    )
)
;

DELETE FROM media_parts 
WHERE media_item_id in (
    SELECT media_parts.media_item_id 
    FROM media_parts INNER JOIN media_items ON media_parts.media_item_id=media_items.id
    WHERE media_items.metadata_item_id NOT IN (
        SELECT id
        FROM metadata_items
    )
)
;

DELETE FROM media_items
WHERE metadata_item_id NOT IN (
    SELECT id
    FROM metadata_items
)
;

COMMIT;

VACUUM;

If you want to inject a select changes(); statement, after the COMMIT but prior to the VACUUM, to see how many records were deleted, that’s fine.
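One detail worth noting (my aside, not from the post above): SQLite’s changes() reports only the row count of the most recent statement, while total_changes() is cumulative for the connection. Python’s stdlib sqlite3 exposes the cumulative counter as Connection.total_changes, which makes the orphan-delete pattern from the cleanup script easy to demonstrate against a made-up two-table schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE metadata_items (id INTEGER PRIMARY KEY);
    CREATE TABLE media_items (id INTEGER PRIMARY KEY, metadata_item_id INTEGER);
    INSERT INTO metadata_items VALUES (1), (2);
    -- Two valid rows and three orphans pointing at a nonexistent metadata item.
    INSERT INTO media_items VALUES (10, 1), (11, 2), (12, 99), (13, 99), (14, 99);
""")

before = conn.total_changes
# Same shape as the cleanup script: delete rows whose parent no longer exists.
conn.execute(
    "DELETE FROM media_items "
    "WHERE metadata_item_id NOT IN (SELECT id FROM metadata_items)"
)
conn.commit()
print(conn.total_changes - before)  # 3 orphaned rows deleted
```

The same idea applies to each of the DELETE statements above: snapshot the counter before the transaction, subtract afterwards, and you know exactly how many rows were purged before the VACUUM reclaims the space.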


Thanks Chuck,

My database is now 267MB, though I went about getting it repaired a different way.

The SQL Query to look for multiple-IVA record problem:

select count(metadata_item_id),metadata_item_id from metadata_relations group by metadata_item_id order by count(metadata_item_id) desc limit 20;

114695|137623
98307|11999
57351|111941
63|112456
58|139966
58|139977
54|75439
51|139849
50|75368
50|75378
50|75379
50|77494
50|115766
50|140084
50|954968
50|955032
50|957052
50|957440
50|958860
50|959195

So I had 3 bad entries. I queried each item directly and confirmed that they had been added multiple times.

I saved the SQL script that you posted and ran it; the database size did not change and it finished in only a few seconds. I created a SQL script that let me specify the specific ID that was provided here:

But even that did not work. I found the query to identify which were the problem titles (it would have been better to actually get the file name):

sqlite> select id, title, guid, refreshed_at from metadata_items Where id = '137623';
137623|Logan|plex://movie/5d776abffb0d55001f54faa3|2021-10-06 00:12:22

sqlite> select id, title, guid, refreshed_at from metadata_items Where id = '11999';
11999|Alien - Directors Cut|plex://movie/5d7768254de0ee001fcc83a5|2021-11-29 06:35:16

sqlite> select id, title, guid, refreshed_at from metadata_items Where id = '111941';
111941|Logan|plex://movie/5d776abffb0d55001f54faa3|2021-09-26 23:33:46

Interestingly, the files that had issues were files that Plex had merged into a single entry that I had to split apart. Alien - Directors Cut and Alien - Theatrical Cut were merged into one item, as were Logan and Logan Noir. I have three different-sized versions of these movies: one native 4K HDR for local viewing, and streaming versions in 2K HDR and 720p (from a 1080p source).

I shut down PMS and pulled all the versions of Alien Theatrical, Alien Director’s Cut, Logan, and Logan Noir out of their respective libraries. I started PMS back up, then kicked off a scan of each library and waited for it to complete … until it hit the entry that was the problem, at which point the Activity spinner on the Web UI just kept spinning. Watching the log, though, was more encouraging, as I saw a constant stream of Deleting entries. Tailing the log and grepping just for Deleting, I saw about 3-10 entries per second being deleted. This ran for several hours.

When it stopped, I checked to see if those three bad entries were gone. The database was still about the same size. I ran that SQL script against the database and it shrank all the way down to 267MB from the 600-2000MB it had grown to.

So I think I’m good now. I’m not sure if the duplicate merged entries caused this problem. I’ve been installing the PMS Beta versions as they have been released, though it would be nice if there were a Stable channel instead of just Public.

May I see the DELETE FROM statements you used so I can try it on others’ databases?

If I can find the sequence to do it via SQL, I’ll craft some script-foo to construct SQL statements and perform the deletions with PMS stopped (which will be MUCH faster).

What I’m hoping to craft (you’ve seen my script-foo :wink: ) is

  1. Run the first query.
  2. Get the list of big hitters.
  3. Craft a SQL script which removes those records from the DB (PMS stopped).
  4. Vacuum the DB back down to a smaller size so the next (PMS) pass can finish the task if needed.
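The four steps above can be sketched in a few lines of Python (stdlib sqlite3 only). To be clear about assumptions: the table names come from this thread, but the 100-relation threshold and the delete-all-relations-for-flagged-items strategy are mine — deleting all of a flagged item’s relations also drops its legitimate extras, so the item would need a metadata refresh afterwards. Run it only against a backup copy with PMS stopped, never a live DB:

```python
import sqlite3

def trim_ballooned_relations(db_path: str, threshold: int = 100) -> int:
    """Delete metadata_relations rows for items whose relation count
    exceeds `threshold`, then VACUUM. Returns the number of rows deleted."""
    conn = sqlite3.connect(db_path)

    # Steps 1-2: run the first query and collect the big hitters.
    hitters = [
        item_id
        for count, item_id in conn.execute(
            "SELECT count(metadata_item_id), metadata_item_id "
            "FROM metadata_relations GROUP BY metadata_item_id "
            "ORDER BY count(metadata_item_id) DESC"
        )
        if count > threshold
    ]

    # Step 3: remove those records (PMS must be stopped).
    deleted = 0
    for item_id in hitters:
        cur = conn.execute(
            "DELETE FROM metadata_relations WHERE metadata_item_id = ?",
            (item_id,),
        )
        deleted += cur.rowcount
    conn.commit()

    # Step 4: shrink the file back down.
    conn.execute("VACUUM")
    conn.close()
    return deleted
```

Usage would be something like `trim_ballooned_relations("/path/to/copy-of-com.plexapp.plugins.library.db")`, checking the returned count against what the query script reported before trusting the result.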

My com.plexapp.plugins.library.blobs.db is 670 MB, and my com.plexapp.plugins.library.db is 199 MB; I guess that counts as big?

I’ve been debugging:
QueryParser: Invalid field ‘pinnedContentDirectoryID’ found, ignoring.
QueryParser: Invalid field ‘contentDirectoryID’ found, ignoring.
Exception caught determining whether we could skip ‘The Day the Earth Stood Still (1951) [720p]’ ~ Null value not allowed for this type

etc., and found my way here. I’m guessing I am experiencing a janked DB issue. I’m going to check it with sqlite soon, even though I just re-did my whole DB this week on a clean install; I’ll check and try the cleanup solution.

Edit: I just saw Ryu talking about titles that Plex combined together, and I recall when I set the server up again I had to split apart a few movies it combined (some that had nothing to do with one another, which was weird), though not nearly as many as seem to have issues in the log (maybe about 20 titles out of 1,500).

I’ll wait for this script you’re working on to run against mine and see what happens.

Folks, for those using Linux-based systems, I have a QUERY script for you.

The SOLE purpose of this script is to help identify those titles where the database ballooned.

What we’re looking for are those files where the Count is well above any reasonable sense of what you actually have.

I have preset a lower limit of 100. Any title with fewer than 100 items will not be examined, since it’s possible to have 100 extras for a television series across all seasons.

I’ve also set the number of items to be returned as the top 20. If this count is too low, it is easily increased.

============================

To use:

  1. Edit line 11 to point to the ‘Databases’ directory
  • Docker will be prefixed by /config
  • Workstation & Desktop Linux will be prefixed by /var/lib/plexmediaserver
  2. sqlite3 must be installed on the host to query the DB

  3. Run the script and it will generate a report telling you

  • Media item ID
  • How many records exist in the DB
  • Title of the item
  • When it was updated

It is hoped that this date will coincide with an identifiable activity.

Please don’t hesitate to ask if you have further questions.

#!/bin/bash

# Verify sqlite3 is installed

if [ "$(command -v sqlite3)" = "" ]; then
  echo ERROR: \'sqlite3\' not found.
  exit 1
fi

# EDIT this
DBDirectory='/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases'

# DO NOT EDIT BELOW THIS POINT
DB="$DBDirectory/com.plexapp.plugins.library.db"


# Verify we can read the DB
if [ ! -r "$DB" ];  then
  echo ERROR: Cannot read database at \"$DB\"
  exit 2
fi

for i in $( echo 'select count(metadata_item_id),metadata_item_id from metadata_relations group by metadata_item_id order by count(metadata_item_id) desc limit 20;' | sqlite3 "$DB" ) x
do

  if [ "$i" != "x" ]; then

    # if not EOL, parse  (Count|Item)
    Count=$(echo "$i" | awk -F'|' '{print $1}')
    Item=$(echo "$i" | awk -F'|' '{print $2}')

    if [ "$Count" -gt 100 ]; then

      # Now get title from DB and print output
      NameString="$(echo "select id, title, guid, refreshed_at from metadata_items Where id = '$Item';" | sqlite3 "$DB")"

      Title="$(echo "$NameString" | awk -F'|' '{print $2}')"
      Updated="$(echo "$NameString" | awk -F'|' '{print $4}')"

      # Make pretty output
      echo Item: "$Item" "|" Count: "$Count" "|" Title: \""$Title"\" "|" Updated: "$Updated"
    fi
  fi
done

As we progress (I’m also sharing this with our DB folks), it’s my hope we can surgically remove these extra records from the DB quickly and easily.


Supplemental:

I’ve been informed that the work needed in PMS is nearing completion.

I don’t know if it’s necessary to add “DELETE” code to this, as the full correction from Engineering is expected very soon.


This is awesome. The support / interaction I am seeing on the forums this past year is much better than years ago when there was 0 communication.

I was about to manually go and delete these items, because they’re the same ones each time I run a full scan, but I’ll set up an SQL jail and try running that script if it will help.

I have an old database from the 27th of October that is 178 MB; will it help any to run it against that too?

My DBs have always been huge, I think because of all my media: about 1,500 movies / 150 TV shows. My metadata drive is at about 150 GB of storage (does Plex make sure to clean up unused metadata in there? I know there’s the ‘clean up’ option, but it just makes me wonder with all those thousands of files).

@paradoxhub

  1. I have been doing my best to be that support here in the forums for the past few years. It took a looooooooooooooooooooooooong time to get through all the backlog in addition to all the development work I’ve been involved with (NAS & Linux Packaging)

  2. “Unknown Metadata type” can be safely ignored. You’re seeing Engineering’s work. They are laying the groundwork for a better way of handling metadata. First you write the ‘sender’, then you write the ‘reader’. When the “red” is gone, it’s all being handled.

Regarding the DB

  1. I DO NOT support manual DB edits. That little script of mine does not WRITE. It is a READ-ONLY operation. As I stated, its sole purpose is to identify those media files which have a LOT of extra records in the database (over 100).
  • It’s up to you to determine whether what it flags is valid (i.e., if it says you have 90 records, do you have 90 extras for that item (series or movie) when you count your local extras plus what Plex provides?)
  2. If you are referring to the identification of some high duplicate counts, please know this:
  • Engineering is just about finished with their work.

  • It’s so close now, I wouldn’t be surprised if 1.25.2 gets updated or replaced with 1.25.3 without ever going to “Public” release.
    – Please don’t quote me on this… I’m sharing my thoughts, NOT promises.

  3. If you want to move the identified files out of the library and then use the existing “Scan, Empty Trash, Refresh Metadata, Clean Bundles” steps – then OK.

Lastly, how big is your com.plexapp.plugins.library.db now ?


Thanks for the reply.

I haven’t had a chance yet to do anything to my DB files. Don’t worry, I’ll be sure to back them up before I mess with them. I think I will clone my Plex jail to duplicate my server, and use the duplicate for debugging this and anything else.

This is probably general info elsewhere, but what exactly are all the metadata files? My movies have maybe 5-10 posters each to choose from, plus chapter thumbnails (although on a lot of things it just stops generating them for some reason once I start packing on my libraries). Even so, it doesn’t make sense for me to be seeing something like 100 files in one metadata directory, all seemingly part of the same thing. What is all this stuff exactly? Surely you just need a text file for the text metadata, the posters, and the thumbnails? I just can’t make heads or tails of the cache dir; there’s so much stuff.

Anyway, I know my stuff doesn’t actually affect the server running, but just knowing something isn’t working how it should makes me have to fix it; I’m sure you know what I mean by that.

Can you elaborate on this work the engineers are doing? Some kind of massive overhaul to the database? Would I be best to just forget about this stuff and re-do everything when that comes out?

“Metadata” includes all the added & displayed content Plex gives you over and above simply displaying your TV episodes and movies.

Examples include:

  • Genre
  • Cast
  • Links to other works by those cast members
  • Collections

Those type of things. All the extra information to enrich your basic media file.

I don’t know the details of what they’re doing beyond that they have been working on correcting this ‘size ballooning’ problem.

Most of us have database file sizes in the 200-300 MB range (even for servers like mine, with 50,000 files of content: movies, episodes, and music).

There were those users who experienced a ballooning effect. The database grew in size from 200 MB → 3 GB for no reason.

The cause here has been identified and repaired. It won’t happen again.

What they’re finishing now is how to “clean out that junk” which crept in without loss of any information.

I would not ‘burn down’ an existing server.

I would give Engineering a chance. Let’s not do anything without first evaluating what they have.

I have some captured examples of databases reconstructed here.

When they have software for me to test – I’ll test it.
Barring any “tweaks” you’ll see it within a day or two after I do.

I don’t know if they’ll replace the existing 1.25.2 or replace it with 1.25.3. That’s their decision.


I have massive problems too (and made this thread about it: Slow loading of movie page).

@plexmei

May I please see the DEBUG log ZIP file which captures this?

Please respond back in your existing thread.

Sorry for the late reply. I did not add any DELETE FROM statements other than what was in the script from the “Slow library performance due to increasingly growing database” thread. Even then, I’m afraid that script did not do anything for me.

What did it for me was identifying the titles that had ballooned. I pulled those out of the library, then started PMS back up and saw that it was Deleting those bad entries, which took several hours. Then I ran the sql.txt script you provided and it shrank the database down to 274MB. Everything seems to be running well at this point.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.