Slow library performance due to an ever-growing database

Hi

I will follow up with all the diagnostics you provided tomorrow, but would like confirmation on whether I need to change any of the instructions that I have.

Step 1

Identify whether the issue relates to extras and identify the metadata IDs affected

select count(metadata_item_id),metadata_item_id from metadata_relations group by metadata_item_id order by count(metadata_item_id) desc limit 20

Step 2 (extra info for me)

Identify the movie and guid (for each of those returned above with lots of extras)

select id, title, guid, refreshed_at from metadata_items where id = 'xxxxx'

Step 3 (extra info for me)

Identify the time of the last set of extras additions

select metadata_item_id, related_metadata_item_id, created_at from metadata_relations where metadata_item_id = 'xxxxx' order by created_at desc limit 20

Step 4

Delete the linked extras (example here is for more than one movie with lots of extras)

DELETE FROM metadata_relations WHERE metadata_item_id IN (xxxxx, xxxxx)
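
(As an optional sanity check before running the delete, it should be possible to count how many relation rows would be removed first, using the same placeholder IDs as above, for example:

SELECT count(*) FROM metadata_relations WHERE metadata_item_id IN (xxxxx, xxxxx)
)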

Step 5

Run the SQL script to purge orphans

BEGIN TRANSACTION;

-- Remove orphaned extras metadata (internetvideoarchive guids with no remaining relations)
DELETE FROM metadata_items
WHERE guid LIKE '%api.internetvideoarchive%'
    AND id NOT IN (
        SELECT related_metadata_item_id 
        FROM metadata_relations
    )
;

-- Remove streams belonging to media items whose metadata item no longer exists
DELETE FROM media_streams
WHERE media_item_id IN (
    SELECT media_streams.media_item_id 
    FROM media_streams
    INNER JOIN media_items ON media_streams.media_item_id=media_items.id
    WHERE media_items.metadata_item_id NOT IN (
        SELECT id 
        FROM metadata_items
    )
)
;

-- Remove parts belonging to media items whose metadata item no longer exists
DELETE FROM media_parts
WHERE media_item_id in (
    SELECT media_parts.media_item_id 
    FROM media_parts INNER JOIN media_items ON media_parts.media_item_id=media_items.id
    WHERE media_items.metadata_item_id NOT IN (
        SELECT id
        FROM metadata_items
    )
)
;

-- Remove media items whose metadata item no longer exists
DELETE FROM media_items
WHERE metadata_item_id NOT IN (
    SELECT id
    FROM metadata_items
)
;

COMMIT;

-- Reclaim the freed space and shrink the database file
VACUUM;
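
As an optional check after the script completes, the orphan count targeted by the first DELETE should come back as zero, for example:

SELECT count(*) FROM metadata_items
WHERE guid LIKE '%api.internetvideoarchive%'
    AND id NOT IN (SELECT related_metadata_item_id FROM metadata_relations);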

Does the reversal of steps you mentioned affect what I have here?


It does not; what you have is basically what I used the other day (minus the use of a transaction). Specifically, your DELETE FROM metadata_items ... query in step 5 is different from the one @shayanzadeh provided and is what I ended up running to correctly clean up that table (and what allowed the queries against media_streams, media_parts, and media_items to actually work).

This is simply insane for most domestic users to even contemplate doing. Sorry.

A fix is being produced and will also cover removal of the database items, so you only need to do this if the situation is desperate and you cannot wait for the fix.


Thanks to everyone on this forum thread for your help in finding the problem. I am pleased to say that, after getting the logs from @spartan018, the problem has been identified and a fix is being produced. It will remove the duplicate items in a db migration and leave the longer task of removing orphans to a scheduled task.


Bless you!

Awesome, glad I could help get this resolved. The engineer in me would love to know what the root cause was, even at just a high level.


It looks like there were some bad attributes on the extras, which led to the trailer being missed initially and then to extras being added repeatedly.

We are adding checks to ensure we do not pick up extras without identifiers indicating whether they are trailers, scenes, etc.

I am hoping for the fix to make it into version 1.25.3. In addition to fixing the bug, the solution will also remove the extras where there are duplicates, as part of a database update on launch. Then, as part of the scheduled task that optimizes the database every few days, an extra action will be slotted in to purge orphans, limited to (I think) 100,000 at a time. So the orphans will get cleared over a number of days, and the database size will reduce as more orphans get removed.
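
As a rough illustration only (not the actual implementation in the scheduled task), a purge limited to 100,000 orphans per pass could look something like this in SQL:

DELETE FROM metadata_items
WHERE id IN (
    SELECT id FROM metadata_items
    WHERE guid LIKE '%api.internetvideoarchive%'
        AND id NOT IN (SELECT related_metadata_item_id FROM metadata_relations)
    LIMIT 100000
);

Repeating a bounded delete like that every few days clears the orphans gradually, without one huge, long-running transaction.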

Extras will be re-added when metadata gets refreshed


Agreed, thanks for sharing!

I have a Plex Media Server development alpha build available now with a fix for the problem of massive numbers of extras (trailers / clips / scenes), which results in a massive increase in database size, crashes due to memory allocation failures, and hangs.

If anyone wants to try out this build, please let me know and indicate which platform / build is needed, and I will make it available.


Anyone getting a massive count returned for the first listed item from this SQLite db query on the PMS database is welcome to try out the alpha build:

select count(metadata_item_id),metadata_item_id from metadata_relations group by metadata_item_id order by count(metadata_item_id) desc limit 20

This lists the count of extras in descending order; anyone with hundreds of thousands in the first column of the query result should try out the fix.


Synology DSM7. I don’t feel comfortable doing the SQL…

How would I get this on my Unraid Docker? :slight_smile:

Shut down Plex Media Server
Locate where the Plex Media Server database is
See Where is the Plex Media Server data directory located? | Plex Support

The database file is in the sub-directory “Plug-in Support\Databases” with the filename

com.plexapp.plugins.library.db

and if the server was not shut down cleanly there will also be

com.plexapp.plugins.library.db-wal
com.plexapp.plugins.library.db-shm

You can run sqlite3 on a desktop, or use an easy GUI Windows tool such as SQLiteStudio, to open the database and run the SQL query.
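
For example, after making a copy of the database file, something like this could be run from a sqlite3 or SQLiteStudio session (paths and exact steps will vary by platform):

-- confirm the file itself is healthy before querying it
PRAGMA integrity_check;

-- the diagnostic query from earlier in the thread
select count(metadata_item_id), metadata_item_id from metadata_relations group by metadata_item_id order by count(metadata_item_id) desc limit 20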

hmm… What if I just disable extras completely and then do a full refresh?
I have 41,500 items in my movies library, and it keeps hanging.

Yes, you can do that: disable extras on the library and then refresh metadata.
It may take a very long time to process that update, possibly more than one day to purge all the extras.

We do not even know yet whether you have this issue.

You can download the database through Plex Web and send it to me, giving me a link via PM to where you uploaded the zip, and I can check next week whether it is this issue.

Well… It can’t get worse at least :wink: hehe.
I will try to purge the extras and see if it helps, and monitor the stability of the server.

I have a new alpha build with the fix, which also avoids the delays seen on NAS devices for users with massive numbers of extras.

I can send it to users that show a high number of extras when running this query on the database:

select count(metadata_item_id),metadata_item_id from metadata_relations group by metadata_item_id order by count(metadata_item_id) desc limit 20

The alpha build will be 1.25.4.5396 or later.

(I would not be able to provide an alpha update process for Docker, so this is for other platforms only.)

Just disabling the extras and refreshing the library worked :slight_smile:
It seems stable. I even disabled the automatic Docker restart.

Hi sa2000. Would it be ok for me to send you my database? I’m having an issue on TrueNAS where I wake up each day to Plex being killed due to the machine running out of swap space, when memory contention shouldn’t be a problem. I think it might be this.