I will follow up with all the diagnostics you provided tomorrow, but I would like confirmation as to whether I need to change any of the instructions that I have:
Step 1
Identify whether the issue relates to extras and identify the affected metadata IDs
select count(metadata_item_id), metadata_item_id from metadata_relations group by metadata_item_id order by count(metadata_item_id) desc limit 20
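As a shortcut (a sketch only, combining steps 1 and 2 and assuming the metadata_items / metadata_relations schema used throughout this thread), the title can be joined in alongside each count:
select count(mr.metadata_item_id) as extras_count, mr.metadata_item_id, mi.title
from metadata_relations mr
left join metadata_items mi on mi.id = mr.metadata_item_id
group by mr.metadata_item_id
order by extras_count desc
limit 20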
Step 2 (extra info for me)
Identify the movie and guid (for each of those returned above with lots of extras)
select id, title, guid, refreshed_at from metadata_items where id = 'xxxxx'
Step 3 (extra info for me)
Identify when the last batch of extras was added
select metadata_item_id, related_metadata_item_id, created_at from metadata_relations where metadata_item_id = 'xxxxx' order by created_at desc limit 20
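To spot when the bursts happened at a glance, the same rows can be grouped by day; a sketch, assuming created_at is stored in a form that SQLite's date() function can parse:
select date(created_at) as day, count(*) as extras_added
from metadata_relations
where metadata_item_id = 'xxxxx'
group by day
order by day desc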
Step 4
Delete the linked extras (example here is for more than one movie with lots of extras)
DELETE FROM metadata_relations WHERE metadata_item_id IN (xxxxx, xxxxx)
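Before running that destructive statement, it is worth a dry-run count of exactly the rows it would remove (same placeholders as above):
SELECT metadata_item_id, count(*)
FROM metadata_relations
WHERE metadata_item_id IN (xxxxx, xxxxx)
GROUP BY metadata_item_id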
Step 5
Run the SQL script to purge orphans
BEGIN TRANSACTION;
DELETE FROM metadata_items
WHERE guid LIKE '%api.internetvideoarchive%'
AND id NOT IN (
SELECT related_metadata_item_id
FROM metadata_relations
)
;
DELETE FROM media_streams
WHERE media_item_id IN (
SELECT media_streams.media_item_id
FROM media_streams
INNER JOIN media_items ON media_streams.media_item_id=media_items.id
WHERE media_items.metadata_item_id NOT IN (
SELECT id
FROM metadata_items
)
)
;
DELETE FROM media_parts
WHERE media_item_id IN (
SELECT media_parts.media_item_id
FROM media_parts INNER JOIN media_items ON media_parts.media_item_id=media_items.id
WHERE media_items.metadata_item_id NOT IN (
SELECT id
FROM metadata_items
)
)
;
DELETE FROM media_items
WHERE metadata_item_id NOT IN (
SELECT id
FROM metadata_items
)
;
COMMIT;
VACUUM;
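Optionally, as a sanity check, the orphan conditions the DELETEs rely on can be re-run as counts once the script finishes; assuming the schema above, both should come back 0:
SELECT count(*) FROM metadata_items
WHERE guid LIKE '%api.internetvideoarchive%'
AND id NOT IN (SELECT related_metadata_item_id FROM metadata_relations);
SELECT count(*) FROM media_items
WHERE metadata_item_id NOT IN (SELECT id FROM metadata_items);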
Does the reversal of steps you mentioned affect what I have here?
It does not; what you have is basically what I used the other day (minus the use of a transaction). Specifically, your DELETE FROM metadata_items ... query in step 5 is different from the one @shayanzadeh provided, and it is what I ended up running to correctly clean up that table (and what allowed the queries against media_streams, media_parts, and media_items to actually work)
Thanks to everyone on this forum thread for your help in finding the problem. I am pleased to say that, after getting the logs from @spartan018, the problem has been identified and a fix is being produced. It will remove the duplicate items in a DB migration and then leave the longer task of removing orphans to a scheduled task
It looks like some of the extras had bad attributes, which led to the trailer being missed initially and then set off the adding of extras.
We are adding checks to ensure we do not pick up extras without identifiers indicating whether they are trailers, scenes, etc.
I am hoping for the fix to make it into version 1.25.3. In addition to fixing the bug, the solution will also remove the extras where there are duplicates, as part of a database update on launch. Then, during the scheduled task that optimizes the database every few days, an extra action will be slotted in to purge orphans, capped at (I think) 100,000 at a time (sketched below), so the orphans will get cleared over a number of days and the database size will come down as more orphans are removed
Extras will be re-added when metadata gets refreshed
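To illustrate the batching idea only (this is a sketch of the approach described above, not the actual PMS migration code), a capped purge per scheduled run could be written as a DELETE driven by a LIMITed subquery:
DELETE FROM metadata_items
WHERE id IN (
SELECT id FROM metadata_items
WHERE guid LIKE '%api.internetvideoarchive%'
AND id NOT IN (SELECT related_metadata_item_id FROM metadata_relations)
LIMIT 100000
);
Each run removes at most 100,000 orphans, so a multi-million-row backlog clears over several days without one enormous transaction.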
I have a Plex Media Server development alpha build available now with a fix for the massive-number-of-extras (trailers / clips / scenes) problem, which results in a massive increase in database size, plus crashes and hangs due to memory allocation failures
If anyone wants to try out this build, please let me know and indicate which platform / build you need, and I will make it available
Anyone getting a massive count for the first listed item from this SQLite query against the PMS database is welcome to try out the alpha build:
select count(metadata_item_id), metadata_item_id from metadata_relations group by metadata_item_id order by count(metadata_item_id) desc limit 20
This lists the count of extras in descending order; anyone seeing hundreds of thousands in the first column of the query result should try out the fix
Yes, you can do that: disable extras on the library and then refresh the metadata
It may take a very long time to process that update; purging all the extras may take more than a day
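One way to watch progress while that runs (assuming the purge works through metadata_relations): periodically re-run the step 1 count, or, for a single item:
select count(*) from metadata_relations where metadata_item_id = 'xxxxx'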
We do not even know if you have this issue
You can download the database through Plex Web and send it to me: give me a link via PM to wherever you uploaded the zip, and I can check next week whether it is this issue
Hi sa2000. Would it be ok for me to send you my database? I’m having an issue on TrueNAS where I wake up each day to Plex being killed due to the machine running out of swap space, when memory contention shouldn’t be a problem. I think it might be this.