Slow Query time - unable to scan or play media

Library scans are mostly fine, taking 30-60 seconds, which is about what I’d expect.

The 30 minutes is for the Optimize DB task (sorry, I said DB scan yesterday). I’ve read this should usually take no more than 60 seconds, if that.

Total movies: 4618
Total TV episodes: 29,137

Optimize DB takes 30 minutes?

Optimizing the DB is a VACUUM command.
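
(For reference, a minimal sketch of what that step amounts to, using the Plex-bundled sqlite binary and the default Linux paths that appear later in this thread; running a manual VACUUM against a copy is a harmless way to reproduce the 30 minutes outside of Plex:)

cp "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db" /tmp/library-copy.db
# run VACUUM on the copy so the live database is never touched
echo "VACUUM;" | /usr/lib/plexmediaserver/Plex\ Media\ Server --sqlite /tmp/library-copy.db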

If it’s taking that long, I must ask: are you using SATA-2 or USB HDDs?

Yeah, the DB speed has been that way for at least a few weeks prior to posting my original message here.

Using internal SATA drives, no USB ones. I have multiple Proxmox hosts and have moved the VM around as well as the storage and it continues to take roughly the same amount of time to Optimize the DB.

At this point, after testing the faster internal SSD drive, I’m still not seeing any real improvement in client access or the Optimize DB task, so I feel the issue is in my DB file itself. Sadly, I’m not a DB person at all, so poking through it myself would probably do more harm than good, lol.

Would it help to upload my DB file, or is that a bit too much to handle at 6GB?

I really only care about my user’s watch status, so I’m thinking about maybe just starting anew and pulling in the watch history as another option.

Move Viewstate/Ratings from One Install to Another | Plex Support

@jakehoms

SATA-3 SSD vs SATA-3 HDD?

SATA-3 SSD has a read speed of 535 MB/sec
SATA-3 HDD has a max read speed, for the best drives, of about 250 MB/sec when at the outer cylinders; less as the drive fills up.

I can’t believe the SSD didn’t provide at least a 2x performance improvement.

I use SSDs here all the time. Only my media is on HDD.

Can we take a step sideways? It might be slow, but why is it BIG, and why did it get so much bigger so quickly?

There’s a similar thread over here.

Slow library performance due to increasingly growing database

I’m wondering if you’d run a few of those same queries to identify which parts of the database have gotten so big.

Either sqlite3_analyzer or this to start:

sqlite3 com.plexapp.plugins.library.db "select name, sum(pgsize) as size from dbstat group by name order by size desc limit 20;"
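
(If sqlite3_analyzer is installed, it gives a similar per-table space breakdown in one shot; the exact package that provides it varies by distro, so treat that as an assumption:)

# sqlite3_analyzer ships with SQLite's analysis tools (package name varies by distro)
sqlite3_analyzer com.plexapp.plugins.library.db | less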

That’s a DARN good question.

I’ve never seen one so big except for one case where a user was hitting the API with some really weird calls … which actually got recorded into the history!

As it turned out, he was developing a plug-in and was hitting the API incorrectly.

Thinking out loud…

Have the media-item settings (the watch history) been extracted?

Has that extracted data been examined to see how big it actually is and, if it’s a normal size,

has a new (test) server been created and the data reinserted? (A rough sketch of the extraction step is below.)
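
(Sketch of that extraction step; the assumption here is that watch history lives in the metadata_item_settings table, so dump just that table and check how much data it really holds:)

# dump only the watch-history table to a text file (table name assumed: metadata_item_settings)
sqlite3 com.plexapp.plugins.library.db ".dump metadata_item_settings" > watch-history.sql
# quick size check before trying to replay it into a fresh test server
wc -l watch-history.sql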

Thanks again to you both for your continued help on this! Yes, the DB being this large is what I’d like to troubleshoot if possible.

I viewed the other thread with the user’s DB having a bloat issue, and here are my results of the tests:

[root@plex testing-new]# sqlite3 com.plexapp.plugins.library.db "select name, sum(pgsize) as size from dbstat group by name order by size desc limit 20;"
Error: no such table: dbstat
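
(That error just means this sqlite3 build lacks the DBSTAT virtual table; a quick way to confirm, assuming the stock distro binary:)

sqlite3 com.plexapp.plugins.library.db "PRAGMA compile_options;" | grep -i dbstat
# if ENABLE_DBSTAT_VTAB isn't listed, the dbstat query above can't run with this binary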

But this one is even more disconcerting! It’s a huge output, so I threw it in a pastebin.
Plex-db-issues - Pastebin.com

thetvdb shouldn’t fill that much, right?!

[root@plex testing-new]# sqlite3 -header com.plexapp.plugins.library.db "select count(metadata_item_id),metadata_item_id from metadata_relations group by metadata_item_id order by count(metadata_item_id) desc limit 20;"
count(metadata_item_id)|metadata_item_id
2097155|519628
50|2862
50|6051
50|8727
50|61023
50|421380
50|423490
50|429507
50|476765
50|629522
49|450877
48|422688
48|519146
47|510466
45|452830
45|510054
44|2694
44|420783
44|433202
43|272564

[root@plex testing-new]# sqlite3 -header com.plexapp.plugins.library.db "select count(*) from metadata_items where guid like '%api.internetvideoarchive%' and id not in (select related_metadata_item_id from metadata_relations);"
count(*)
12

I don’t see anything alarming about the TVDB entries.

(The number of TVDB rows seems reasonable. You’re still using the old agents on some libraries; the guids are formatted differently; I didn’t account for the different formatting in my ugly SQL query.)

But you do also have an unexpected number of video extras - this row is abnormal:

count(*), metadata_type, [guid prefix]
2124489|12|iva://api.internetvideoarchive.com/

This looks like it might be the same problem the other guy is having. Tons of extras associated with a single media item.

Let’s figure out what that item is with TWO MILLION extras associated:

sqlite3 -header com.plexapp.plugins.library.db "select id,library_section_id,guid,title from metadata_items where id = 519628"

@OttoKerner - I think you mentioned you shared the other issue with a dev? This looks like the same thing to me.


Thanks for the detail @Volts!

[root@plex testing-new]# sqlite3 -header com.plexapp.plugins.library.db "select id,library_section_id,guid,title from metadata_items where id = 519628"
id|library_section_id|guid|title
519628|3|plex://movie/5d7770dd7a53e9001e7aceaa|Coyote Lake
[root@plex testing-new]#

Looks like it’s associated with Coyote Lake. Funny, that doesn’t show up in my Plex media, but I do see a video file for it.

Should I just delete the video file and do a rescan, or does this require an edit to the DB file?

It’s very interesting that you don’t see it in the Library.

It seems like it makes sense to remove it, then scan, empty trash, clean bundles, and Optimize DB.

But it might take a long time to remove 2 million extras. Make sure you have Debug logging enabled before you do it, and maybe do it before you go to bed - in case it takes a while?

I’m living on the edge and doing it now :)

Deleted the file and it’s still scanning the movies for “subs”

Scanning movies:
subs

Once done I’ll do the other steps and report back.

Whoa, cancel that! Watching Plex Media Server.log while it was scanning the movies, I saw it removing entries related to “Coyote Lake” about one every 3 seconds, so this would take weeks, if not months, to remove all 2 million.

Is there instead a way to edit the DB and just delete anything that matches “Coyote Lake”?

Hey @Volts or anyone - any thoughts on my question above about whether we can just edit the DB directly and search for and delete anything that matches “Coyote Lake”?

If I just let the plexmediaserver process handle it, it will take months (literally) to delete all 2 million entries. :( Thanks for any thoughts!
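
(For anyone wondering what editing the DB directly could look like for one runaway item, a heavily hedged sketch follows; it only removes the relation rows for that single metadata id, leaves the orphaned extras behind, and should only ever be run against a backup copy with Plex stopped. The fuller cleanup script later in this thread is the better approach.)

-- hypothetical targeted cleanup for the single item identified above (id 519628)
BEGIN TRANSACTION;
DELETE FROM metadata_relations WHERE metadata_item_id = 519628;
COMMIT;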

Read this thread and post. I can review more later, but it may give you what you need.

He gave the SQL commands he used. They may be appropriate for you as well.

Thanks @Volts for the info! It appears to have cleared out the 2 million+ extras and I can now scan the libraries again w/o it hanging up the system.

The DB is still large at 6.4GB (down from 6.6GB) and optimizing it still takes roughly 30 minutes (and hangs the system during this time).

I’ll have to let it run for a while to see if any DB errors report back, though.
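
(A quicker way to check for corruption than waiting for errors to surface, using the same sqlite3 binary as above:)

sqlite3 com.plexapp.plugins.library.db "PRAGMA integrity_check;"
# prints "ok" when the database structure is sound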

Overall, it’s a good step in the right direction!


To update anyone else having a slow DB issue and/or a large DB file size: the fix was a combination of two items. Most of the info below is already covered above, but this hopefully summarizes it:

  1. Identifying an item in the DB that had over 2 million associated files

  2. Editing the DB to remove all those entries

Step 1 was given by @Volts using this command:

sqlite3 -header com.plexapp.plugins.library.db "select count(metadata_item_id),metadata_item_id from metadata_relations group by metadata_item_id order by count(metadata_item_id) desc limit 20;"
count(metadata_item_id)|metadata_item_id

My output was as follows:

sqlite3 -header com.plexapp.plugins.library.db "select count(metadata_item_id),metadata_item_id from metadata_relations group by metadata_item_id order by count(metadata_item_id) desc limit 20;"
count(metadata_item_id)|metadata_item_id
2097155|519628
50|2862
50|6051
50|8727
.....

ID 519628 had over 2 million items associated with it.

Step 2 was found in this link:
https://forums.plex.tv/t/slow-library-performance-due-to-increasingly-growing-database/740511/28

Based on that link, the super-helpful @Volts gave me the following details, which I put in a file called sql.txt.

BEGIN TRANSACTION;

-- remove internetvideoarchive extras that nothing in metadata_relations references
DELETE FROM metadata_items
WHERE guid LIKE '%api.internetvideoarchive%'
    AND id NOT IN (
        SELECT related_metadata_item_id 
        FROM metadata_relations
    )
;

-- remove streams attached to media items whose metadata item was deleted above
DELETE FROM media_streams
WHERE media_item_id IN (
    SELECT media_streams.media_item_id 
    FROM media_streams
    INNER JOIN media_items ON media_streams.media_item_id=media_items.id
    WHERE media_items.metadata_item_id NOT IN (
        SELECT id 
        FROM metadata_items
    )
)
;

-- remove parts attached to media items whose metadata item was deleted above
DELETE FROM media_parts 
WHERE media_item_id in (
    SELECT media_parts.media_item_id 
    FROM media_parts INNER JOIN media_items ON media_parts.media_item_id=media_items.id
    WHERE media_items.metadata_item_id NOT IN (
        SELECT id
        FROM metadata_items
    )
)
;

-- remove the now-orphaned media items themselves
DELETE FROM media_items
WHERE metadata_item_id NOT IN (
    SELECT id
    FROM metadata_items
)
;

COMMIT;

-- reclaim the freed space
VACUUM;

I then stopped Plex, backed up my DB file, and ran this against it:

/usr/lib/plexmediaserver/Plex\ Media\ Server --sqlite /var/lib/plexmediaserver/Library/Application\ Support/Plex\ Media\ Server/Plug-in\ Support/Databases/com.plexapp.plugins.library.db < sql.txt

About 30 minutes later it was all done, and I then had a 590 MB DB vs. 6.6 GB! I haven’t had an issue since, and it’s been much zippier scrolling around.
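
(For anyone following along, a reasonable sanity check before running the script is to count what the first DELETE would remove, reusing the query from earlier in the thread; the later DELETEs only cascade from it.)

sqlite3 com.plexapp.plugins.library.db "select count(*) from metadata_items where guid like '%api.internetvideoarchive%' and id not in (select related_metadata_item_id from metadata_relations);"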

Thanks again to @Volts and @ChuckPa for your time and assistance in this matter, and I hope it helps others who have a similar experience.


Sweet, I’m glad that worked for you! Thanks for sharing back to the thread.

Credit to @shayanzadeh too, I stole the majority of their pretty SQL. The change I sent you (and you included above) is a bigger shotgun, removing any non-referenced extras.

@OttoKerner you mentioned in the other thread that you shared it with the devs; pinging you here in case this is worth attaching to the same issue.


Done. Thank you!

Hi

I am starting an investigation into these issues, and I will need to establish whether this was the impact of an old bug or whether it is still happening and we are adding loads of extras.

I would like to get a zip of the database before any manual purging is done and, if it is still happening, debug server logs for maybe 24 hours or more, to see if they cover the time when these extras were added.

You may need to run with a large number of log files for a short while; see the advanced setting LogNumFiles in this support article: Advanced, Hidden Server Settings | Plex Support. Cap it at 50 log files, as there is a performance issue when switching log files as each one fills up.
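
(On Linux, hidden settings like LogNumFiles normally live as attributes in Preferences.xml; a hedged sketch, assuming the standard Linux install path and that the attribute isn't already set:)

sudo systemctl stop plexmediaserver
# add LogNumFiles="50" to the <Preferences .../> element
sudo sed -i 's/<Preferences /<Preferences LogNumFiles="50" /' \
  "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Preferences.xml"
sudo systemctl start plexmediaserver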

If it is still happening and you can get me the diagnostics, please send them by private message, providing the logs zip and a link to where the zipped DB was uploaded (Dropbox / Google Drive, etc.).

Thank you
