Linux gemini 4.19.0-8-amd64 #1 SMP Debian 4.19.98-1+deb10u1 (2020-04-27) x86_64 GNU/Linux
I recently started using plexapi to mark some shows watched automatically and to tie into deleting trash, as I use some cloud storage for my media. It's all scripted together and working well.
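The mark-watched half of it looks roughly like this (a simplified sketch rather than my exact script; the URL, token, and show list are placeholders):

```python
from plexapi.server import PlexServer

PLEX_URL = 'http://localhost:32400'            # placeholder
PLEX_TOKEN = 'xxxxxxxxxxxxxxxxxxxx'            # placeholder
SHOWS_TO_HIDE = {'Some Show', 'Another Show'}  # hypothetical list

plex = PlexServer(PLEX_URL, PLEX_TOKEN)

# Mark shows other folks added (that I don't watch) as watched
# so they stop showing up in my unwatched views.
for show in plex.library.section('TV Shows').all():
    if show.title in SHOWS_TO_HIDE and not show.isWatched:
        show.markWatched()
```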
I noticed that my navigation was getting slow, dug into it a bit more, and found that my database had seen some massive growth at some point and was nearing 2.5GB in size:
I went through and deleted the old metadata views, which seemed to be about 5 million entries and the source of the space consumption when I sqldiff'ed the original against the new:
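For anyone who wants to check their own DB, a quick way is to count rows per table in a copy of the database (plain Python sqlite3; the path is a placeholder, and in my case metadata_item_views was the table that had ballooned):

```python
import sqlite3

# Point this at a *copy* of the DB, not the live file while PMS is running
DB = '/path/to/copy/of/com.plexapp.plugins.library.db'

con = sqlite3.connect(DB)
tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
# Row counts per table, largest first, to spot what's ballooning
counts = {t: con.execute(f'SELECT COUNT(*) FROM "{t}"').fetchone()[0]
          for t in tables}
for name, n in sorted(counts.items(), key=lambda kv: -kv[1])[:10]:
    print(f'{n:>12,}  {name}')
con.close()
```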
Sounds like someone has been swimming in the deep end of the pool with the lights off.
What I’m seeing above, albeit only a snippet, tells me there have been a lot of metadata changes & deletions.
The butler, by default, runs during scheduled maintenance and vacuums (compacts) the DB as it performs backups.
This implies to me that either the butler isn't running for the DB, or PMS or the computer is shut down when maintenance should run.
The “API” isn’t concerning here (aside from the number not looking like the right version).
I’m concerned what’s deleting so much.
What is generating the system activity? “Refresh all metadata” or ???
We need to understand the source of the activity and shine some light on that first.
I have all my scheduled tasks on and they run nightly as the server is on 24x7.
That's a bad explanation on my part. I used the linked post to delete all the metadata views and reclaim the 2.1GB of DB size. I was comparing the large DB to the reclaimed DB, as ~200MB was more in line with the size I expected to see.
Here is the normal growth in size over 6 days:
felix@gemini:~$ ls -al /data/rsnapshot/daily.*/localhost/var/lib/plexmediaserver/Library/"Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db"
-rw-r--r-- 1 plex felix 2634797056 May 4 09:19 '/data/rsnapshot/daily.0/localhost/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db'
-rw-r--r-- 1 plex felix 2572644352 May 3 02:06 '/data/rsnapshot/daily.1/localhost/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db'
-rw-r--r-- 1 plex felix 2515505152 May 2 01:41 '/data/rsnapshot/daily.2/localhost/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db'
-rw-r--r-- 1 plex felix 2450825216 May 1 01:38 '/data/rsnapshot/daily.3/localhost/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db'
-rw-r--r-- 1 plex felix 2406170624 Apr 30 01:47 '/data/rsnapshot/daily.4/localhost/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db'
-rw-r--r-- 1 plex felix 2340978688 Apr 29 01:33 '/data/rsnapshot/daily.5/localhost/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db'
-rw-r--r-- 1 plex felix 2274791424 May 4 09:19 '/data/rsnapshot/daily.6/localhost/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db'
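Working those sizes out, that's roughly 60MB of growth per day:

```python
# DB sizes in bytes from the backups above, oldest (daily.6) to newest (daily.0)
sizes = [2274791424, 2340978688, 2406170624, 2450825216,
         2515505152, 2572644352, 2634797056]
deltas = [b - a for a, b in zip(sizes, sizes[1:])]
for d in deltas:
    print(f'{d / 1e6:.0f} MB')                              # per-day growth
print(f'avg {sum(deltas) / len(deltas) / 1e6:.0f} MB/day')  # ~60 MB/day
```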
Here is the sqldiff from 6 days ago to the backup from last night as an example:
It's a small setup and that hasn't changed, minus some slightly increased viewing with the whole lockdown, but nothing that large as it's just a few family members and some close friends.
The change I made was automating a few things via that link you have above, which seems to be the culprit.
I move files from local to cloud storage, and Empty Trash is off by default. I was using curl to delete libraries and had moved that function over to PlexAPI.
I have something that marks shows watched that other folks might have added but I don't watch, so they don't show up.
I can move the first back to curl as I know that works without much of an issue. I have to find a different solution for the second.
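For reference, the curl version is just a PUT against the section's emptyTrash endpoint, something like this (section ID 1 and the token are placeholders):

```
curl -X PUT "http://localhost:32400/library/sections/1/emptyTrash?X-Plex-Token=YOUR_TOKEN"
```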
I'm surprised other folks using it haven't noticed this before, as I can't imagine I'm running something that crazy. I did log a post with them as well.
Does this mean you are deleting and recreating whole libraries in Plex on a regular basis?
If you do, I’d say this is a major factor in causing slowness. If you can change that to deleting and adding media items instead, it might improve responsiveness greatly.
Apparently Monday morning and written English are not working for me, as that was just a bad way to explain it.
I'm not deleting libraries; the use case is a mergerfs combo of a local mount and a cloud mount.
At times media gets upgraded, so it's deleted from the cloud mount and added to the local one. This is all transparent to Plex, so I use the API to empty the trash. I don't want trash emptied automatically because, with a disconnect or something goofy, that would delete all the cloud items since they would appear to be missing.
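The trash-emptying piece is roughly the sketch below (URL, token, and mount path are placeholders; the point is to only empty trash when the cloud mount is actually up):

```python
import os

from plexapi.server import PlexServer

PLEX_URL = 'http://localhost:32400'   # placeholder
PLEX_TOKEN = 'xxxxxxxxxxxxxxxxxxxx'   # placeholder
CLOUD_MOUNT = '/mnt/cloud'            # hypothetical mount point

# If the cloud mount is disconnected, every cloud item looks deleted
# and emptying trash would purge them all -- so check the mount first.
if os.path.ismount(CLOUD_MOUNT) and os.listdir(CLOUD_MOUNT):
    plex = PlexServer(PLEX_URL, PLEX_TOKEN)
    for section in plex.library.sections():
        section.emptyTrash()
```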
If I do the math, I'm getting roughly 700k per week, which matches my 6-day growth pretty closely. If I'm following you, the butler processes would eventually clean those up after 'x' days, so my volume of generation was more the issue than a maintenance task not firing off.
I can go back and check the old DBs to see when the last records were written, as I have the last 7 days of backups. I wanted to validate this so I can address it from my side.
If Plex is operating normally (not being manipulated by an external application), the normal growth observed from ingesting & updating metadata records is handled by the butler in its vacuuming (what it’s designed for).
Yep, makes sense. I was basically making the script click on 25 shows every 15 minutes and mark them watched, so that is a bit outside what a normal human would do.
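Putting rough numbers on that (the episode count is a guessed average, and I'm assuming marking a show watched scrobbles every episode in it):

```python
# 25 shows marked every 15 minutes, around the clock
shows_per_run = 25
runs_per_day = 24 * 60 // 15    # 96 runs/day
episodes_per_show = 30          # hypothetical average
rows_per_day = shows_per_run * runs_per_day * episodes_per_show
print(f'{rows_per_day:,}/day, ~{rows_per_day * 7:,}/week')
# 72,000/day, ~504,000/week -- the same order of magnitude as my growth
```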
I set up a few monitors to make sure the size does not grow, as I am fully confident in the root cause now.
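The monitor itself is nothing fancy; a minimal sketch of the idea (the path is the standard Debian location from the listing above, and the threshold is just mine to tune):

```python
import os
import sys

DB = ('/var/lib/plexmediaserver/Library/Application Support/'
      'Plex Media Server/Plug-in Support/Databases/'
      'com.plexapp.plugins.library.db')
LIMIT_MB = 500  # the reclaimed DB was ~200MB, so leave some headroom

# Run from cron; a non-zero exit (with a message) trips the alert
size_mb = os.path.getsize(DB) / 1e6
if size_mb > LIMIT_MB:
    sys.exit(f'Plex DB is {size_mb:.0f} MB (limit {LIMIT_MB} MB)')
print(f'Plex DB OK at {size_mb:.0f} MB')
```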