General slowness of my server on Plex Web

When I go to my server home page, the images eventually load, but they take a while: the page sticks on grey posters for a good 10 seconds before they appear. An example of this is below.

Also, when going into each library, the “Recommended” page takes significantly longer to load than “Library”, “Collections” or “Categories”, so I think it’s linked to the above.

I am using UNRAID, and my Plex Metadata folder is on its own SSD with nothing else occupying that drive, my Metadata folder stands at about 130GB in size. I have a fairly large Library but not horrendously large.

Is there anything I can do to speed this up a bit?

I am also seeing this in console

“Auto” will optimize the DB in a way PMS never can because this level takes PMS offline while performing the optimizations.

Thanks for the reply. I did use your tool, but I am still having the slow queries.

What could the cause of these queries be?

I also see a lot of these in the console logs when looking at Library > Recommended (as it’s loading):

[Req#319b0b] Unknown metadata type: folder

If you’re still getting slow queries then:

  1. The I/O is slow
  2. The size of the DB (number of items indexed), relative to the CPU speed, is too much for that CPU
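For the first possibility, a rough sanity check of sequential read speed is easy to script. This is only a sketch (the helper name and the scratch-file demo are mine, and OS caching will inflate repeated runs); for a real test, point `path` at a large file on the SSD that holds the Plex databases.

```python
import os
import tempfile
import time

def read_throughput_mb_s(path, chunk=1 << 20):
    """Sequentially read `path` in 1 MiB chunks and return MiB/s."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk):
            pass
    elapsed = time.perf_counter() - start
    return (size / (1 << 20)) / max(elapsed, 1e-9)

# Demo against an 8 MiB scratch file; replace with a file on the DB drive.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(8 << 20))
print(f"{read_throughput_mb_s(tmp.name):.0f} MiB/s")
os.remove(tmp.name)
```

A healthy SATA SSD should show hundreds of MiB/s for cold reads; anything in the tens suggests the I/O path is the bottleneck.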

Care to share some details?

How many indexed items?
CPU?

Okay.

I am running UNRAID on a system that has 2x Intel® Xeon® CPU E5-2690 v2 @ 3.00GHz with 96GB of DDR3 RAM.

Is there an easy way I can check how many indexed items I have?
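For what it’s worth, the count can be pulled straight from the library DB with SQLite; `metadata_items` is the table PMS uses for indexed items. A minimal sketch (run it against a copy of `com.plexapp.plugins.library.db`, never the live file; the in-memory stand-in below is only there so the snippet is self-contained):

```python
import sqlite3

# Stand-in DB so this runs anywhere; with a real copy you would use
# sqlite3.connect("com.plexapp.plugins.library.db") instead.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE metadata_items (id INTEGER PRIMARY KEY, title TEXT)")
db.executemany("INSERT INTO metadata_items (title) VALUES (?)",
               [("a",), ("b",), ("c",)])

# The actual query you would run against the copied Plex DB:
count = db.execute("SELECT count(*) FROM metadata_items").fetchone()[0]
print(count)  # → 3 for this stand-in
```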

I do have a lot of other containers running, such as the *Arrs, Overseerr, Tautulli, Immich, Adguard

Is it worth pinning the Plex container to specific CPUs so those cores aren’t used by anything other than Plex, or is this pointless? If it is beneficial, how many cores would you recommend for Plex, and would you split them across the two CPUs?

How many media items do you have ?

The E5-2690 v2 is only slightly slower than my E5-2690 v4 per thread.

Trying to micromanage the CPU pinning will buy you nothing.
The containers you mention are of no consequence.

I run them as well (Sonarr, Radarr, w/ NZBget).
They have no impact on overall CPU load.

I have almost 15000 items indexed with no hint of slow query.

My library consists of

Music - 76,510 tracks over 5,000 albums
TV Shows - 2,300
TV Shows (Reality) - 3,100
TV Shows (Kids) - 720
Movies (Kids) - 900
Movies (Misc/Home Videos) - 3,000
Movies - 5,000

I’m wondering, could the Music library be bogging down the server, and might putting the Music on its own Plex instance help?

Here is my Databases directory. There are some large files here, particularly BLOBS.db-WAL and BLOBS.db. Is it normal for them to be this size?

When’s the last time you optimized that thing?

The WAL and SHM files are WAY bigger than the actual DBs they go with.

Because the WAL & SHM exist, all those records are still pending.
SQLite must merge them with the base DB every time you do anything (like a query).
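That merge behaviour is easy to see with plain SQLite. This is a self-contained sketch, not Plex’s own code: committed rows land in the `-wal` file first, and a TRUNCATE checkpoint folds them into the main DB and resets the WAL to zero bytes (which is essentially what an offline optimize achieves).

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
db = sqlite3.connect(path)
db.execute("PRAGMA journal_mode=WAL")
db.execute("CREATE TABLE t (x INTEGER)")
db.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10_000)])
db.commit()

wal = path + "-wal"
before = os.path.getsize(wal)  # committed pages sit here, not in demo.db yet

# Merge the WAL into the main DB and truncate it back to zero bytes.
db.execute("PRAGMA wal_checkpoint(TRUNCATE)")
after = os.path.getsize(wal)
print(before, after)  # before is large, after is 0
db.close()
```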

I have optimized both through Plex (also cleaned bundles), and also with your tool just yesterday. I have also run Image Maid to remove any unused metadata posters; that deleted over 15,000 images, saving around 4GB of space.

When you say still pending, is it normal for the WAL to hold that much pending data? I would have thought it should stay quite small if it’s passing changes to the DB as they happen, so I’m confused as to why they’re still pending.

What’s the best way forward for me right now? From your replies I get the feeling that those WAL & SHM sizes are probably not normal.

When you run my tool, AUTO optimization will DELETE the WAL & SHM files

My tool will also remove pics and tmp files. (different command)

I do not understand how you can run my tool and have a WAL file which is larger than the DB that same day

Alright, well I’m gonna go ahead and run it now, and I’ll post the results :slight_smile:

The WAL (Write-Ahead Log) is a cache file of the pending database commits.
The SHM (Shared Memory) is a buffer and should only be present when PMS is running. If it’s residual, then there was a crash/abrupt shutdown and the data didn’t get included.

When they do exist, especially at these sizes, SQLite must look at all those records too, even when they contradict/supersede what’s in the DB.

About 5 minutes have passed since I ran your tool, and this is now the state of the Databases folder.

That looks so much better.

Notice the WAL and SHM are now back to normal levels.

What I am seeing still is that your com.plexapp.plugins.library.db-wal is still growing at a huge rate.

What are you running that’s modifying the database so much?

As far as I know, there is nothing running that should be making this so big. What could be causing it?

These are the only Webhooks I have linked to the server.

It’s been a couple of hours since the tool ran, and this is the current state.

Let it run as-is for a day / use it normally.

Look at the WAL and SHM to see if they hold at approximately their current sizes.

If they do then you’re ok.
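A tiny watcher script makes the “hold at current sizes” check painless. A sketch only: the path below is the typical Docker-mapped location and is an assumption, so adjust it to your own appdata layout.

```python
import os

# Assumed path: typical Docker mapping of "Plug-in Support/Databases";
# adjust for your UNRAID appdata layout.
DB_DIR = ("/config/Library/Application Support/"
          "Plex Media Server/Plug-in Support/Databases")

def report(db_name="com.plexapp.plugins.library.db", db_dir=DB_DIR):
    """Print the size of the DB plus its -wal and -shm companions."""
    for suffix in ("", "-wal", "-shm"):
        p = os.path.join(db_dir, db_name + suffix)
        size = os.path.getsize(p) if os.path.exists(p) else 0
        print(f"{db_name + suffix}: {size / (1 << 20):.1f} MiB")

report()  # run this a few times over the day and compare
```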

Current sizes

I am still getting Recommended tabs loading really slowly, sometimes taking minutes. Most times a forced refresh of the page is needed. This happens on every single device.

What is causing this behaviour?

Your DBs look good.

Best I can recommend at this point is the cleanups that DBRepair offers
( Purge & Prune )

After that, you have to consider what you’re displaying on, the network link, and the ‘circuit’ from server → player (all the pieces in between).

The hard part here is that “General Slowness” is not a failure; it’s subjective.
How do you quantify “slow”?

I would consider 5 minutes to show the “Recommended” page on a library very slow. I’ve never known it to take such a long time; sometimes it just spins forever until it’s refreshed, and then it loads instantly.

I’d open a new tab: same behaviour.

I’ll look into the Purge and Prune features.