Adding metadata via API takes way too long with recent Plex server versions

I am looking at a very likely related problem right now.

I’ve just obtained the databases and am setting up a controlled test machine for debugging.

We did discuss a significant latency issue today in the staff meeting, which is what I’m looking at now (to give to Engineering).

7 Likes

ALL:

To update you on my progress,

I’m creating a new test system instance. Loading ran overnight and is nearing completion. (a few more hours :crossed_fingers:)

  1. Loading a 9000+ movie library is complete – Plex/Web page load time 3-4 seconds.
  2. TV library section – 271,233 episodes and counting – Plex/Web load time 3-4 seconds.
  3. Scrolling top -> bottom and filling in the posters – another 3-4 seconds once I let go of the mouse.

Can anyone provide me with a “HOW TO REPLICATE” of the problem?

I will also need environmental info:

  1. Number of items indexed in Plex (resource sizing)
  2. Host OS and CPU being used for the server.

I will champion this but need to confirm it myself :slight_smile:

Hey @ChuckPa, glad to see you’re taking the lead on this.

I think the majority of people in this thread are coming here because they use the plex-meta-manager utility (GitHub - meisnate12/Plex-Meta-Manager: Python script to update metadata information for items in plex as well as automatically build collections and playlists. The Wiki Documentation is linked below.). I realize this is a third-party tool, but it was working swimmingly up until these most recent PMS versions, when its run times took a massive turn for the worse.

In case you’re not familiar with it, plex-meta-manager creates custom collections and poster overlays, and has a ton of different configuration options. If you get it up and running, I can provide you with my config file so you can see a bit of what I do with it, and maybe others can provide theirs (redacted) as well.

1 Like

The easiest way to test this is on Unraid with one of the Plex versions installed.

I use binhex_plex-pass.

Then, in the app store, install Plex Meta Manager. Set it up and run it.

If Plex staff do not have the resources to fix these kinds of issues, you could consider going open source.

Taking a quick look at the GitHub,

BEFORE I dig into this,

  1. It’s Python
  2. Python support in PMS was deprecated 2+ years ago and is now being discontinued. (all those old hooks are going away)

QUESTION: Does PMM use the server’s HTTP API endpoints directly, or does it try to go through the Python layer?

OBSERVATION: I notice it allows setting the DB Cache Size.
DANGER, WILL ROBINSON!!! I’ve seen this screw up so much when misused.

My biggest fear here is that micromanaging PMS causes more harm than good.

My understanding is that the DB Cache Size option simply reports the cache size that is set in Plex, so that it shows up in the logs and makes it easier to troubleshoot issues.

Python is going away? I had no idea. Isn’t Tautulli also written in Python? Does that mean all these tools will stop working? I hope the developers are aware of this, since I haven’t heard any discussion about it. Do you know when the Python hooks will stop working?

My understanding (and again I’m a user, not a developer) is that these tools all use the Python Plex API, which is here: Table of Contents — Python PlexAPI documentation.

Will this continue to work?

1 Like

My understanding is that PMM uses python-plexapi, another third-party library that acts as a translation layer between Python and Plex’s HTTP API endpoints. As far as I know, that isn’t being deprecated.

The issue here is that for some reason the API endpoints used by python-plexapi (and in turn PMM) for updating Plex metadata have taken a performance nosedive in recent versions.
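For illustration, here’s a minimal sketch (not taken from PMM itself) of the kind of HTTP request python-plexapi ends up issuing for a label search; the server URL, token, and section ID are hypothetical placeholders:

import requests

PLEX_URL = 'https://plex.example.com'   # hypothetical server address
PLEX_TOKEN = 'XXXXXXXX'                 # hypothetical token
SECTION_ID = 18                         # hypothetical library section key

# python-plexapi translates section.search(label=...) into a GET against the
# section's /all endpoint, roughly like this:
resp = requests.get(
    f'{PLEX_URL}/library/sections/{SECTION_ID}/all',
    params={'includeGuids': 1, 'label': 'DOES_NOT_EXIST', 'X-Plex-Token': PLEX_TOKEN},
)
print(resp.status_code)  # 200 with an empty MediaContainer on 1.29.x; reportedly 500 on 1.32.7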

1 Like

Yes, all of PMM’s communication with Plex happens via the PlexAPI Python library linked by Departed69.

I’m curious what about the tool is considered “micromanaging”? It’s basically creating collections/playlists and setting artwork – both things the UI supports – querying Plex for bits of information along the way and caching that to avoid talking to Plex as much as possible. Is it the scale?
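To make that concrete, here’s a hedged sketch of those two operations through python-plexapi; the server URL, token, collection name, and poster path are hypothetical, and it assumes a recent plexapi release with createCollection and uploadPoster:

from plexapi.server import PlexServer

plex = PlexServer('https://plex.example.com', 'XXXXXXXX')  # hypothetical server/token
movies = plex.library.section('Movies')

# Build a collection from a search result, much as the UI would.
scary = movies.search(genre='Horror')
collection = plex.createCollection('Fright Night Features', section=movies, items=scary)

# Set custom artwork on the new collection.
collection.uploadPoster(filepath='/posters/fright_night.png')  # hypothetical path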

Here’s a minimal case for one issue:

from plexapi.server import PlexServer

PLEX_URL = 'https://bing.bang.boing/'
PLEX_TOKEN = 'BINGBANGBOING'
PLEX_LIBRARY = 'Movies'

# Connect to the server and grab the movie library section
plex = PlexServer(PLEX_URL, PLEX_TOKEN)
the_lib = plex.library.section(PLEX_LIBRARY)

# Search for a label that matches nothing; this should return an empty list
try:
    results = the_lib.search(label="DOES_NOT_EXIST")
    print(results)
except Exception as ex:
    print(ex)

If you run this against a server running Version 1.29.2.6364, it returns an empty array.
Run against Version 1.32.7.7616, it returns a 500 Internal Server Error.

The movie library here contains about 5500 items.

Same thing happens via curl:

curl https://1.29.2.6364-plex-server/library/sections/18/all\?includeGuids\=1\&label\=DOES_NOT_EXIST\&X-Plex-Token\=BINGBANGBOING
<?xml version="1.0" encoding="UTF-8"?>
<MediaContainer size="0" allowSync="1" art="/:/resources/movie-fanart.jpg" identifier="com.plexapp.plugins.library" librarySectionID="18" librarySectionTitle="Movies" librarySectionUUID="abf2ed4b-55ff-4969-bc64-0105ed12f369" mediaTagPrefix="/system/bundle/media/flags/" mediaTagVersion="1667296136" thumb="/:/resources/movie.png" title1="Movies" title2="All Movies" viewGroup="movie" viewMode="65592">
</MediaContainer>

vs:

curl https://1.32.7.7616-plex-server/library/sections/18/all\?includeGuids\=1\&label\=DOES_NOT_EXIST\&X-Plex-Token\=BINGBANGBOING
<html><head><title>Internal Server Error</title></head><body><h1>500 Internal Server Error</h1></body></html>
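In the meantime, a hedged workaround sketch on the Python side is to treat the 500 as an empty result set; this assumes plexapi surfaces a generic 500 as BadRequest, which it does for non-2xx responses other than 401/404 as far as I know:

from plexapi.exceptions import BadRequest

def safe_label_search(section, label):
    # Return label-search results, treating the 1.32.7 "500 on an empty
    # result set" behavior as simply "no results".
    try:
        return section.search(label=label)
    except BadRequest:
        return []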
3 Likes

I’m not an endpoint expert by any means.

To me, this looks like it’s internally bailing out and throwing a 500 for an empty-set (null) query.

If this is true then it’s a regression.

Am I reading and understanding correctly?

That would be my understanding; nothing has changed but the version of Plex. 1.29 Plex says “here’s your empty result set”, while 1.32.7 Plex throws a 500, which to my mind is a regression.

I too am having this issue and had to roll back and stay on 1.29.x :weary: Please look into this.

I’ve asked that of Engineering.

Totally separate topic —

I completed building and testing a DB-Plex/web performance test.
The complaint is that Plex/web requires 15+ seconds to load.

Using that customer’s DB,

  1. Extracted the filenames
    – 9536 movies
    – 271918 episodes
  2. Created a directory tree using the exact same structure (a sketch of this step follows below)
  3. Created a new server using all defaults
  4. Results
    – DB is 1/2 the size
    – Performance is near-instantaneous (posted a video of it)
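For anyone wanting to replicate step 2, here’s a minimal sketch, assuming a plain-text export of the extracted file paths; the paths are hypothetical, and Plex may want minimally valid media files rather than zero-byte stubs:

from pathlib import Path

TEST_ROOT = Path('/srv/test-media')        # hypothetical root of the test tree

with open('filenames.txt') as fh:          # hypothetical export of extracted file paths
    for line in fh:
        stub = TEST_ROOT / line.strip().lstrip('/')
        stub.parent.mkdir(parents=True, exist_ok=True)
        stub.touch()                       # zero-byte placeholder file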

Something is mucking up the DB badly.

While I can’t prove it (NOR AM I IMPLYING anything), SOMETHING is causing it … so what can it be?

That’s why I’m here! :roll_eyes:

LOL

2 Likes

Schema changes and db upgrades along the way?

What if you built the DB on a 1.29 Plex server and then took it through every upgrade of PMS until the current version, to see if something went awry along the way?

At the end of the day, I’d love to have my PMS performance back again, and half the DB size would be great as well.

Furthermore, Plex agents, metadata, credit detection, intro detection, thumbnail creation… all of this information is added to the DB, no?

Maybe some of that contributes to DB bloat as well?

Plex does kind of have its own Plex Meta Manager that makes headings for collections of TV or films, BUT they only show under Recommended and it’s all random. Perhaps there’s some scope to build this into a proper function that users can customise.

Like just now I noticed it’s showing ‘Fright Night Features’, a collection of all my ‘scary’ movies.

I think the reason WHY we use Plex Meta Manager is that it creates collections based on these groups of data, plus a load more:

[screenshot: PMM-built collections]

vs the obscure Plex-created recommendations section:

[screenshot: Plex Recommended row]

It was pointed out in an earlier post that there were more efficient ways to perform multi-item edits and that the specific tool mentioned was not using them. A follow-up post by the developer of Tautulli provided a link to the specific changes in python-plexapi to accommodate these changes. Was there a follow-up from the developer acknowledging this and stating that they would use the updated API?

STOP THE PRESSES! :rofl:

Just had a chat with one of the engineers…
Please do not :see_no_evil: :gun: the messenger

IN TESTING – Target for 1.32.8

:crossed_fingers:

If I understand correctly, this will fix this issue by allowing

  1. Null (empty) query results
    –and–
  2. Requests for GUIDs against a null solution set (the 500 error)

CAVEAT

I might be misunderstanding this a bit too, so PLEASE do not bite my head off if I’m not stating it correctly.

Thanks for the examples to test with.

One of the Engineers is going to test against PMM and report back.
Another is going to test with the examples given here and show results.

I am also passing on a message –

BE ADVISED

PMM keeps HAMMERING on the API and is the root cause of the slowdowns.
This is a problem with PMM which needs to be fixed.

Anyone for :beer: and :popcorn: ? :slight_smile:

EDIT: @pshanew Sorry, I didn’t see your post until after mine.

Thanks for the positive note. The dev for PMM has mentioned that it’s on the roadmap. So batch edits are the way to go.
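For reference, here’s a hedged sketch of what those batched edits look like through python-plexapi, assuming a recent release that ships the multi-edit helpers (batchMultiEdits/saveMultiEdits); the server, token, and label are hypothetical:

from plexapi.server import PlexServer

plex = PlexServer('https://plex.example.com', 'XXXXXXXX')  # hypothetical server/token
movies = plex.library.section('Movies')
items = movies.search(genre='Horror')

# Queue the edit for the whole set and save it in one round trip,
# instead of issuing one request per item.
movies.batchMultiEdits(items)
movies.addLabel('Fright Night Features')   # hypothetical label
movies.saveMultiEdits()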

I think what we are trying to point out is that we were hammering the 1.29 version equally and it was fine. Was it optimal? No. From a testing perspective, we saw a general and significant performance degradation when moving off of 1.29, with 1.32.7.* having the most degradation and issues. 1.31.2.6810 seems to be the last “decent” version post-1.29; however, it is still degraded relative to 1.29.

Thanks again for your attention on this matter.