I am trying to find a specific file (where I already know the directory and file name). Is there an easy way to locate this file's metadata ID without having to traverse the entire library?
For example, the only route I know of at the moment is:
API -> Locate which library the media is in
API -> List metadata in library
Regex the series / movie name out of the directory
Movies: basically found at this point
Series:
API -> List children in show & locate correct season
API -> List children of season & locate correct episode
The above can be quite a few API calls just to get a single file. Mapping the database in this instance is not ideal, because it is likely to be out of sync by the time I need the media.
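For reference, the chain of calls above maps onto these endpoints. This is only a sketch: the server URL is a placeholder, and a real request would also carry an X-Plex-Token.

```python
# Sketch of the endpoint chain described above. PLEX_URL is a
# placeholder -- substitute your own server address and append your
# X-Plex-Token to each request.
PLEX_URL = "http://127.0.0.1:32400"

def sections_url():
    # Step 1: locate which library (section) the media is in
    return f"{PLEX_URL}/library/sections"

def section_contents_url(section_id):
    # Step 2: list the metadata in that library
    return f"{PLEX_URL}/library/sections/{section_id}/all"

def children_url(rating_key):
    # Steps for series only: list seasons of a show, then
    # call again with a season's key to list its episodes.
    return f"{PLEX_URL}/library/metadata/{rating_key}/children"

print(children_url(1234))
```

Each of those is a separate round trip, which is exactly where the call count adds up for episodes.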
I guess the next question would be: is there a way to trigger a re-analyze without calling the API? I have a problem where I encode files and they do not play back until after they are re-analyzed (due to things like audio tracks changing), and Plex does not detect the change on its own.
I do have it enabled; however, playback will be broken until it runs, some time between 2 and 5 am.
I lazily add the raw version (which can be played back) and encode it automatically afterwards; the encoding ensures that all my libraries share the same audio/video codecs.
This process comes from wanting a consistently formatted library and from storage constraints. Ultimately I might just have to make those API calls to get the metadata ID; I was just hoping there'd be a more efficient route.
Edit: I should note that it only appears to be an issue when the audio codecs change, though I am not certain that is accurate.
If you change the file mid-day, click the button. PMS must be notified (via the API) if the media changes mid-day.
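If the goal is to force a re-analyze of one item as soon as the encode finishes, PMS exposes an analyze endpoint per item; to the best of my knowledge it is a PUT on /library/metadata/{ratingKey}/analyze. A minimal sketch, assuming that endpoint and with the URL, token, and rating key all as placeholders:

```python
import urllib.request

PLEX_URL = "http://127.0.0.1:32400"   # placeholder
TOKEN = "YOUR_X_PLEX_TOKEN"           # placeholder

def build_analyze_request(rating_key):
    # PUT /library/metadata/{ratingKey}/analyze asks PMS to re-analyze
    # just that one item (streams, codecs, duration) instead of waiting
    # for the nightly scheduled task.
    url = f"{PLEX_URL}/library/metadata/{rating_key}/analyze?X-Plex-Token={TOKEN}"
    return urllib.request.Request(url, method="PUT")

req = build_analyze_request(4711)
# urllib.request.urlopen(req)  # uncomment to actually send it
print(req.get_method(), req.full_url)
```

Calling this right after the encode completes would notify the server without waiting for the overnight window.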
Regarding the 'lazy adding': perhaps automating your curation process will alleviate all this extra work? I have all my OTA television post-processed by automation before it's placed in my media library.
I certainly have considered that; however, a lot of the library was added prior to this, so it is now a huge backlog being processed slowly. I wanted to keep everything in there while it was being processed.
@dane22 Yes sir, I know all the endpoints; it's more that the number of them you need to call just to get a single file is quite high, and I was hoping to reduce that number. Thanks!
When I used the search API it would only find top-level metadata (so it'd work for movies), but for series it would find the series and not the specific episode in a season (to my knowledge). It could still help alleviate the issue by reducing the total number of API calls, which I'd totally agree with.
If there is a way I can do something like "find series X, season Y, episode Z", that would be great; I might just be missing the specific filters.
I was also debating taking a copy of the database periodically and querying it directly, though that is a less portable solution.
That’s a bit more long-winded than what I do.
Typically:
index = episode number within the season
parentIndex = season number
I request /library/metadata/[showid]/allLeaves?X-Plex-Container-Size=5000 and that returns all the episodes for that specific show.
So: one API call to do the search and extract the show ID (key), and then a second API call to list all the available episodes for that show. Loop through the results, matching parentIndex to the required season number and index to the episode within the season:
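Sketched in Python below. The episodes list here is hand-made stand-in data shaped like the Metadata entries of a real allLeaves response (in practice you would fetch it with the two GETs just described); the field names parentIndex, index, and ratingKey are what the server actually returns.

```python
# Minimal sketch of the search-then-allLeaves matching loop. The
# `episodes` list stands in for a real
# /library/metadata/{showid}/allLeaves response.

def find_episode(episodes, season, episode):
    """Match parentIndex (season number) and index (episode number)."""
    for ep in episodes:
        if ep.get("parentIndex") == season and ep.get("index") == episode:
            return ep
    return None

# Stand-in data; ratingKey is the metadata ID being looked for.
episodes = [
    {"ratingKey": "101", "parentIndex": 1, "index": 1, "title": "Pilot"},
    {"ratingKey": "102", "parentIndex": 1, "index": 2, "title": "Part Two"},
    {"ratingKey": "201", "parentIndex": 2, "index": 1, "title": "Return"},
]

match = find_episode(episodes, season=2, episode=1)
print(match["ratingKey"])  # -> 201
```

Two requests total, regardless of how many seasons the show has.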
It’s also worth mentioning - since you say you are using regex - I’m assuming you are calling the API and getting XML back. You will find it much easier to work with JSON responses in whatever language you are using. If you aren’t currently using JSON, simply send an “Accept: application/json” header with your request.
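For example, with nothing but the standard library (the URL and token are placeholders):

```python
import urllib.request

# Asking PMS for JSON instead of XML: the Accept header does the work.
url = "http://127.0.0.1:32400/library/sections?X-Plex-Token=YOUR_TOKEN"
req = urllib.request.Request(url, headers={"Accept": "application/json"})
# body = urllib.request.urlopen(req).read()  # uncomment against a live server
print(req.get_header("Accept"))
```

The response can then go straight through json.loads rather than an XML parser and regex.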
Your script/language/process is presumably using the API to start with in order to select files to work on? Why don’t you just store the key of the item when you select it for conversion/processing? Then, once the processing is complete, you already have the key and can go straight to requesting the metadata for that specific item.
If you are selecting files to work on outside the API, you can still create a program that uses the API to grab each item’s key and writes it to a dat file per media file; then, when you work on the file, you just open the dat file to get the key and you’ll have the direct path to the metadata.
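A minimal sketch of that sidecar-file idea; the .dat naming convention follows the suggestion above, and the key value shown is just an example.

```python
import os
import tempfile

def key_path(media_path):
    # One small sidecar file per media file, holding the Plex key.
    return media_path + ".dat"

def store_key(media_path, key):
    # Written at selection time, while the key is already in hand.
    with open(key_path(media_path), "w") as f:
        f.write(key)

def load_key(media_path):
    # Read back after processing: no search calls needed.
    with open(key_path(media_path)) as f:
        return f.read().strip()

# Demo in a temp directory
d = tempfile.mkdtemp()
media = os.path.join(d, "episode.mkv")
store_key(media, "/library/metadata/4711")
print(load_key(media))  # -> /library/metadata/4711
```

After the encode finishes, the stored key gives a direct route to the item's metadata (and to triggering a refresh or analyze on it) with a single request.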
I’ll look into modifying my code to cope with paging sometime soon; it never returns anything like 5000 results anyway - at most there might be 900. Most often there are fewer than 10, depending on what I am searching for.
I haven’t noticed any adverse effects on any of my 3 Plex servers, which are all queried every hour to select a random show, then a random episode from all available episodes, after which ffmpeg extracts a single frame from a random timestamp inside that episode. The image is uploaded to Twitter at @UKNoContextTV for absolutely no reason at all. None of the Plex servers show any resource issues from doing this. I don’t doubt, however, that if there were actually 5000 results to return, performance might suffer.