I move files around a lot on my Plex server (I have hot, fast local storage and colder NAS storage). These end up in two different directory structures, but Plex handles that fine. The "problem" is that Plex can't remember that it already scanned a given file for its skip opening/ending credits markers, so it does it again.
This is compounded by the fact that Plex (for me) runs on a machine without hardware transcoding (i.e. a rackmount server). Files get added to the hot storage relatively slowly, so that doesn't have a big impact on the server, but when they get moved to the colder NAS storage it generally happens in bulk, which pegs the CPU and fans.
This just seems like a waste of effort and resources (and an annoyance to me re: the fans). Is there a good reason Plex can't save a metadata snapshot of the file (name/size, perhaps even track-level metadata), so that if a "new" file it sees in a scan matches that snapshot, it can reuse the data it already calculated? In addition, even if the file temporarily disappears from the DB (simple example: it scans dir 1 first and the file is no longer there, so it's removed from the DB, then it scans dir 2 and the file is there, so it's re-added, but the delta in time could be longer than that), this data should be kept around for some period of time (perhaps configurable by the end user) before it's purged from the DB.
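Roughly what I'm picturing, as a Python sketch (purely illustrative; `FileSnapshot`, `analyze_credits`, etc. are made-up names for this post, not anything Plex actually exposes):

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class FileSnapshot:
    """Hypothetical fingerprint: filename + size (track-level metadata could be added)."""
    name: str
    size: int

def snapshot(path: str) -> FileSnapshot:
    return FileSnapshot(name=os.path.basename(path), size=os.path.getsize(path))

def analyze_credits(path: str) -> list[tuple[float, float]]:
    """Stand-in for the expensive per-file intro/credits detection pass."""
    return []

# Maps a snapshot to previously computed (start, end) skip markers.
skip_markers: dict[FileSnapshot, list[tuple[float, float]]] = {}

def markers_for(path: str) -> list[tuple[float, float]]:
    """Reuse existing markers if the 'new' file matches a known snapshot;
    only fall back to the expensive analysis when it doesn't."""
    snap = snapshot(path)
    if snap in skip_markers:
        return skip_markers[snap]   # same name+size seen before: no re-analysis
    markers = analyze_credits(path)
    skip_markers[snap] = markers
    return markers
```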
As a simple implementation, the skip metadata would know which files reference it and the files would know their skip metadata. When a file is removed, it removes itself from the list of files that care about that metadata (and when added, adds itself to it). Only if the skip metadata has gone without any referencing files for the defined period of time would it be removed from the DB.
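A sketch of that bookkeeping (again just illustrative; the grace period value and class shape are assumptions, not how Plex stores this):

```python
import time

GRACE_PERIOD = 30 * 24 * 3600  # e.g. 30 days; in the idea above this would be user-configurable

class SkipMetadata:
    """Assumed shape of a DB entry: the markers plus the files that reference them."""
    def __init__(self, markers: list[tuple[float, float]]):
        self.markers = markers
        self.files: set[str] = set()        # paths currently referencing this entry
        self.orphaned_at: float | None = None

    def add_file(self, path: str) -> None:
        self.files.add(path)
        self.orphaned_at = None             # referenced again: cancel the purge timer

    def remove_file(self, path: str) -> None:
        self.files.discard(path)
        if not self.files:
            self.orphaned_at = time.time()  # no references left: start the grace period

    def purgeable(self) -> bool:
        """Only purge after it has sat unreferenced for the whole grace period."""
        return (not self.files
                and self.orphaned_at is not None
                and time.time() - self.orphaned_at > GRACE_PERIOD)
```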