Moving/renaming unrelated media in Linux always triggers a full Music rescan

Server Version#: 1.40.1.8227
Player Version#: (varies, mostly Roku)

I have a 58TB OpenZFS array mounted at /array.

In /array/media I have directories that contain Plex media such as:

Movies: regular movies for 15 and under (109 files)
‘Movies Mature’: movies for 15 and older (75 files)
Music: music I’ve collected over the past 28 years (146 GB of 17,153 MP3s)
Pictures: pictures I’ve taken over the past 28 or so years (39,105 files)
‘Tv Shows’: regular TV shows for 15 and under (291 files)
‘Tv Shows Mature’: TV shows for 15 and older (298 files)
Videos: family videos (497 files)

Recently I purchased a Blu-ray that had a combo of movies on it (Death Note 1 and 2). I usually copy the files straight from MakeMKV or HandBrake into another directory on the array, rename them, and then move them to the appropriate directory above. In this case I didn’t; I just put them in “Movies Mature”… which was just me being careless.

At any rate, that carelessness caused one movie not to be detected, so I created the following two directories in Movies Mature and put the appropriate mkv files in them:

Death Note The Last Name - (2006)
Death Note The Movie - (2006)

Here’s where the problem happens: I’ve noticed over the past few versions of the Linux server (I’m running the latest Ubuntu) that doing such a thing in a “movies”-tagged directory automatically starts a rescan of all my Music, which, as you can see above, is quite a number of files. This wouldn’t be so bad if it weren’t so slow. Only when the music scanner is done does it eventually get to scanning the right directory.

But that’s not all: this is particularly frustrating when I add a new piece of media to any movies / tv shows library by simply moving its directory into the appropriate media directory. Again, off goes the MP3 scanner, which is annoying if I want to watch the new item right away; instead it takes a very long time, sometimes up to an hour, for the completely unnecessary Music rescan to complete, and when I look at the logs it is literally rescanning the whole music library.

This didn’t happen three or so versions ago, so I suspect this is either a configuration error or a bug.

Please help.
Thanks,
Brian

I have a similar structure:

/vol/media/
  movies/
  music/
  tv/
  .
  .
  .

I don’t experience what you describe.

Have you:

  1. Settings - Library - Show Advanced
  2. Turn off Partial Scan & SAVE
  3. Turn on Partial Scan & SAVE
  4. Test

Hello Chuck,

Thank you for directing me to this page. My ‘Partial Scan’ setting was unchecked, so I toggled it and left it enabled.

However, I noticed something interesting in the UI:

(x) Run a partial scan when changes are detected
When changes to library folders are detected, only scan the folder that changed.

(x) Include music libraries in automatic updates
Linux systems limit the maximum number of watched directories; this may cause problems with large music libraries.

The first and second options seem to be conspiring to cause the problem.

The thing is, the second option does not really make much sense to me. I assume Plex uses inotify to watch directories, and yes, that can be problematic: on many kernels the default fs.inotify.max_user_watches is only 8,192 watched directories without fiddling.
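For reference, here’s how I’d check and raise the limit (the 262144 value is just an example):

$ sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances
$ sudo sysctl fs.inotify.max_user_watches=262144
$ echo 'fs.inotify.max_user_watches=262144' | sudo tee /etc/sysctl.d/90-inotify.conf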

However, let’s just consider the worst possible case: walking my whole mp3 collection and stat()ing every file:

$ time find . -type f | while IFS= read -r line; do ls -l "$line"; done > /dev/null
… comes out to about 25 seconds … not 45 minutes …

so it seems Plex may want to consider alternatives: simply saving the stat structure (its mtime, ctime, and atime, along with size, etc.) would be sufficient to avoid deep introspection otherwise.
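Something like this snapshot-and-diff is what I have in mind (rough sketch, assuming GNU find/sort/comm; the cache path is made up):

cache=/var/tmp/music.statcache
# record path, mtime, and size for everything under Music
find /array/media/Music -printf '%p\t%T@\t%s\n' | sort > "$cache.new"
# lines absent from the previous snapshot are new or changed entries
[ -f "$cache" ] && comm -13 "$cache" "$cache.new"
mv "$cache.new" "$cache"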

Just my $0.02

thank you very much.

Plex uses the kernel inotify service to be alerted of changes

Once triggered, it’s a classic readdir()-stat()-pushdir() loop

It will pull stat() on each entry in the dir. (the unavoidable part)
if the change is a subdir – it’s going diving
if the change is a file – it will process it

(granted, there’s more involved than what I write here, but I hope you understand)
– The “more involved” is what usually takes a whole lot of time, depending on what changes -OR- what it finds needs to be done
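In shell terms the loop looks roughly like this (illustrative only, not actual PMS code):

scan_dir() {
    local dir="$1"
    for entry in "$dir"/*; do
        stat --format='%n %Y %Z' "$entry" > /dev/null    # the unavoidable stat()
        if [ -d "$entry" ]; then
            scan_dir "$entry"                            # a subdir: it's going diving
        else
            echo "process: $entry"                       # a file: it gets processed
        fi
    done
}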

So I returned to my place after doing something and, wouldn’t you know it, the Music directory is being re-scanned again. I didn’t even load anything onto the server. It was just a periodic rescan.

I do not know precisely why it’s doing this, and always for Music but not for the other libraries; maybe those are just too small in comparison. idk.

So let’s see how my inotify is stacking up …

root@icarus:~/inotify-info# ./_release/inotify-info 
------------------------------------------------------------------------------
INotify Limits:
  max_queued_events    16,384
  max_user_instances   128
  max_user_watches     65,536
------------------------------------------------------------------------------
       Pid Uid        App                                Watches  Instances
    199126 999        Plex Media Server                    4,420          1

So, I used the ‘stat’ example program in the man page to look at my directories. One directory is, for reasons I don’t know, empty, so that’s an interesting test case.

Zoe Tiganouria
ID of containing device:  [0,27]
File type:                directory
I-node number:            1045
Mode:                     40755 (octal)
Link count:               2
Ownership:                UID=1000   GID=1002
Preferred I/O block size: 512 bytes
File size:                2 bytes
Blocks allocated:         1
Last status change:       Mon Feb 26 02:43:43 2024
Last file access:         Sat Mar 23 15:47:59 2024
Last file modification:   Thu Nov 14 20:48:31 2019

The three “Last …” times above are printed by:

printf("Last status change:       %s", ctime(&sb.st_ctime));
printf("Last file access:         %s", ctime(&sb.st_atime));
printf("Last file modification:   %s", ctime(&sb.st_mtime));

Since ctime hasn’t changed in a while, nor mtime, only atime has changed recently enough to trigger this sort of rescan. Plus there aren’t even any files in the directory anyway, so why is there even a change to atime (unless inotify itself causes that)?
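It’s easy to confirm that merely listing a directory can bump its atime (with the default relatime, only when the stored atime is older than the mtime/ctime or more than a day old):

$ stat -c 'before: %x' 'Zoe Tiganouria'
$ ls 'Zoe Tiganouria' > /dev/null
$ stat -c 'after:  %x' 'Zoe Tiganouria'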

I mean, I hate to ask, but is the code saving atime to the database?

  1. Plex wouldn’t even be reasonable if it triggered on atime. That makes no sense.
    (performing an ls -la on a directory causes a media rescan? lol lol :wink: )

  2. Plex only asserts notify event registration via user_watches.

  3. PMS is sensitive to chown and chmod activity (common with media ripping/downloading software).
    – Upon detection of either chown/chmod on a watched directory, PMS will check each changed item (sound familiar?) – see the sketch just below.
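To see that class of event for yourself, inotify-tools will show a plain chmod raising an ATTRIB event (paths illustrative, not your actual layout):

inotifywait -m -e attrib /array/media/Music &
chmod g+r '/array/media/Music/Zoe Tiganouria'
# inotifywait prints: /array/media/Music/ ATTRIB,ISDIR Zoe Tiganouria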

Q: Do you have a cron task which fires every X minutes ?

I unfortunately don’t know anything about ZFS other than that I stay away from it.
My media, with a mirror backup, is not so mission-critical as to require ZFS imho; I’m running XFS on mirrored RAID 6.

I just tested what you’re seeing and cannot replicate it, either over the LAN or on the main server. Might this be ZFS-related event handling? (XFS isn’t CoW like ZFS.)

Q: Stupid question part (which I’ve done myself … hence my asking). Have you checked all the directory references to make sure nothing is pointing to the wrong place and that no symlinks are hidden?

No cron task with chown.

One thing that might be causing problems is that I use CrashPlan as my cloud backup solution, which is always running; from what I can tell, it doesn’t change the stat information at all.

ZFS has its ups and downs. Overall I rather like it as a filesystem. The various jobs it runs “in the background” are a tad annoying, though. It has good redundancy semantics, which is important as I have 5 drives and will probably expand to 10 within the year.

This is one machine.
Ubuntu 22, XFS, RAID 6 for the media

root@apollo:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs            26G  4.5M   26G   1% /run
/dev/md0p1      464G   49G  415G  11% /
tmpfs           126G  4.0K  126G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           192G   42M  192G   1% /ramdisk
/dev/sda1       1.1G  6.1M  1.1G   1% /boot/efi
/dev/md4        3.7T  1.6T  2.1T  44% /vol/meta
/dev/md3         11T  8.1T  2.9T  75% /vol/audio
/dev/md2        164T  126T   39T  77% /vol/series
/dev/md1        164T   69T   95T  43% /vol/movies
tmpfs            26G   72K   26G   1% /run/user/127
tmpfs            26G   56K   26G   1% /run/user/1000
root@apollo:~# 

md0 = RAID10 w/ parallel boot
md1 = RAID6
md2 = RAID6
md3 = RAID5 (music SSD)
md4 = RAID0 (PMS metadata)

No matter what gets moved around, nothing triggers a music rescan.

Wow, XFS, that brings back memories.

I use OpenZFS. This is how I run things.

brian@icarus:~$ df -k
Filesystem       1K-blocks        Used   Available Use% Mounted on
tmpfs              3262772        4184     3258588   1% /run
efivarfs               192         134          54  72% /sys/firmware/efi/efivars
/dev/nvme0n1p2   982862268    30015264   902846672   4% /
tmpfs             16313852          28    16313824   1% /dev/shm
tmpfs                 5120           4        5116   1% /run/lock
/dev/nvme0n1p1      523248        6220      517028   2% /boot/efi
array          62265502592 17187086592 45078416000  28% /array
tmpfs              3262768          96     3262672   1% /run/user/1000
brian@icarus:~$ mount | grep array
array on /array type zfs (rw,xattr,noacl,casesensitive)

The zpool ‘array’ is shown below. Yes, I’m cheap; I’m using a USB cage. It’s fast enough for sequential access IMHO. The cache device is a 1TB NVMe, which is useful for medium-sized transfers.

brian@icarus:~$ zpool status array
  pool: array
 state: ONLINE
  scan: scrub canceled on Sun Mar 10 12:14:41 2024
config:

        NAME                                                 STATE     READ WRITE CKSUM
        array                                                ONLINE       0     0     0
          raidz1-0                                           ONLINE       0     0     0
            usb-JMicron_Generic_DISK00_0123456789ABCDEF-0:0  ONLINE       0     0     0
            usb-JMicron_Generic_DISK01_0123456789ABCDEF-0:1  ONLINE       0     0     0
            sdd1                                             ONLINE       0     0     0
            sde                                              ONLINE       0     0     0
            sdf                                              ONLINE       0     0     0
        cache
          sda                                                ONLINE       0     0     0

errors: No known data errors

I think you’ll agree?

  1. HDDs and ZFS are (painfully) slow together because of ZFS’s CoW and HDD seek latency.
  2. SSDs, on the other hand, love ZFS/CoW for wear leveling, and they have near-zero seek latency.

I’ve found that by taking XFS and asserting noatime at mount time, the OS works VERY well when there’s enough RAM, because there’s no swapping and the majority of Linux is read-only. My NUC8 workstation is set up this way (NVMe SSD for the OS, and it’s a screamer).
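Asserting noatime is just a mount option in fstab, e.g. (device and mountpoint here are illustrative):

/dev/md3   /vol/audio   xfs   defaults,noatime   0 0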

I have LXD using ZFS-SSD, XFS-SSD, and XFS-HDD. In all three cases, I don’t see any performance hits other than the 250 MB/sec limit of the HDD vs 535 MB/sec limit of the SATA SSD. With those small read/write payloads, it’s not worth the noise.

Here is the machine (which hasn’t been tuned yet) reading the NVMe SSD array from the PCIe 8x carrier card. (QM2-4P-384)

root@apollo:/vol/meta/plex/Plex Media Server/Plug-in Support/Databases# dd if=/dev/md4 of=/dev/null bs=4M count=1000
1000+0 records in
1000+0 records out
4194304000 bytes (4.2 GB, 3.9 GiB) copied, 3.01422 s, 1.4 GB/s
root@apollo:/vol/meta/plex/Plex Media Server/Plug-in Support/Databases#  dd if=/dev/nvme0n1 of=/dev/null bs=4M count=2000
2000+0 records in
2000+0 records out
8388608000 bytes (8.4 GB, 7.8 GiB) copied, 3.51343 s, 2.4 GB/s
root@apollo:/vol/meta/plex/Plex Media Server/Plug-in Support/Databases# 

Hey Chuck,

I think as per #1 you’re mostly talking about the write case. Are you talking about the Plex Server with its database and metadata, or about copying (really, moving) media into the media libraries? In either case this shouldn’t be an issue for me: the Plex Server runs on a fast NVMe, and to add things to my media library directories I buy media, rip it on my laptop (mostly with MakeMKV), copy it to my server over NFS into an “Incoming” directory, go through the hullabaloo of naming things right (and no, FileBot doesn’t help), and then share my collection digitally to myself at home and to people over my ISP.

From my perspective it is the read case of the media itself that matters most, and up until I read your message today I rather assumed it was adequate, chalking my UHD 4K problems up to on-the-fly transcoding, which may indeed be the case. It would be nice if Plex told you whether it ran out of CPU, memory, or disc I/O rather than just saying “the server isn’t powerful enough”. At the time I chalked all this up to a CPU bottleneck, but today that answer didn’t sit well with me, because I recall the Plex server only using 4 cores, whereas I have an i7-12650H with 10 physical cores (per spec) and 16 presented to Linux. So is there some sort of tunable to make it use more cores?

Regarding read performance: my impression of the OpenZFS array’s read speed was based on earlier experience. Since then I’ve had a drive failure, replacement, and resilvering, and I never re-ran the tests. I also note your dd tests read the raw RAID device, which (so far as I know) has no equivalent in /dev under OpenZFS; that said, “raw” RAID testing is not the same thing as testing through the filesystem anyway.

So I tried different variations of the dd test with media on my filesystem and was left very surprised: it is simply not as fast or as stable as I would like, at all. It smells like I need to tend to things and check whether any specific drive in the array is underperforming the others; if so, there may be another drive failure. Fun.

So I used the same dd test with various 4K UHD mkv’s on my filesystem with a 4MB input block size and count=1000, and the results varied wildly: 29 MB/s, 59 MB/s, 250 MB/s, 657 MB/s, and with caching effects up to 2.5 GB/s. Something is definitely wrong, so I have some troubleshooting to do.
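To hunt for a lagging drive I figure I’ll loop over the members with O_DIRECT reads so the page cache stays out of the numbers (device names lifted from zpool status; adjust as needed):

for d in sdd sde sdf; do
    echo "== /dev/$d =="
    dd if="/dev/$d" of=/dev/null bs=4M count=500 iflag=direct 2>&1 | tail -1
done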

Further, from what I remember reading about OpenZFS RAID, it reads blocks from the filesystem across multiple discs essentially in parallel. At the same time my friend William says I have “one of those cheap USB cages”: even though it supports fairly high bus speeds, and I can see all the drives lighting up at the same time (so there is some degree of parallelism), it’s handled by the older bulk-transfer usb-storage driver rather than UAS, so performance is not as good as it could be. Choices, choices.

As fun and interesting as all that is, it seems we’ve rather drifted from the main problem… although perhaps not too far.

Your comment about noatime made me think a little bit, for that option is supported in OpenZFS as well. Further, discussions of noatime as regards XFS indicate that out-of-order writes can lead to excessive journaling updates, and Red Hat even suggests (in older documentation) disabling atime updates altogether unless your use case really needs them.

Now, in my case, all my reads and writes to the array are (so far as I know) sequential. Does Plex do any real random access in the media libraries? Also, does Plex autotune the read block size, or is that configurable?

At any rate, OpenZFS has many little daemons and processes that run amok: scrub, resilver, and so forth. Given that XFS random writes led to inefficient journal use when atime was updated all over the place, I wonder whether these OpenZFS background tasks might update atime while doing their work. To answer this I just checked out the OpenZFS source code and will spend some time getting to know it. But I really think you hit a nail on the head with the noatime option; it might be contributing to the excessive rescanning. I’ll update you after I’ve read some of the code.
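For the record, OpenZFS exposes atime as a dataset property rather than a plain mount option, so it’s easy to inspect and flip (‘array’ is my pool):

$ zfs get atime,relatime array
$ sudo zfs set atime=off array    # inherited by child datasets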

All the best,
Brian

Brian,

I did some digging and a whole lot of testing.

  1. Plex doesn’t trigger on READ events.
    It does trigger on -
    IN_CLOSE_WRITE | IN_CREATE | IN_MODIFY | IN_DELETE | IN_MOVE | IN_ATTRIB
    (a userspace equivalent is sketched after this list)

  2. noatime - Turned it on and off again. Could not cause it to raise a Notify event.

  3. Given:
    – two directories in a library section ( A & B )
    – “Partial Scan” enabled.
    – Move or Copy something to B

  4. Actions:
    – PMS sees the B activity
    – Starts a rescan of the library section
    – “A” has no activity – Not scanned
    – “B” was modified
    – … Those directories under B which were modified since the last scan ARE SCANNED.
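If you want to watch that same event mask from userspace, inotify-tools approximates it (recursive watch; path illustrative):

inotifywait -m -r -e close_write,create,modify,delete,move,attrib /vol/media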

==================

Your questions:

  1. We always “think” our media files are sequential & contiguous. HAHAHA
    Yes, they’re in groups, but never contiguous unless you know the file size in advance AND pre-allocate the file space as such.

  2. Does Plex do “Random” ? Yes – in so much as how you added media to your library is random.

  3. “parallelism” — No. Those are pulse-stretchers on the LEDs so you can see them; the scan rate is so fast that our eyes think they are simultaneously illuminated.

  4. Music scanning - Retest – EXT4
    – Created a whole new music library section (here’s the last item added)

Mar 25, 2024 12:55:07.500 [140500998904632] DEBUG - Local media already exists (metadata://posters/tv.plex.agents.music_ad7b38de0cce75e35acf11e735208fe2efeeb813)
Mar 25, 2024 12:55:07.500 [140500998904632] DEBUG - Doing expensive tags write for 'Roger Williams' because something changed.
Mar 25, 2024 12:55:07.500 [140500998904632] DEBUG - There was a change for metadata item 32025 (Roger Williams), saving.
Mar 25, 2024 12:55:07.500 [140500998904632] DEBUG - Updating metadata item (save) (Roger Williams) with ID 32025
Mar 25, 2024 12:55:07.501 [140500998904632] DEBUG - Done with metadata update for 32025
Mar 25, 2024 12:55:07.502 [140500998904632] DEBUG - Activity: updated activity 8f01eb02-8610-4c75-a1a6-18bd346997e9 - completed -1.0% - Updating Metadata
Mar 25, 2024 12:55:07.504 [140500998904632] DEBUG - [JobRunner] Job running: FFMPEG_EXTERNAL_LIBS='/share/CACHEDEV1_DATA/.qpkg/PlexMediaServer/Library/Plex\ Media\ Server/Codecs/ad47460-4673-linux-x86_64/' X_PLEX_TOKEN=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx "/share/CACHEDEV1_DATA/.qpkg/PlexMediaServer/Plex Media Scanner" --match --type 8 --log-file-suffix " Matcher" --item 32026 --match-tag-mode=exclusive
Mar 25, 2024 12:55:07.504 [140500998904632] DEBUG - [JobRunner] Jobs: Starting child process with pid 11458
Mar 25, 2024 12:55:07.506 [140501130181432] DEBUG - Request: [192.168.0.13:37348 (Subnet)] GET /media/providers (8 live) #c383e GZIP Signed-in Token (ChuckPA) (Chrome)
Mar 25, 2024 12:55:07.507 [140501233367864] DEBUG - Completed: [192.168.0.13:37348] 200 GET /media/providers (8 live) #c383e GZIP 1ms 5896 bytes (pipelined: 1)
Mar 25, 2024 12:55:07.513 [140501118024504] DEBUG - Request: [192.168.0.13:35900 (Subnet)] GET /media/providers (8 live) #c4336 TLS GZIP Signed-in Token (ChuckPA) (Chrome)
Mar 25, 2024 12:55:07.514 [140501231258424] DEBUG - Completed: [192.168.0.13:35900] 200 GET /media/providers (8 live) #c4336 TLS GZIP 1ms 6027 bytes (pipelined: 6)
Mar 25, 2024 12:55:07.575 [140501238258488] DEBUG - Jobs: '/share/CACHEDEV1_DATA/.qpkg/PlexMediaServer/Plex Media Scanner' exit code for process 11458 is 0 (success)
Mar 25, 2024 12:55:07.577 [140500998904632] DEBUG - Loaded metadata for Somewhere In Time [MCA] (ID 32026) in 1ms
Mar 25, 2024 12:55:07.577 [140500998904632] DEBUG - Done with metadata update for 32026
Mar 25, 2024 12:55:07.579 [140500998904632] DEBUG - Loaded metadata for Theme From "Somewhere In Time" (ID 32027) in 1ms
Mar 25, 2024 12:55:07.579 [140500998904632] DEBUG - Done with metadata update for 32027

– Now add TV content “The Continental”

Mar 25, 2024 12:58:15.891 [139679858740024] DEBUG - [HttpClient/HCl#89] HTTP/2.0 (0.5s) 200 response from GET https://plex.tv/api/v2/server/users/services?auth_token=xxxxxxxxxxxxxxxxxxxx
Mar 25, 2024 12:58:15.894 [139679872604984] DEBUG - [ViewStateSync] No user tokens found; stopping
Mar 25, 2024 12:58:32.740 [139679800511288] INFO - Library section 2 (Televsion) will be updated because of a change in "/share/glockner/media/tv2/The Continental (2023)"
Mar 25, 2024 12:58:32.741 [139679800511288] DEBUG - [Notify] New directory: "/share/glockner/media/tv2/The Continental (2023)", adding recursively.
Mar 25, 2024 12:58:32.741 [139679800511288] DEBUG - [Notify] Now watching "/share/glockner/media/tv2/The Continental (2023)"
Mar 25, 2024 12:58:32.760 [139679800511288] INFO - Library section 2 (Televsion) will be updated because of a change in "/share/glockner/media/tv2/The Continental (2023)/Season 01"
Mar 25, 2024 12:58:32.760 [139679800511288] DEBUG - [Notify] New directory: "/share/glockner/media/tv2/The Continental (2023)/Season 01", adding recursively.
Mar 25, 2024 12:58:32.760 [139679800511288] DEBUG - [Notify] Now watching "/share/glockner/media/tv2/The Continental (2023)/Season 01"
Mar 25, 2024 12:58:32.770 [139679800511288] INFO - Library section 2 (Televsion) will be updated because of a change in "/share/glockner/media/tv2/The Continental (2023)/Season 01/The Continental (2023) - S01E01 - Night 1 - Brothers in Arms.mkv"
Mar 25, 2024 12:58:37.741 [139679821605688] DEBUG - Library Updater: Requested that section 2 be updated, force=0, deep=0, subdir=/share/glockner/media/tv2 iTunes=0
Mar 25, 2024 12:58:37.741 [139679821605688] DEBUG - Library Updater: Queueing section 2
Mar 25, 2024 12:58:37.742 [139679872604984] DEBUG - Activity: registered new activity 8c2bf391-dcc6-4267-88bc-30b1347e9dc1 - "Scanning"
Mar 25, 2024 12:58:37.742 [139679872604984] DEBUG - Scanning section 2
Mar 25, 2024 12:58:37.742 [139679872604984] DEBUG - Activity: updated activity 8c2bf391-dcc6-4267-88bc-30b1347e9dc1 - completed 0.0% - Scanning Televsion
Mar 25, 2024 12:58:37.743 [139679872604984] DEBUG - It took 0.000000 ms to retrieve 143 items.
Mar 25, 2024 12:58:37.761 [139679821605688] DEBUG - Library Updater: Requested that section 2 be updated, force=0, deep=0, subdir=/share/glockner/media/tv2/The Continental (2023) iTunes=0
Mar 25, 2024 12:58:37.761 [139679821605688] DEBUG - Library Updater: Queueing section 2
Mar 25, 2024 12:58:37.766 [139679872604984] DEBUG - It took 0.000000 ms to retrieve 143 items.
Mar 25, 2024 12:58:37.816 [139679872604984] DEBUG - Asked to cull library timeline before time 1711384717, we're starting with 0 items.
Mar 25, 2024 12:58:37.816 [139679872604984] DEBUG - Library timeline cull complete with 0 items left.
Mar 25, 2024 12:58:37.816 [139679872604984] DEBUG - Scanning Televsion using en-US(Plex TV Series) with 5182 current media items and 1 section locations in the database.
Mar 25, 2024 12:58:37.816 [139679872604984] DEBUG - Performing a scan with 'Plex TV Series' (language: en-US virtual: 0).
Mar 25, 2024 12:58:37.816 [139679872604984] DEBUG -   * Scanning /share/glockner/media/tv2
Mar 25, 2024 12:58:37.816 [139679872604984] DEBUG - Scanner: Processing directory /share/glockner/media/tv2 (parent: no)
Mar 25, 2024 12:58:37.819 [139679872604984] DEBUG - Skipping over directory '', as nothing has changed; removing 0 media items from map.
Mar 25, 2024 12:58:37.822 [139679872604984] DEBUG - Scanner: Processing directory /share/glockner/media/tv2/The Continental (2023) (parent: yes)
Mar 25, 2024 12:58:37.823 [139679872604984] DEBUG - Scanner: Processing directory /share/glockner/media/tv2/The Continental (2023)/Season 01 (parent: yes)
Mar 25, 2024 12:58:37.824 [139679872604984] DEBUG - Directory had 1 files, database had 0 files, can't skip.
Mar 25, 2024 12:58:37.824 [139679845722936] DEBUG - Native Scanner: Executed Local Metadata stage in 0.00 sec.
Mar 25, 2024 12:58:37.825 [139679872604984] DEBUG - Activity: updated activity 8c2bf391-dcc6-4267-88bc-30b1347e9dc1 - completed 1.2% - Scanning Televsion
Mar 25, 2024 12:58:37.825 [139679872604984] DEBUG - Scanner: Processing directory /share/glockner/media/tv2/The Alaska Triangle (parent: yes)
Mar 25, 2024 12:58:37.825 [139679845722936] DEBUG - Checking descendants of The Continental (2023)
Mar 25, 2024 12:58:37.825 [139679845722936] DEBUG -  -> Searching down into The Continental (2023)/Season 01
Mar 25, 2024 12:58:37.825 [139679845722936] DEBUG - Checking descendants of The Continental (2023)/Season 01
Mar 25, 2024 12:58:37.825 [139679872604984] DEBUG - Skipping over directory 'The Alaska Triangle', as nothing has changed; removing 0 media items from map.
Mar 25, 2024 12:58:37.825 [139679872604984] DEBUG - Scanner: Processing directory /share/glockner/media/tv2/The Alaska Triangle/Season 01 (parent: yes)
Mar 25, 2024 12:58:37.826 [139679872604984] DEBUG - Skipping over directory 'The Alaska Triangle/Season 01', as nothing has changed; removing 10 media items from map.
Mar 25, 2024 12:58:37.826 [139679872604984] DEBUG - Activity: updated activity 8c2bf391-dcc6-4267-88bc-30b1347e9dc1 - completed 1.8% - Scanning Televsion
Mar 25, 2024 12:58:37.826 [139679872604984] DEBUG - Scanner: Processing directory /share/glockner/media/tv2/The Alaska Triangle/Season 02 (parent: yes)
Mar 25, 2024 12:58:37.827 [139679872604984] DEBUG - Skipping over directory 'The Alaska Triangle/Season 02', as nothing has changed; removing 10 media items from map.
Mar 25, 2024 12:58:37.827 [139679872604984] DEBUG - Activity: updated activity 8c2bf391-dcc6-4267-88bc-30b1347e9dc1 - completed 2.4% - Scanning Televsion
Mar 25, 2024 12:58:37.827 [139679872604984] DEBUG - Scanner: Processing directory /share/glockner/media/tv2/Top Gear (parent: yes)
Mar 25, 2024 12:58:37.828 [139679872604984] DEBUG - Skipping over directory 'Top Gear', as nothing has changed; removing 0 media items from map.
Mar 25, 2024 12:58:37.830 [139679872604984] DEBUG - Scanner: Processing directory /share/glockner/media/tv2/Top Gear/Season 07 (parent: yes)
Mar 25, 2024 12:58:37.831 [139679872604984] DEBUG - Skipping over directory 'Top Gear/Season 07', as nothing has changed; removing 7 media items from map.
Mar 25, 2024 12:58:37.831 [139679872604984] DEBUG - Activity: updated activity 8c2bf391-dcc6-4267-88bc-30b1347e9dc1 - completed 2.4% - Scanning Televsion
Mar 25, 2024 12:58:37.831 [139679872604984] DEBUG - Scanner: Processing directory /share/glockner/media/tv2/Top Gear/Season 01 (parent: yes)
Mar 25, 2024 12:58:37.831 [139679872604984] DEBUG - Skipping over directory 'Top Gear/Season 01', as nothing has changed; removing 10 media items from map.
Mar 25, 2024 12:58:37.832 [139679872604984] DEBUG - Activity: updated activity 8c2bf391-dcc6-4267-88bc-30b1347e9dc1 - completed 2.4% - Scanning Televsion
Mar 25, 2024 12:58:37.832 [139679872604984] DEBUG - Scanner: Processing directory /share/glockner/media/tv2/Top Gear/Season 03 (parent: yes)
Mar 25, 2024 12:58:37.832 [139679872604984] DEBUG - Skipping over directory 'Top Gear/Season 03', as nothing has changed; removing 9 media items from map.
Mar 25, 2024 12:58:37.832 [139679872604984] DEBUG - Activity: updated activity 8c2bf391-dcc6-4267-88bc-30b1347e9dc1 - completed 2.5% - Scanning Televsion
Mar 25, 2024 12:58:37.832 [139679872604984] DEBUG - Scanner: Processing directory /share/glockner/media/tv2/Top Gear/Season 02 (parent: yes)
Mar 25, 2024 12:58:37.833 [139679872604984] DEBUG - Skipping over directory 'Top Gear/Season 02', as nothing has changed; removing 10 media items from map.
Mar 25, 2024 12:58:37.833 [139679872604984] DEBUG - Activity: updated activity 8c2bf391-dcc6-4267-88bc-30b1347e9dc1 - completed 2.5% - Scanning Televsion
Mar 25, 2024 12:58:37.833 [139679872604984] DEBUG - Scanner: Processing directory /share/glockner/media/tv2/Top Gear/Season 04 (parent: yes)
Mar 25, 2024 12:58:37.834 [139679872604984] DEBUG - Skipping over directory 'Top Gear/Season 04', as nothing has changed; removing 10 media items from map.
Mar 25, 2024 12:58:37.834 [139679872604984] DEBUG - Activity: updated activity 8c2bf391-dcc6-4267-88bc-30b1347e9dc1 - completed 2.5% - Scanning Televsion
Mar 25, 2024 12:58:37.834 [139679872604984] DEBUG - Scanner: Processing directory /share/glockner/media/tv2/Top Gear/Season 05 (parent: yes)
Mar 25, 2024 12:58:37.835 [139679872604984] DEBUG - Skipping over directory 'Top Gear/Season 05', as nothing has changed; removing 9 media items from map.
Mar 25, 2024 12:58:37.835 [139679872604984] DEBUG - Activity: updated activity 8c2bf391-dcc6-4267-88bc-30b1347e9dc1 - completed 2.6% - Scanning Televsion
Mar 25, 2024 12:58:37.835 [139679872604984] DEBUG - Scanner: Processing directory /share/glockner/media/tv2/Top Gear/Season 13 (parent: yes)

Mar 25, 2024 12:58:37.741 [139679821605688] DEBUG - Library Updater: Requested that section 2 be updated, force=0, deep=0, subdir=/share/glockner/media/tv2 iTunes=0

Finishing the scan

Mar 25, 2024 12:59:00.652 [139679872604984] DEBUG - Collection: Refreshed visibility in section 2 in 0ms.
Mar 25, 2024 12:59:00.652 [139679872604984] DEBUG - Activity: updated activity 879b267d-e029-411d-9ada-e3e61144a1bb - completed 100.0% - Scanning Televsion
Mar 25, 2024 12:59:00.652 [139679872604984] DEBUG - Refreshing section 2 of type: 2
Mar 25, 2024 12:59:00.676 [139679766494008] DEBUG - Request: [192.168.0.13:49946 (Subnet)] GET /media/providers (6 live) #130f GZIP Signed-in Token (ChuckPA) (Chrome)
Mar 25, 2024 12:59:00.677 [139679883926328] DEBUG - Completed: [192.168.0.13:49946] 200 GET /media/providers (6 live) #130f GZIP 1ms 5902 bytes (pipelined: 3)
Mar 25, 2024 12:59:00.685 [139679766494008] DEBUG - Request: [192.168.0.13:49930 (Subnet)] GET /media/providers (6 live) #2444 TLS GZIP Signed-in Token (ChuckPA) (Chrome)
Mar 25, 2024 12:59:00.686 [139679886035768] DEBUG - Completed: [192.168.0.13:49930] 200 GET /media/providers (6 live) #2444 TLS GZIP 1ms 6033 bytes (pipelined: 6)
Mar 25, 2024 12:59:00.691 [139679766494008] DEBUG - Request: [192.168.0.13:59422 (Subnet)] GET /media/providers (6 live) #1313 TLS GZIP Signed-in Token (ChuckPA) (Chrome)
Mar 25, 2024 12:59:00.692 [139679883926328] DEBUG - Completed: [192.168.0.13:59422] 200 GET /media/providers (6 live) #1313 TLS GZIP 1ms 6033 bytes (pipelined: 3)
Mar 25, 2024 12:59:00.714 [139679808949048] DEBUG - Refreshing 0 IDs.
Mar 25, 2024 12:59:00.714 [139679872604984] DEBUG - Activity: registered new activity bbf3512f-b9e1-4dfa-b7eb-d2f5643f46e0 - "Processing subscriptions"
Mar 25, 2024 12:59:00.714 [139679872604984] DEBUG - Activity: Ended activity 879b267d-e029-411d-9ada-e3e61144a1bb.
Mar 25, 2024 12:59:00.714 [139679806839608] DEBUG - Grabber: Cleaning up orphaned grabs.
Mar 25, 2024 12:59:00.714 [139679806839608] DEBUG - Grabber: Cleaned up 0 decrepit directories in 0.0 sec.
Mar 25, 2024 12:59:00.715 [139679806839608] DEBUG - Activity: registered new activity cce4b5a9-c74e-4a12-be18-a3f71dfb018f - "Processing subscriptions"
Mar 25, 2024 12:59:00.715 [139679806839608] DEBUG - Subscription: Scheduling subscriptions.
Mar 25, 2024 12:59:00.715 [139679806839608] DEBUG - Subscription: Initialized 0 schedulers.
Mar 25, 2024 12:59:00.715 [139679806839608] DEBUG - Subscription: Starting with 0 grabs, scheduled 0 active ones preemptively.
Mar 25, 2024 12:59:00.715 [139679806839608] DEBUG - Subscription: Scheduled all subscriptions in 0 ms.
Mar 25, 2024 12:59:00.715 [139679806839608] DEBUG - Activity: Ended activity cce4b5a9-c74e-4a12-be18-a3f71dfb018f.
Mar 25, 2024 12:59:00.715 [139679806839608] DEBUG - Subscription: There are 0 active grabs at the end.
Mar 25, 2024 12:59:00.715 [139679806839608] DEBUG - Activity: Ended activity bbf3512f-b9e1-4dfa-b7eb-d2f5643f46e0.
Mar 25, 2024 12:59:00.715 [139679806839608] DEBUG - Subscription: Refreshed all content in 1 ms.
Mar 25, 2024 12:59:00.727 [139679766494008] DEBUG - Request: [192.168.0.13:49930 (Subnet)] GET /library/sections (6 live) #2460 TLS GZIP Signed-in Token (ChuckPA) (Chrome)
Mar 25, 2024 12:59:00.727 [139679883926328] DEBUG - Completed: [192.168.0.13:49930] 200 GET /library/sections (6 live) #2460 TLS GZIP 0ms 1069 bytes (pipelined: 7)
Mar 25, 2024 12:59:00.780 [139679766494008] DEBUG - Request: [192.168.0.13:49946 (Subnet)] GET /media/providers (6 live) #245e GZIP Signed-in Token (ChuckPA) (Chrome)
Mar 25, 2024 12:59:00.782 [139679886035768] DEBUG - Completed: [192.168.0.13:49946] 200 GET /media/providers (6 live) #245e GZIP 1ms 5896 bytes (pipelined: 4)
Mar 25, 2024 12:59:00.784 [139679766494008] DEBUG - Request: [192.168.0.13:49930 (Subnet)] GET /media/providers (6 live) #247b TLS GZIP Signed-in Token (ChuckPA) (Chrome)
Mar 25, 2024 12:59:00.785 [139679883926328] DEBUG - Completed: [192.168.0.13:49930] 200 GET /media/providers (6 live) #247b TLS GZIP 1ms 6027 bytes (pipelined: 8)
Mar 25, 2024 12:59:00.788 [139679766494008] DEBUG - Request: [192.168.0.13:59422 (Subnet)] GET /media/providers (6 live) #2462 TLS GZIP Signed-in Token (ChuckPA) (Chrome)
Mar 25, 2024 12:59:00.789 [139679886035768] DEBUG - Completed: [192.168.0.13:59422] 200 GET /media/providers (6 live) #2462 TLS GZIP 1ms 6027 bytes (pipelined: 4)
Mar 25, 2024 12:59:20.476 [139679766494008] DEBUG - Request: [192.168.0.13:49930 (Subnet)] GET /:/prefs (6 live) #247f TLS GZIP Signed-in Token (ChuckPA) (Chrome)
Mar 25, 2024 12:59:20.479 [139679886035768] DEBUG - Completed: [192.168.0.13:49930] 200 GET /:/prefs (8 live) #247f TLS GZIP 3ms 9137 bytes (pipelined: 9)
Mar 25, 2024 12:59:20.480 [139679766494008] DEBUG - Request: [192.168.0.13:54020 (Subnet)] GET /accounts/1 (8 live) #1291 TLS GZIP Signed-in Token (ChuckPA) (Chrome)
Mar 25, 2024 12:59:20.480 [139679854521144] DEBUG - Request: [192.168.0.13:54032 (Subnet)] GET /myplex/account (8 live) #2482 TLS GZIP Signed-in Token (ChuckPA) (Chrome)
Mar 25, 2024 12:59:20.480 [139679886035768] DEBUG - Completed: [192.168.0.13:54020] 200 GET /accounts/1 (8 live) #1291 TLS GZIP 0ms 651 bytes (pipelined: 1)
Mar 25, 2024 12:59:20.480 [139679886035768] DEBUG - Completed: [192.168.0.13:54032] 200 GET /myplex/account (8 live) #2482 TLS GZIP 0ms 4742 bytes (pipelined: 1)
Mar 25, 2024 12:59:22.095 [139679766494008] DEBUG - Request: [192.168.0.13:54038 (Subnet)] GET /diagnostics/logs (7 live) #2483 TLS GZIP Signed-in Token (ChuckPA) (Chrome)
Mar 25, 2024 12:59:22.095 [139679766494008] DEBUG - [Req#2483] Diagnostics: Building logfile zip

No Music was scanned.

At this point:

  1. PMS doesn’t arbitrarily scan Music when updating other content (as demonstrated here)
  2. Occam’s Razor - There’s a symbolic link or typo somewhere
  3. There is filesystem corruption or an implementation bug.

Chuck,

I think you misunderstood. I said that this is the write case, not the read case: I write, via mv, one or multiple files from outside the library into A, yet I see it scan my music in B.

Now, along the lines of atime, I’m thinking there may be some process (such as a scrub or resilver) that updates atime, but as I said, that would require more reading of the code.

I do note that atime on several files in my music directory is much newer than expected. This could possibly (referring to an older post) be the result of my backup software, but I fail to see how that explains A → B in this case.

So, I mean, it’s a repeatable condition, and it always happens, so who knows. I’ll fiddle around with those inotify flags and see if I can reproduce it on my array.

Thanks as always,
Brian

You said Writing. I demonstrated what happens when writing.

I COPIED (writing) media into the directory

I’m sorry but I cannot reproduce it.

Our lead developer uses ZFS. If PMS had a problem with ZFS in general, he would know about it.

If QNAP QuTS had a problem with ZFS, We’d hear from everyone using QuTS.
