Server Version#: Version 1.18.4.2171
Player Version#: Version 1.4.1.940-574c2fa7
Coming from this Thread and following the steps @ChuckPa proposed:
As the root music folder of the NAS is very large, I will do something like symlinking more and more main folders over time rather than all at once, as I did the last times. Maybe we can find out when the DB gets corrupted by diffing the DB backups… who knows…
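A rough sketch of how such a diff could work, using Python's sqlite3 iterdump on two hypothetical backup copies:

```python
import sqlite3
import difflib

def dump(path):
    # iterdump yields the database as SQL text, one statement per line
    conn = sqlite3.connect(path)
    lines = list(conn.iterdump())
    conn.close()
    return lines

# Hypothetical backup filenames; compare two consecutive daily backups
old = dump("library_backup_day1.db")
new = dump("library_backup_day2.db")

for line in difflib.unified_diff(old, new, lineterm=""):
    print(line)
```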
I’ll need to think about a new concept with a new shared folder then, which I’ll use for this “project”… That’ll take some time, because other services/programs use that shared folder too, so I cannot change its contents easily…
Not using symlinks at the moment; it was just an idea because I cannot move/copy content easily.
I understand. NAS storage architecture changes are never easy.
With Plex, the biggest thing to keep in mind during design is the kernel's notify service. Inode notification (inotify) is used heavily in Plex: media changes are detected this way, and the transcoder uses it for certain codecs.
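For illustration, a minimal sketch of that inotify mechanism, called directly from Python via ctypes (Linux-only; the watched path is made up):

```python
import ctypes
import os
import struct

# Event masks from <sys/inotify.h>: changed, moved-in, and new files
IN_MODIFY, IN_MOVED_TO, IN_CREATE = 0x002, 0x080, 0x100

libc = ctypes.CDLL("libc.so.6", use_errno=True)
fd = libc.inotify_init()
libc.inotify_add_watch(fd, b"/music/A", IN_CREATE | IN_MODIFY | IN_MOVED_TO)

# Block until the kernel reports an event, then decode the inotify_event header:
# int wd, uint32 mask, uint32 cookie, uint32 len, followed by the file name
buf = os.read(fd, 4096)
wd, mask, cookie, name_len = struct.unpack_from("iIII", buf)
name = buf[16:16 + name_len].rstrip(b"\0").decode()
print(f"event mask={mask:#x} on {name}")
```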
inotify worked very well until now, but lately I had set up Plex not to scan my library automatically or when changes occurred (though I noticed that Plex did it anyway), because it did not need to be up to date.
OK, PMS is indexing right now. I've added A, B, C, and D, and it will surely take until tomorrow, when I'll add the next ones.
Meanwhile, I have a question regarding the procedure we're following:
How can we find the culprit/error/bug that way?
Are you guessing/hoping the error (corrupt DB) will not show up again when files are added slowly and in single steps? Or what is the path we're following? I'm trying to learn.
There are a couple of ways the DB could have gone sour.
If the problem is quantity or size (a list length or memory allocation maximum being exceeded), it will show itself once "X" items have been loaded, regardless of how they are loaded.
If the issue is protracted, uninterrupted additions to the database, adding one block at a time will sidestep it.
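As a rough illustration of the block-wise approach, a sketch with Python's sqlite3 (the table and paths are invented, not Plex's real schema):

```python
import sqlite3

conn = sqlite3.connect("test_library.db")  # a scratch DB, not the real Plex one
conn.execute("CREATE TABLE IF NOT EXISTS media_items (id INTEGER PRIMARY KEY, path TEXT)")

# Stand-in for the file paths of one main folder ("A", "B", ...)
paths = [f"/music/A/track{i:05d}.flac" for i in range(10_000)]

# Add one block at a time and commit after each: every block is a short,
# bounded transaction instead of one protracted, uninterrupted addition
BLOCK = 1_000
for start in range(0, len(paths), BLOCK):
    block = paths[start:start + BLOCK]
    conn.executemany("INSERT INTO media_items (path) VALUES (?)", ((p,) for p in block))
    conn.commit()

conn.close()
```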
There is still the issue of database integrity during this. I failed to mention "Optimize Database" as a step to perform between adding sections. While it should not impact the results, it will impact (improve) the performance. I am hesitant to perform this step without knowing the total number of items to be added to the table.
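For anyone following along, a minimal sketch of doing both by hand on a copy of the DB, assuming the standard Plex filename and that "Optimize Database" boils down to a VACUUM/ANALYZE pass:

```python
import sqlite3

# Assumed standard Plex DB filename; work on a copy, never the live file
# while PMS is running
conn = sqlite3.connect("com.plexapp.plugins.library.db")

# Check integrity between sections; prints 'ok' when the DB is healthy
print(conn.execute("PRAGMA integrity_check").fetchone()[0])

# Rebuild the file and refresh the query planner's statistics
conn.execute("VACUUM")
conn.execute("ANALYZE")
conn.close()
```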
Various artists are duplicated (and I guess other artists have duplicates too, because Plex duplicated a lot of artists during the previous indexing attempts).
Btw, speaking of optimizing the DB, I did a count of my root music folder: 739,434 files and 92,078 folders.
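For reference, a count like that can be reproduced with something like this (the root path is hypothetical):

```python
import os

files = folders = 0
for _root, dirs, names in os.walk("/music"):  # hypothetical root music folder
    folders += len(dirs)
    files += len(names)
print(f"{files} files and {folders} folders")
```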
Optimizing took 1-2 hours, and I tried it a lot over the last months, but it never did anything helpful, like speeding up access to the DB or magically fixing things (which I did not expect anyway); it was just an experiment to see what would happen after clicking the button.
The only thing that happened: the Plex settings were not reachable during the 1-2 hours of optimizing. (I had opened another ticket on that subject, but no solution so far…)
What next?
The DB size is small, and so are the logs. Should I send them?
Before proceeding further, allow me to check with Engineering.
The size of the WAL (pending transaction) file is alarming to me.
It should be near zero.
Given the gross misclassification of the albums, there must be something else going on.
My WAL file almost always grows to about the same size as the DB file until PMS is restarted, at which point the WAL shrinks, then expands over time until the next restart.
Correct, at PMS start, the WAL should be at or near zero.
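To see that in action, the WAL can be inspected and flushed by hand on a copy of the DB, assuming the standard Plex filename and that the DB is in WAL mode:

```python
import os
import sqlite3

db = "com.plexapp.plugins.library.db"  # assumed filename; use a copy, with PMS stopped

conn = sqlite3.connect(db)
# TRUNCATE flushes all committed transactions into the main DB file
# and truncates the WAL to zero bytes
busy, wal_pages, moved = conn.execute("PRAGMA wal_checkpoint(TRUNCATE)").fetchone()
print(f"busy={busy}, wal_pages={wal_pages}, checkpointed={moved}")
print("WAL size now:", os.path.getsize(db + "-wal"))  # should report 0
conn.close()
```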
From the SQLite3 documentation:
Write-Ahead Log (WAL) Files. A write-ahead log or WAL file is used in place of a rollback journal when SQLite is operating in WAL mode. As with the rollback journal, the purpose of the WAL file is to implement atomic commit and rollback.
and
The WAL file is part of the persistent state of the database and should be kept with the database if the database is copied or moved. If a database file is separated from its WAL file, then transactions that were previously committed to the database might be lost, or the database file might become corrupted.
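In practice that means a backup has to grab the database together with its companion files; a minimal sketch with assumed paths:

```python
import os
import shutil

db = "com.plexapp.plugins.library.db"  # assumed filename
dest = "/backup/"                      # hypothetical backup location

# Copy the -wal (and -shm) companions together with the database,
# exactly as the documentation requires
for suffix in ("", "-wal", "-shm"):
    if os.path.exists(db + suffix):
        shutil.copy2(db + suffix, os.path.join(dest, db + suffix))
```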