My database/library is so large that I have to store it in RAM, and when backing the database dir up before every reboot, I noticed there was a second library database file with the extension .db-wal, almost equal in size to the usual .db file (2.9 GB next to my regular 3.1 GB db file).
I'm thinking one of the newer betas needs to migrate the old database to a new database?
It never occurred to me that there would come a time when, unannounced, Plex would need double the hard drive space for the databases instantly… and my ramdisk didn't have enough space for that… so I'm guessing it freaked out, and the db-wal that was stuck at 2.9 GB was a migration that couldn't finish?
I shut down Plex, tripled my ramdisk size, rebooted, and restored a working backup from before Plex added the extra 2.9 GB db-wal file, thinking Plex would now have the space to do whatever weird migration it needs to do. It did recreate the file… except, weirdly, the regular old .db is 2.9 GB now and the db-wal library database is 3.1 GB, haha…
How long will this migration take?
Will Plex eventually delete the db-wal file when it's done? (It seems stuck at the same size now and doesn't seem to be doing anything.)
I'm unsure if I'm permanently going to need double the database size in my ramdisk for future 'migrations'… and again, will Plex delete the duplicate database??
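From what I've since read, the .db-wal file is SQLite's write-ahead log, not a half-finished migration copy: recent writes accumulate there, and a checkpoint folds them back into the main .db. A minimal sketch of that behavior with Python's built-in sqlite3 module (file names here are made up; Plex's real file is com.plexapp.plugins.library.db):

```python
import os
import sqlite3
import tempfile

# Throwaway database in WAL journal mode (the mode PMS uses).
path = os.path.join(tempfile.mkdtemp(), "library.db")
con = sqlite3.connect(path)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE media (id INTEGER PRIMARY KEY, title TEXT)")
con.execute("INSERT INTO media (title) VALUES ('Gone with the Wind')")
con.commit()

# While the database is in use, recent writes live in library.db-wal.
assert os.path.exists(path + "-wal")

# A TRUNCATE checkpoint folds the WAL pages back into the main file
# and resets the -wal file to zero bytes.
con.execute("PRAGMA wal_checkpoint(TRUNCATE)")
assert os.path.getsize(path + "-wal") == 0

# When the last connection closes cleanly, SQLite deletes the
# -wal file altogether.
con.close()
print(os.path.exists(path + "-wal"))  # False
```

So a -wal file roughly the size of the main database usually just means a lot of un-checkpointed writes piled up, which is why it needs that extra headroom.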
Ubuntu Linux 18.04.6
Intel(R) Core™ i7-8700 CPU @ 3.20GHz, 12 cores
64 GB RAM
Running the very latest Plex beta; everything but the database dir is in the standard default locations.
And yeah, no problem with the command line : )
As for running it in RAM… that's just a matter of me being an outlier with a massive library Plex never planned on anyone having.
PS - I have another thread on here about a huge search problem that someone from Plex said is a known issue with a library as large as mine… hoping for a fix someday. Basically, with a library this large, you can't use common words in a search, or Plex seems to time out and says there are no results.
i.e. if I search for "gone with the wind", it comes back with nada…
but if I search "gone wind", it returns the results : ) : (
Get the script and untar it. Run it from wherever it is; it'll find the databases.
Options: 1 - 4 - 3
– Check
– Repair (a full export-import operation for maximum cleanliness and compaction)
– Reindex (rebuild all search indexes for PMS)
Now start PMS and check out how the GUI and searches are.
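For anyone curious what those three menu options amount to, they map onto standard SQLite maintenance operations. A rough sketch of the same sequence with Python's sqlite3 module (file names are placeholders; DBRepair locates the real Plex database for you, so this is illustration, not a replacement for the script):

```python
import sqlite3

# Hypothetical stand-ins for the Plex library database and the
# rebuilt copy the repair step produces.
SRC = "library.db"
DST = "library.rebuilt.db"

src = sqlite3.connect(SRC)

# Option 1 - Check: verify the database is internally consistent.
status = src.execute("PRAGMA integrity_check").fetchone()[0]
print(status)  # "ok" on a healthy database

# Option 4 - Repair: a full export-import writes every row into a
# fresh file, which both cleans up damage and drops free pages
# (hence the size reduction people see).
dst = sqlite3.connect(DST)
dst.executescript("\n".join(src.iterdump()))

# Option 3 - Reindex: rebuild all indexes from the table data.
dst.execute("REINDEX")
dst.commit()

src.close()
dst.close()
```

The export-import is why the repaired database comes back smaller: the new file is written densely, with none of the old file's internal fragmentation.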
OK… I was finally able to do the three steps you suggested above, @ChuckPa : )
Everything seemed to complete fine… the database decreased in size by about 300 MB,
and the interface/libraries do seem MUCH snappier (in terms of loading on the page).
But alas, it didn't fix the common-word search issue (see above; I wasn't sure whether you were saying these three actions would fix that, or just get rid of the second copy of the database that was left behind).
Weirdly, that second copy got deleted on its own at some point; maybe a server update fixed that issue?
Thanks for the tips/help… the database/libraries are definitely snappier now. Alas, the common-word search problem still exists; again, someone from the team here mentioned in another thread that this is a known issue with large libraries?
It's definitely frustrating, as kids/family members just don't understand why they can't search for "gone with the wind" or "the rookie" (when you do, Plex seems to time out and says there are no results), but if you leave out the common words (the, and, of, etc.) and search, say, "gone wind" or "rookie", you get the proper hits… : ) : (
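In case it helps anyone scripting around this in the meantime, the "leave out the common words" workaround is easy to automate before a query ever reaches Plex. A tiny sketch; the stopword list is my own guess based on the examples in this thread, not anything Plex publishes:

```python
# Words that seem to trip up search on very large libraries
# (guessed list; extend it as you hit more failing queries).
STOPWORDS = {"the", "a", "an", "and", "of", "with", "in", "on"}

def strip_common_words(query: str) -> str:
    """Drop common words so 'gone with the wind' becomes 'gone wind'."""
    kept = [w for w in query.split() if w.lower() not in STOPWORDS]
    # If every word was common, fall back to the original query
    # rather than searching for nothing.
    return " ".join(kept) or query

print(strip_common_words("Gone with the Wind"))  # Gone Wind
print(strip_common_words("The Rookie"))          # Rookie
```

Obviously that doesn't help family members typing into the Plex UI directly, but it keeps any search automation working.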
Is this something I could do on a Windows Server? I'm having similar issues; my databases are a bit smaller than OP's, around 2-2.5 GB.
I actually started caching the entire DB drive in PrimoCache and then moved entirely to storing it in RAM myself because I was having so many issues. It helped a little, but I still constantly get query errors (slow query, waited too long, etc.), and I don't want to keep dedicating more RAM to this if I don't need to.
Can I just run those scripts in PowerShell or WSL or something?
If you go to GitHub and get the Windows version, it'll run a preprogrammed sequence.
It's named with 'dev' in it because it's not as fully featured as I would like
(I'm not a Windows developer).
My main tool won't run in PowerShell; I have no Windows detection in the Linux-based code.
Make certain PMS is stopped (of course) before running it.
It will detect PMS running and abort, but that's all it can do.
Just to be safe, make a backup of the "Databases" directory first.
There haven't been reports of failures since we got it developed, but there's always a chance.
Once you see the WAL get folded back into the main DB, you won't need a ramdisk.
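If you want to script that backup step, something like this works. The path is an assumption for a default Linux install (on Windows, Databases lives under the Plex Media Server app data folder), so adjust it for your setup:

```python
import shutil
import time
from pathlib import Path

# Assumed default location on Linux; adjust for your install.
DATABASES = Path("/var/lib/plexmediaserver/Library/Application Support/"
                 "Plex Media Server/Plug-in Support/Databases")

def backup_databases(src: Path = DATABASES) -> Path:
    """Copy the whole Databases directory to a timestamped sibling."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = src.with_name(f"Databases.backup-{stamp}")
    shutil.copytree(src, dest)
    return dest
```

Copying the whole directory (rather than just the .db file) matters because the -wal and -shm files next to it are part of the live database state.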
I'll give it a try and let you know how it goes, thanks! I've been looking through the database and see a ton (at least hundreds) of effectively unused rows. I'm wondering if keeping the same database for years and years through all these changes means I just need to wipe it and start fresh, but I have so much custom metadata that I can't bring myself to do it.
I'm hopeful this helps; if it really makes things that much snappier, you'll have solved all my problems. Will get back to you!