Just to confirm, those are the instructions I followed. I repeated the process with the same outcome, so the logs are attached here.
(P.S. I couldn’t go to the GUI to get the logs, as PMS won’t start after this process until I do a restore, so I have manually collated and zipped them. Hope that’s the same.)
Just wanted to say you got my database down from 190 MB to 130 MB. Thanks! I’ll have to read your scripts to see what everything does, out of curiosity. Starred your GitHub.
I’m trying a repaired version of the database right now, and it seems Plex is finally naming and scanning some of the files correctly!
You should find searches now complete almost “as you type”.
That’s a good sign it’s working correctly.
@ChuckPa Did you get a chance to look at those logs, Chuck? Thanks in advance.
Get ready for a huge performance hit as you load more songs and your DB grows.
I know I’m having a rant, but mine now times out in Plexamp when loading tracks. Elan has advised the app is timing out, which is true. OK, you can drop $$$$ on a server, but I still think SQLite isn’t great for large databases by design.
Also, why is it using so little RAM (on Linux)? I’m no expert, but logically can’t the DB be held entirely in RAM? There has to be a way, or was it just not designed for large databases?
I’d consider building a Plex server, dropping some DDR5 and a few terabytes of RAID 0 NVMe drives in it, but I’m not so sure it would be lightning fast with a large DB.
I’m curious, though: how much content do you have?
I’ve …
Movies → 3,770
TV Shows → 816 with 34,006 episodes
Music → 727 artists with 6,178 albums and 88,458 tracks
… and my music grows by approx. 1,000–2,000 tracks a week as I slowly move stuff into Plex. ATM I have zero performance issues.
Really? RAID 0 NVMe? Minimally, 4.5+ GB/sec isn’t fast enough?
SQLite will handle databases with 15 million records in them without flinching.
Fragmentation, however, is your enemy.
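As a rough, hedged gauge of how fragmented a database has become (the filename shown is the standard Plex library DB; a stock sqlite3 binary is used purely for illustration, since Plex bundles its own “Plex SQLite”):

# Free (reclaimable) pages vs. total pages approximates internal fragmentation
sqlite3 com.plexapp.plugins.library.db "PRAGMA freelist_count;"
sqlite3 com.plexapp.plugins.library.db "PRAGMA page_count;"

A large freelist_count relative to page_count means the file is carrying a lot of scattered, reclaimable space.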
My system is a lowly Xeon E5-2690v4 (per-thread speed isn’t very fast)
using an HDD for the database.
One of my tests is to set up this 340,000-episode TV server (some 980 full TV series):
[chuck@lizum qa.2009]$ find ./340k-all/ -type f -print | wc -l
393970
[chuck@lizum qa.2010]$
There are zero performance issues here.
The only “required” steps after building the library section are to
- Optimize the DB
- Restart Plex (clears the SHM and WAL by committing them to the main DB file; see the sketch below)
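In SQLite terms, that restart step amounts to a WAL checkpoint. A minimal sketch of the equivalent manual operation, only safe with PMS stopped, with the path and binary shown purely for illustration:

# Commit the write-ahead log (WAL) back into the main DB file, then truncate it
sqlite3 com.plexapp.plugins.library.db "PRAGMA wal_checkpoint(TRUNCATE);"

After a clean checkpoint, the -wal and -shm sidecar files carry no un-merged data.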
For those really battered & fragmented databases which degrade over time,
I have a tool which solves that in very short order.
Is that 2 million?
5 million? I don’t think you’ll find anyone here with that many tracks.
Yeah, that does seem insane. What are they running, an alternative to Spotify off their Plex?
I think Spotify has something like 100 million… I just looked; I thought I had a pretty big music library, and it’s at 21k-something tracks… heheh
Hi!
For some reason, when I apply a collection tag to my video files, the request takes forever to complete, and I have alerts about that in the logs. The SQLite file is under 25 MB, CPU load is normal, and HDD load is normal too.
Any other type of metadata editing works instantly. Do you think your tool would help in any way?
Thanks
Please take all NON-DB FILE-SPECIFIC issues to their own threads.
This thread is not for discussing Music or Collections
It’s solely about the physical DB and its operation.
Please ping me from those new threads as needed.
Thanks
On Windows 10 here: if the mentioned script basically defragments the files in the database, I wonder if simply restoring an image backup of its partition wouldn’t work just as well? Is there something more it does (other than the “Optimize” button in PMS)? Maybe all I need is the VACUUM command?
TL;DR: I store Plex’s database location in its own partition, on a different SSD than Windows or the media storage drives. I make regular, compressed image backups of that partition weekly with AOMEI Backupper. When it restores a partition, there is 0% fragmentation, because it reformats the partition and extracts the files contiguously, with no recycle bin. But it does back up the cache folder. My partition is also only 30% full.
The script tool defragments the database in a way which PMS cannot do while it’s running.
The database is exported as a text file (in order).
A new database is created and then loaded/imported from that text file.
The indexes are created after import.
All of this is impossible when PMS is running.
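A minimal sketch of that export/rebuild cycle, using a stock sqlite3 for illustration (the real tool adds integrity checks, uses Plex’s bundled SQLite build, and handles index creation as a separate step):

# Export everything as ordered SQL text, then rebuild into a fresh file
sqlite3 com.plexapp.plugins.library.db ".dump" > plex-dump.sql
sqlite3 rebuilt.db < plex-dump.sql    # new file is written contiguously, front to back

Because the new file is written in one contiguous pass, the result is defragmented in a way a live, in-place database never is.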
Restoring an already fragmented backup serves no purpose.
Think about it.
- Run the script (Windows version). It does everything needed in one step.
- The Bash script version does not run in Windows
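For reference, a hedged example of invoking the Bash version on Linux/macOS (script name as published in the repo; menu options vary by release, so see the README):

# Run from the directory containing the script, with rights to Plex's data directory
sudo ./DBRepair.sh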
[ChuckPa/PlexDBRepair] Release v1.0.12 - Update HOTIO start/stop commands
Header says v1.0.11
#!/bin/sh
#########################################################################
# Plex Media Server database check and repair utility script. #
# Maintainer: ChuckPa #
# Version: v1.0.11 #
# Date: 15-Aug-2023 #
#########################################################################
# Version for display purposes
Version="v1.0.11"
I’ll fix it.
It won’t hurt the operation.
EDIT: 19:46 EDT: Fixed
Hey there,
I run PMS on a Synology DS223… the default cache is 40 MB. What should I change to optimize my experience?
TY and Cheers from Germany
Buy a bigger NAS? (Joking, but also kinda serious for future planning.)
(You have an ARMv8 CPU. It’s not a speed demon… It works ok).
Don’t change the cache size. Your machine can’t handle it.
It has 2 GB of RAM by default, expandable to 4 GB max. 40 MB of DB cache is plenty; you want the free RAM working in other areas.
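A quick, hedged way to confirm that headroom over SSH before touching any cache setting (standard Linux, so it should work on the Synology too):

# MemAvailable shows what is realistically free for extra caching
grep -E 'MemTotal|MemAvailable' /proc/meminfo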
Thank you for the Windows version; searching feels much better now!
Will the new Plex Server (in beta) affect DBRepair.sh?