That’s not a dumb question at all.
When moving from same-OS machines, the process is trivial.
When moving from similar-OS (like Linux vs MacOS), the process is a little more involved but still doable.
However, when moving between Windows and Linux, the whole world changes:
- Just like in the first two cases, pathnames change. That’s involved but mechanical.
- The metadata is subject to change (text files are written differently on Windows)
May I ask what’s the underlying reason for the request?
If it’s to preserve your watch history, we have multiple solutions for that (Plex does as well as I do)
To check and confirm whether you have shell script permissions (it would seem REALLY ODD to be denied them on a Linux host):
cd
echo date >script-check.sh
chmod +x script-check.sh
./script-check.sh
Each time you repeat ./script-check.sh at the command line, it’ll print the current date and time.
Thanks for the reply.
> May I ask what’s the underlying reason for the request?
The reason for the question is that I am noticing lengthy delays on both of the Linux servers when doing searches, etc., and I suspect they need some repair and cleaning. After successfully using your tool on my Windows server and seeing the improvement, the thought occurred to me to use the Windows-based tool to inspect/fix the Linux-based DBs.
> The metadata is subject to change (text files are written differently on Windows)
That was my major concern.
I’ll try your suggested check tomorrow to see if I have shell script permissions, and whether my question was jumping the gun, so to speak.
Jumping ahead if I may?
If you can run the script I wrote for you above, then you can use my tool to optimize the database
Regarding “Run options: 1 - 4 - 3”
If 1 returns OK, is 4 necessary, or can just 3 be run?
And if 4 is run, is a re-index done as part of the repair (thereby no need to run 3)?
1 = Report the status of the DB (optional)
4 = Rebuild tables
3 = Rebuild indexes (option 4 drops the indexes, as they are no longer valid after a table rebuild)
Just run the whole sequence:
./DBRepair.sh 1 4 3 9
Sit back and watch
Thanks.
What’s “Vacuum”?
Kinda like a defragmentation for hard drives – just for database files.
Actually it’s not exactly the same, but the method and the purpose are nearly the same.
In the tool:
- Vacuum removes empty space only
- Option 4 is a full reorganization (puts everything in proper order and removes empty space)
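For the curious, the effect of a vacuum can be demonstrated with the stock sqlite3 command-line tool on a throwaway database. This is only an illustration of the principle; DBRepair.sh runs the equivalent through “Plex SQLite” against the actual Plex library database:

```shell
# Illustration only: what a vacuum does, on a scratch SQLite database.
db=$(mktemp /tmp/vacuum-demo.XXXXXX)
sqlite3 "$db" "CREATE TABLE t(x);
WITH RECURSIVE n(i) AS (SELECT 1 UNION ALL SELECT i+1 FROM n WHERE i<10000)
INSERT INTO t SELECT i FROM n;"
sqlite3 "$db" "DELETE FROM t;"   # rows are gone, but the empty pages stay in the file
before=$(wc -c < "$db")
sqlite3 "$db" "VACUUM;"          # rewrite the file, releasing the empty pages
after=$(wc -c < "$db")
echo "before=$before after=$after"
rm -f "$db"
```

The file size drops after VACUUM even though the row count was already zero, which is exactly the “remove empty space only” behavior described above.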
Ah OK… Yes, searches are now instantaneous!
PMS is on a dual 6 core Xeon HP ProLiant server with 64 GB of RAM, RAID1 SSD for OS and PMS and the content is on a pair of RAID10 arrays, so I guess it wasn’t particularly slow to begin with.
Thanks Chuck, ran this in my Docker plex and it made a huge difference to browsing and search.
Your instructions were easy to follow as well - Thanks!
Thank you for this!
Could you give a link to where the Windows version of the tool can be downloaded?
I’m sorry, but I don’t see it…
Thank you very much!
I’m going to test it this week.
Also, where do we put the bat file ?
I think it can go anywhere, but for safety’s sake I put it in with the database files.
Morning all.
Yes, either the BAT or the SH file can be placed anywhere on the machine.
Internally, it knows where to look and uses full paths for everything EXCEPT “Plex SQLite” (on Windows).
On Windows, it relies on PATH to resolve where “Plex SQLite” is.
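A quick way to see how that PATH resolution behaves is the shell’s own command -v. This is a generic sketch (the function name is mine, and sh is just a stand-in target; on Windows the equivalent check for the tool’s dependency would be where "Plex SQLite"):

```shell
# Sketch: checking whether a program is resolvable via PATH.
check_on_path() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: not on PATH"
  fi
}
check_on_path sh
```

If the target prints “not on PATH”, that is the same failure mode the tool would hit on Windows when “Plex SQLite” isn’t resolvable.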
Can you maybe help with some instructions for us seedbox guys…?
Can’t get it to run; I have tried both the Docker and the Native Linux settings.
Any guidance maybe…?
I wonder if you could package a tool within the Plex package that did this natively. The script works great, but it does seem like something that would be incredibly useful to have built in.
Or maybe as a specialized option that, when run, puts Plex into a maintenance mode to do the task - something you might do every 6-12 months if you have a lot of database changes.
Anyway, my cache size is 1 GB (64 GB system on ZFS, Plex installed on SSD, media on HDDs), and it did make a noticeable performance difference on playlists and filters.
I did try larger sizes, but they didn’t seem to do much beyond 1 GB, so I guess anywhere between 256 MB and 1 GB is good.
Whatcha got?
- cat /proc/1/comm
- Which native package did you install?
- As for running Docker containers, you run it from INSIDE the docker exec environment because it’s already set up.
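For anyone unsure what that looks like in practice, here is a sketch that only assembles and prints the command. Both the container name (plex) and the database path (the usual layout in common Plex Docker images) are assumptions - confirm yours before running it:

```shell
# Assumed container name and database path; adjust both to your own setup.
container=plex
dbdir='/config/Library/Application Support/Plex Media Server/Plug-in Support/Databases'
cmd="docker exec -i $container /bin/sh -c 'cd \"$dbdir\" && ./DBRepair.sh 1 4 3 9'"
echo "$cmd"   # run the printed command once you have verified the names above
```

The point is that the script executes inside the container, where the Plex paths and the “Plex SQLite” binary already exist, rather than on the host.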
Did you see the README.md file?