Server Version#: 1.19.4.2902
Player Version#: 4.30.2
Enable intro detection: off
Enable video preview thumbnails: off
Save progress of songs: off
Tag photos: off
Number of libraries: 34
Running Windows 10 on NVMe drive
PMS on the same NVMe drive
i7 CPU with 16GB of RAM on dedicated machine (not VM)
My Plex com.plexapp.plugins.library.db is over 2GB. I have over a million files (music, movies, TV, photos, videos) and I've only added about 30% of my total collection to Plex. It is taking over 15 seconds to load each library. Did I hit a limitation in Plex, or is there something I'm doing wrong? I don't think I can import the remaining 70% of my collection. Is there any way to move the Plex DB to MariaDB, MSSQL, or another database with better performance?
The database is optimized after each scan. I also clean bundles and empty trash after each scan of each library; I do this manually. I turned off automatic scanning because the library is so big and performance is so bad that no client can connect to the server while it is scanning/optimizing/cleaning bundles. I also have scan priority set to low.
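For anyone who wants to do the same housekeeping outside the Plex UI, it boils down to standard SQLite maintenance. A minimal sketch, working on a copy of com.plexapp.plugins.library.db taken while the server is stopped (the filename below is a placeholder); I believe the built-in optimize step amounts to roughly the same VACUUM/ANALYZE pass:

```python
import sqlite3
from pathlib import Path

# Point this at a COPY of com.plexapp.plugins.library.db made while the
# server is stopped; never experiment against the live file.
db_copy = Path("library-copy.db")  # placeholder name, adjust as needed

con = sqlite3.connect(db_copy)
con.isolation_level = None  # autocommit; VACUUM cannot run inside a transaction

# Sanity check first: should print "ok".
print(con.execute("PRAGMA integrity_check;").fetchone()[0])

con.execute("VACUUM;")   # rebuild the file and reclaim free pages
con.execute("ANALYZE;")  # refresh the query planner's statistics
con.close()

print("size after vacuum:", db_copy.stat().st_size // (1024 * 1024), "MB")
```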
Thanks for the reply
It is running on Windows. I will consider moving to Linux if performance is better there. Windows runs on NVMe, which is also where PMS sits, with 180+ GB of free space left on the drive.
If you’re already on NVMe, and the CPU is hefty enough, then Windows I/O is likely holding you back.
Linux isn't for the timid, so be advised.
We are here to help with Linux, but we aren't equipped to teach it, so be prepared to master the basics using Google and a test VM as your first step.
I suspect your CPU or Windows, or the pair, is the holdup.
I can take SQLite3, on Linux, and pull 15 million record queries in 1-2 seconds with an i7.
Switching to another DB engine doesn't help with read performance.
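To put something reproducible behind that claim, here is a throwaway sketch (a synthetic table, nothing to do with the Plex schema) showing that indexed SQLite reads stay quick even at a million rows, which is why swapping the engine rarely fixes read latency:

```python
import sqlite3
import time

# Throwaway in-memory database, purely to illustrate indexed read speed.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, title TEXT, added_at INTEGER)")
con.executemany(
    "INSERT INTO items (title, added_at) VALUES (?, ?)",
    ((f"title {i}", i) for i in range(1_000_000)),
)
con.execute("CREATE INDEX idx_items_added ON items (added_at)")
con.commit()

t0 = time.perf_counter()
hits = con.execute(
    "SELECT count(*) FROM items WHERE added_at BETWEEN ? AND ?", (250_000, 750_000)
).fetchone()[0]
print(f"indexed range count ({hits} rows): {time.perf_counter() - t0:.3f}s")

t0 = time.perf_counter()
con.execute("SELECT count(*) FROM items WHERE title LIKE '%999%'").fetchone()
print(f"unindexed full-table scan: {time.perf_counter() - t0:.3f}s")
```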
Very interesting on the Linux performance. I have plenty of Linux boxes here, so there's no challenge in setting it up. I just want to see if there is a significant gain in moving to Linux (Ubuntu to be specific). Windows is currently on a dedicated i7 machine with 16GB of RAM. On the DB side, I'm assuming table indexing is done properly? Can we peek at the table structure through a client?
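(On the "peek at the table structure" question: the library DB is plain SQLite, so any SQLite client, or a few lines of Python, can dump the schema, indexes, and rough row counts. A read-only sketch; the path is just a placeholder for wherever your com.plexapp.plugins.library.db lives:)

```python
import sqlite3
from urllib.parse import quote

# Placeholder path; point it at your com.plexapp.plugins.library.db.
DB = "/path/to/com.plexapp.plugins.library.db"

# Open read-only so a running server is not disturbed.
con = sqlite3.connect("file:" + quote(DB) + "?mode=ro", uri=True)

tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]

for table in tables:
    indexes = con.execute(f"PRAGMA index_list('{table}')").fetchall()
    rows = con.execute(f'SELECT count(*) FROM "{table}"').fetchone()[0]
    print(f"{table}: {rows} rows, {len(indexes)} indexes")

# Full CREATE statements (columns and indexes) if you want the detail:
for (sql,) in con.execute("SELECT sql FROM sqlite_master WHERE sql IS NOT NULL"):
    print(sql)
```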
If I switch to Linux, it will be in a VM on an ESXi server that has dual Xeons and 96GB of RAM. Any recommended resource allocation in terms of CPUs and memory?
Or the better question, does the size of the DB look out of place or is that still within what Plex can handle?
Hey,
So I decided to move from Windows to Linux. I followed a guide here on what to copy over and what to set up. I installed Plex on Ubuntu and mapped all my shares so Plex can see them (no permission issues). I copied over Metadata, Media, Plug-in Support, and Plug-ins. I am able to start Plex and I can see all the migrated libraries. I picked a small library and added the correct Linux path to it without deleting the old Windows UNC paths. I am able to save and all. However, when I go to scan, it just goes from Scanning to Scanning complete very quickly without actually doing the scan. It is not detecting the path that I added and scanning it. Am I missing a step somewhere?
I should have looked at the log. It was due to a custom scanner that doesn't exist on the new install. I changed to the standard scanner and everything is scanning again. Now I'll see if it actually keeps the custom data that I entered, along with the artwork for titles that were missing it from the default scan.
Update: So it is scanning. However, it is not retaining my artwork. I have to manually go into each one and change the artwork. I don’t have to re-download the artwork, I just have to select it. Is this normal behavior in this situation?
Reporting back. The migration went fairly painlessly after sorting out the custom scanner issue I had. Everything is working; I didn't lose my artwork for most items. I did lose some for others, but only a small percentage, and those were lost because of episode naming conventions that Plex did not pick up, so there is nothing wrong with the migration process, just bad naming on my end. Thought this might be useful in case someone is doing, or will be doing, the migration.
Did switching to Linux actually solve your problem and/or improve performance?
My DB is ~1.4 GB and my entire Plex data volume is: /dev/sdz btrfs 932G 680G 252G 74% /dataplex
I keep all my Plex data on a dedicated 1TB SSD (not NVMe), with nothing on it other than the Plex support hierarchy.
Obviously the bulk of the metadata is chapter/video preview thumbs.
But in any case, as your DB scales larger, you need extremely low-latency storage and as much single-thread CPU power as you can get. (While SQLite does have some multi-threading, sooner or later everything comes down to storage IOPS and raw CPU speed.)
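If you want to put a rough number on the storage side, a crude random-read probe against the file that holds the DB gives a feel for latency; the path below is a placeholder, and the OS page cache will flatter repeat runs, so treat the result as a ballpark only:

```python
import os
import random
import time

# Placeholder path; point it at the library DB (or any large file) on the
# volume you want to probe.
PATH = "/dataplex/com.plexapp.plugins.library.db"
READS = 2000
PAGE = 4096  # SQLite's default page size

size = os.path.getsize(PATH)
fd = os.open(PATH, os.O_RDONLY)

t0 = time.perf_counter()
for _ in range(READS):
    os.pread(fd, PAGE, random.randrange(0, max(size - PAGE, 1)))
elapsed = time.perf_counter() - t0
os.close(fd)

print(f"{READS} random {PAGE}-byte reads in {elapsed:.3f}s "
      f"(~{elapsed / READS * 1e6:.0f} µs per read, ~{READS / elapsed:.0f} reads/s)")
```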
Another thing to consider when scaling out is splitting into multiple servers, i.e. a music server, a TV server, a movie server.
This would allow you to manage resources for each independently according to load/need, and since each would be independent, loads/rescans/etc. against one server/DB would not affect the other server(s).
And with the Plex uno client interface, having multiple servers is no big deal, since you can simply pin the libraries you want even if they are on different servers or even in different locations.
I am not sure if my performance problem is fixed or not. What I did was partially rebuild Plex as a new instance on a different server to see where the bulk of the media/metadata folders is coming from. I did all my video files, then migrated to Ubuntu (headless). Media/metadata was around 60 GB without video preview or intro detection, with the database file at 360+ MB. At that size, there is no performance issue. I copied that instance over to Ubuntu and started to index music. It is now at 90 GB for media/metadata with the DB file at 760+ MB. I have scanned about 250,000 tracks so far, excluding all video/photo files. The apples-to-apples comparison will be when I hit 2 GB on the DB file; then I can see how it performs as well as how big the media/metadata directories are. I'll keep reporting back with progress.
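If anyone wants to track the same numbers without clicking around, a small script can snapshot the DB size and the Metadata/Media footprint in one pass; a sketch assuming the default Linux data directory (adjust BASE for your install):

```python
import os
from pathlib import Path

# Default Linux data directory for the plexmediaserver package; adjust as needed.
BASE = Path("/var/lib/plexmediaserver/Library/Application Support/Plex Media Server")
DB = BASE / "Plug-in Support/Databases/com.plexapp.plugins.library.db"

def dir_size(path: Path) -> int:
    """Total size in bytes of every file under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished mid-walk; skip it
    return total

print(f"database: {DB.stat().st_size / 1024**2:.0f} MB")
for folder in ("Metadata", "Media", "Cache"):
    sub = BASE / folder
    if sub.is_dir():
        print(f"{folder}: {dir_size(sub) / 1024**3:.1f} GB")
```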