Trying to find the cause of why PMS is corrupting its own database after a while

Tekno! :stuck_out_tongue:

Stop messing with my head, please? It’s only Monday!!! :rofl:

Let’s wait for the user?

No worries, I thought it was you messing with my head. :innocent:

Speaking of WAL files…

In my previous scenarios, the size of the DB grew to 3.6 GB, and the WAL files always had similar sizes.
The WAL files mostly never get smaller (on Synology only; on Windows they shrink from time to time and start growing again; on QNAP I never checked). Optimizing the DB did not help, nor did a restart of PMS.

Remember the first chat we had? The logs I sent you contained a lot of messages saying that PMS could not write to or read from the DB because it took too long. I’m sure the log contains those messages this time too.

Lucky me, we’re now on the right path :slight_smile:

Since my earlier post, I’ve chatted with the actual DB developer.

This size is considered excessive. That notwithstanding, we might be able to get to a working level.

Do you have the ability to add an SSD to the Synology and make it a real volume?

@FlexiPlexi

Do you still have the log ZIP from when PMS crashed and indicated the corrupt database? I’ve been asked by engineering to provide one.

As a follow-up, after a detailed discussion with engineering, I’ve learned how we can better diagnose and potentially mitigate what’s happening.

How long, after installing an update, do you wait before accessing Plex? (rough estimate)

no.

Sure, I have not altered anything. I followed your steps (see post #1) precisely.

If needed, I can send you a dropbox-link containing the database and/or the log or anything else you need.

See step 5. About 1 minute or so.

So let’s do it!

P.S.
There is a new PMS version out, should I update?

  1. Backtracking through the posts, I found dump files but no actual log files. This is why I had asked for the actual logs. I was chatting with the chief about this today. He had asked for the exact error texts.

  2. What we are starting to suspect, based on the error you and I got by manually attempting a scan, is that a database upgrade (a schema migration) didn’t complete before PMS became active. Our working hypothesis, until we get the hard failures from the PMS logs, is:
    a. Look further up in this thread and you’ll see discussion about the WAL and SHM (there’s a small hands-on sketch after this list).
    The WAL (Write-Ahead Log) is both a journal and a cache. For the DB to be “whole”, its contents need to be fully inserted (committed) into the main DB itself when PMS shuts down.
    b. Due to the size of the WAL (the number of items being indexed), the current scripting isn’t allowing enough time for the WAL to be ingested into the DB before process termination.
    c. Terminating the ingestion mid-“VACUUM” (the SQLite3 command being performed) leaves the data in an unknown state, meaning your media info is scrambled.

  3. If the above is correct, I can easily address that; in fact, I have already written an update to the Synology scripts to be more graceful in how they shut down PMS.

  4. Given the number of records involved and the CPU power of the Syno, it is very possible you’re asking more than the box can deliver. Over 700,000 files is a LOT of data. Frankly, Engineering never considered a library of that size. Yes, SQLite3 can do it, but on a Synology? Maybe not an Atom / Celeron-based one.
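For anyone who wants to poke at this themselves, here is a minimal sketch of how the WAL state can be inspected and flushed by hand. This is NOT the official procedure; it assumes PMS is fully stopped and that a sqlite3 CLI is available on the box:

sh-4.3# cd "/volume1/Plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
sh-4.3# ls -la com.plexapp.plugins.library.db*   # note the sizes of the -wal and -shm files
sh-4.3# sqlite3 com.plexapp.plugins.library.db "PRAGMA wal_checkpoint(TRUNCATE);"

The PRAGMA returns three numbers (busy flag, frames in the WAL, frames checkpointed); on success the busy flag is 0 and the -wal file is truncated to zero bytes.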

If you’d like to help me conduct a test before things get too far out of control, I’d like to have PMS brought down VERY gracefully by hand and then inspect what happens to the WAL file. I will give you instructions on how to do this. We won’t be using the GUI to do it.

I sent you logs via PM. (EDIT: no, I haven’t, because you are not accepting PMs atm… And I will NOT send any logs via the forum…)

Your suggestions fit the messages I have seen in the logs, as well as the way I saw PMS handle the WAL files. As I said earlier, the WAL files keep getting bigger and bigger but are seldom written to the DB.
If the DB is small (meaning only around 1,000 entries/rows), then the WAL is written to the DB when I shut down PMS. As the size continued to grow with my music folders, PMS had more and more problems keeping up with committing the data to the DB. That is what I have clearly seen in the logs and in real life.
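As a side note on why the WAL can keep growing: SQLite normally checkpoints the WAL automatically once it passes a size threshold, but a checkpoint can be held back while other connections are still reading. The relevant settings can be queried like this (a sketch, assuming a stock sqlite3 CLI and a stopped PMS):

sh-4.3# sqlite3 com.plexapp.plugins.library.db "PRAGMA journal_mode; PRAGMA wal_autocheckpoint;"

journal_mode should report wal, and wal_autocheckpoint reports the checkpoint threshold in pages (SQLite’s default is 1000).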

I also saw messages that the SQL structure was changed, which seemed odd to me, because that should happen only at the first init (after a fresh start from scratch), but those messages came somewhere in the middle of the scanning process.

About the number of records you mention:
it’s the year 2020, and Plex should consider users who have been collecting music/movies since the dawn of time :slight_smile:
We have been collecting music for many years now, and with modern providers/sellers and internet speeds it is easy to buy and save a lot of music with a few clicks.
We have been Apple/Beatport/Amazon/Bandcamp/… (just to name the big ones) customers for so many years now… just a click and you have 15 more files on a NAS with 40 or even more terabytes. That’s not unusual nowadays. And it should be addressed by the Plex team, not only streaming media.

Of course, that’s why I’m here :slight_smile:

Let’s first have some fun with an academic thought discussion.
Next, let’s get to the heart of the matter and why we’re doing this.

Fun discussion:

  1. Using your song count (which is like downloading most of a streaming service provider’s catalog for local use) :), at a conservative 3 minutes per song, playing back continuously on a 24/7/365.25 basis, it would be 4.56 years before there is a single repeat (the quick arithmetic is sketched after this list).

  2. Enterprise storage uses 64 TB SSD SAS modules in multi-petabyte arrays versus spinning-metal HDDs; multiple Intel Gold-series Enterprise CPUs ($18,000 / CPU) versus a single Intel Atom C2538 ($118 / CPU); and 1 TB of RAM versus 16 GB per CPU.

  3. Enterprise applications use Oracle (or similar) proper relational databases, where each table exists as a file, versus SQLite3, which is a single-database-file architecture with multiple tables in that one file; where database record counts in the billions / trillions are a normal everyday fact, versus the typical Plex user who has less than 50 TB of total storage; and where most of that storage is used for video rather than audio.
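For the curious, the playback math in point 1 works out roughly like this, assuming a round count of 800,000 tracks (in line with the “over 700,000 files” mentioned earlier):

800,000 tracks × 3 min = 2,400,000 min = 40,000 h ≈ 1,667 days ≈ 4.56 years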

Serious discussion:

I would like you to do the following (knowing Plex is running):

  1. Open an SSH session to the Syno (Control Panel - Terminal & SNMP - enable SSH)
  2. Using PuTTY (Windows client) or SSH (Mac / Linux), sign in
  3. Elevate to “root” privilege level: sudo -su root and supply the password
  4. Get the process number of Plex:
sh-4.3# ps -ef | grep -i Plex
    a. Look for “Plex Media Server”.
    b. The first number (column 2) in the line that lists it (/var/packages/…/Plex Med…) is the process number we want.
  5. kill -15 insert_process_number_here
  6. Now we monitor and wait for PMS to shut down by executing ps -ef | grep -i plex until everything is gone except the ‘grep’ (a one-line shortcut is sketched right after these steps).
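If pgrep is available on your DSM build (an assumption; not every Synology firmware ships it), steps 4 and 5 can be collapsed into a one-liner. Double-check the PID against the ps output before using it:

sh-4.3# kill -15 "$(pgrep -of 'Plex Media Server')"   # -o picks the oldest (parent) matching process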

Plex has now been gracefully shut down; we can get to work.

sh-4.3# cd "/volume1/Plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
sh-4.3# ls -la

Ignoring the DLNA db, if the WAL / SHM files still exist (I expect they will), let’s vacuum them back into the DB. If it operates as expected, when complete (which will likely take a bit), the WAL & SHM will be near zero or gone. I would have tried it on yours, but they weren’t in the file.

sh-4.3# sqlite3 com.plexapp.plugins.library.db "VACUUM;"
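Optionally, to confirm the result (a sketch; integrity_check can take quite a while on a DB of this size):

sh-4.3# ls -la com.plexapp.plugins.library.db*   # the -wal / -shm files should now be tiny or gone
sh-4.3# sqlite3 com.plexapp.plugins.library.db "PRAGMA integrity_check;"

It prints a single “ok” when the database is structurally sound; anything else lists the problems found.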

If they still exist at this point, would it be too much of an inconvenience to ask for a ZIP of the com.plexapp.plugins.library.db and its WAL & SHM?

