Error starting PMS on QNAP after upgrading QPKG

Server Version#: v1.41.9.9961-46083195d
Player Version#: N/A

Any help would be appreciated. I did what one would assume was a simple update to a Plex server on a QNAP NAS today, and it wouldn't restart correctly; the error is below. Oddly, an identical setup with the same update process succeeded with no issue, so it seems localized to this one server and not anything else.

Jul 18, 2025 17:14:32.819 [139691054013240] INFO - Plex Media Server v1.41.9.9961-46083195d - QNAP TS-653D x86_64 - build: linux-x86_64 qnap - GMT -04:00
Jul 18, 2025 17:14:32.819 [139691054013240] INFO - Linux version: QTS 5.2.5.3145, language: en-US
Jul 18, 2025 17:14:32.830 [139691054013240] INFO - Processor: 4-core Intel(R) Celeron(R) J4125 CPU @ 2.00GHz
Jul 18, 2025 17:14:32.831 [139691054013240] INFO - Compiler is - Clang 11.0.1 (https://plex.tv 9b997da8e5b47bdb4a9425b3a3b290be393b4b1f)
Jul 18, 2025 17:14:32.831 [139691054013240] INFO - /share/CACHEDEV1_DATA/.qpkg/PlexMediaServer/Plex Media Server
Jul 18, 2025 17:14:32.828 [139691056601744] INFO - SQLITE3:0x208, 283, recovered 7 frames from WAL file /share/CACHEDEV1_DATA/.qpkg/PlexMediaServer/Library/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db-wal
Jul 18, 2025 17:14:32.927 [139691056601744] WARN - [CERT/OCSP] getCertInfo failed; skipping stapling
Jul 18, 2025 17:14:32.928 [139691056601744] ERROR - [CERT] PKCS12_parse failed: error:0308010C:digital envelope routines::unsupported
Jul 18, 2025 17:14:32.928 [139691056601744] ERROR - [CERT] Found a user-provided certificate, but couldn't install it.
Jul 18, 2025 17:14:32.929 [139691056601744] INFO - Running migrations. (EPG 0)
Jul 18, 2025 17:14:32.933 [139691056601744] INFO - Running forward migration 202504161541.
Jul 18, 2025 17:14:32.966 [139691056601744] ERROR - SQLITE3:0x208, 1, ON CONFLICT clause does not match any PRIMARY KEY or UNIQUE constraint in "INSERT INTO schema_migrations (version, rollback_sql, optimize_on_rollback, min_version) VALUES (:version, :sql, 0, :minVersion) ON CONFLICT(version) DO UPDATE SET rollback_sql=:sql, optimize_on_rollback=0"
Jul 18, 2025 17:14:32.966 [139691056601744] ERROR - Exception inside transaction (inside=1) (/home/runner/_work/plex-media-server/plex-media-server/Library/DatabaseMigrations.cpp:342): sqlite3_statement_backend::prepare: ON CONFLICT clause does not match any PRIMARY KEY or UNIQUE constraint for SQL: INSERT INTO schema_migrations (version, rollback_sql, optimize_on_rollback, min_version) VALUES (:version, :sql, 0, :minVersion) ON CONFLICT(version) DO UPDATE SET rollback_sql=:sql, optimize_on_rollback=0
Jul 18, 2025 17:14:32.972 [139691056601744] ERROR - Exception thrown during migrations, aborting: sqlite3_statement_backend::prepare: ON CONFLICT clause does not match any PRIMARY KEY or UNIQUE constraint for SQL: INSERT INTO schema_migrations (version, rollback_sql, optimize_on_rollback, min_version) VALUES (:version, :sql, 0, :minVersion) ON CONFLICT(version) DO UPDATE SET rollback_sql=:sql, optimize_on_rollback=0

I use ACME-managed Let’s Encrypt certs (hence the naming you see below)

I don't know what you created, but this is how PMS best supports user certificates:

openssl pkcs12 -export -out MyDomain.p12 \
    -inkey MyDomain-production.key \
    -in MyDomain-production.crt \
    -certfile Acme-LE.crt \
    -password pass:password
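
If you want to sanity-check the resulting bundle before handing it to PMS, something like this should work (same password source as above; this is just a quick verification sketch, not a required step):

# Print the bundle contents without writing the key/certs back out.
# If this errors with "unsupported" on OpenSSL 3.x, the bundle was likely
# built with legacy algorithms and should be re-exported as shown above.
openssl pkcs12 -info -noout -in MyDomain.p12 -password pass:password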

Hi Chuck, thank you for the quick reply. Do you feel this is an SSL certificate issue rather than the SQL issue showing below? I only ask because the other server's certificate hasn't expired either, and its upgrade went okay.

Thanks for any further insight.

There are two issues here, but I would like to solve them in order.

Jul 18, 2025 17:14:32.966 [139691056601744] ERROR - SQLITE3:0x208, 1, ON CONFLICT clause does not match any PRIMARY KEY or UNIQUE constraint in "INSERT INTO schema_migrations (version, rollback_sql, optimize_on_rollback, min_version) VALUES (:version, :sql, 0, :minVersion) ON CONFLICT(version) DO UPDATE SET rollback_sql=:sql, optimize_on_rollback=0"

This is DB corruption, which we can take care of easily with DBRepair.

We can do this first if you want. Either way it’s simple:

  1. Start DBRepair
  2. Turn on IGNORE
  3. Run AUTO optimize
  4. Start Plex
  5. Confirm it starts
  6. If it doesn't (which is likely), we will use REPLACE to pick a working backup and test again (it's asserting a column or key is missing from a table).
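
If you're curious what that failing INSERT is actually complaining about, here's a rough way to look (the install path comes from your log; the bundled "Plex SQLite" shell sitting next to the "Plex Media Server" binary is an assumption on my part, so adjust as needed):

cd "/share/CACHEDEV1_DATA/.qpkg/PlexMediaServer"
# ON CONFLICT(version) requires a PRIMARY KEY or UNIQUE constraint on
# schema_migrations.version; if the definition printed here shows a plain
# column, that constraint has been lost.
"./Plex SQLite" "Library/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db" \
    ".schema schema_migrations"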

SSL is taken care of, I believe. I attempted to restart, and it's still basically the same error. I moved on to DBRepair, and that output is below as well.

Startup after SSL fix.

Jul 19, 2025 01:06:31.325 [139937282407224] INFO - Plex Media Server v1.41.9.9961-46083195d - QNAP TS-653D x86_64 - build: linux-x86_64 qnap - GMT -04:00
Jul 19, 2025 01:06:31.326 [139937282407224] INFO - Linux version: QTS 5.2.5.3145, language: en-US
Jul 19, 2025 01:06:31.337 [139937282407224] INFO - Processor: 4-core Intel(R) Celeron(R) J4125 CPU @ 2.00GHz
Jul 19, 2025 01:06:31.338 [139937282407224] INFO - Compiler is - Clang 11.0.1 (https://plex.tv 9b997da8e5b47bdb4a9425b3a3b290be393b4b1f)
Jul 19, 2025 01:06:31.338 [139937282407224] INFO - /share/CACHEDEV1_DATA/.qpkg/PlexMediaServer/Plex Media Server
Jul 19, 2025 01:06:31.342 [139937284995728] INFO - SQLITE3:0x208, 283, recovered 7 frames from WAL file /share/CACHEDEV1_DATA/.qpkg/PlexMediaServer/Library/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db-wal
Jul 19, 2025 01:06:31.446 [139937284995728] WARN - [CERT/OCSP] getCertInfo failed; skipping stapling
Jul 19, 2025 01:06:31.452 [139937284995728] WARN - [CERT/OCSP] getCertInfo failed; skipping stapling
Jul 19, 2025 01:06:31.453 [139937284995728] INFO - Running migrations. (EPG 0)
Jul 19, 2025 01:06:31.462 [139937284995728] INFO - Running forward migration 202504161541.
Jul 19, 2025 01:06:31.498 [139937284995728] ERROR - SQLITE3:0x208, 1, ON CONFLICT clause does not match any PRIMARY KEY or UNIQUE constraint in "INSERT INTO schema_migrations (version, rollback_sql, optimize_on_rollback, min_version) VALUES (:version, :sql, 0, :minVersion) ON CONFLICT(version) DO UPDATE SET rollback_sql=:sql, optimize_on_rollback=0"
Jul 19, 2025 01:06:31.498 [139937284995728] ERROR - Exception inside transaction (inside=1) (/home/runner/_work/plex-media-server/plex-media-server/Library/DatabaseMigrations.cpp:342): sqlite3_statement_backend::prepare: ON CONFLICT clause does not match any PRIMARY KEY or UNIQUE constraint for SQL: INSERT INTO schema_migrations (version, rollback_sql, optimize_on_rollback, min_version) VALUES (:version, :sql, 0, :minVersion) ON CONFLICT(version) DO UPDATE SET rollback_sql=:sql, optimize_on_rollback=0
Jul 19, 2025 01:06:31.505 [139937284995728] ERROR - Exception thrown during migrations, aborting: sqlite3_statement_backend::prepare: ON CONFLICT clause does not match any PRIMARY KEY or UNIQUE constraint for SQL: INSERT INTO schema_migrations (version, rollback_sql, optimize_on_rollback, min_version) VALUES (:version, :sql, 0, :minVersion) ON CONFLICT(version) DO UPDATE SET rollback_sql=:sql, optimize_on_rollback=0

Repair Procedure

Automatic Check,Repair,Index started.

Checking the PMS databases
Check complete.  PMS main database is OK.
Check complete.  PMS blobs database is OK.

Exporting current databases using timestamp: 2025-07-19_01.15.08
Exporting Main DB
Exporting Blobs DB
Successfully exported the main and blobs databases.
Start importing into new databases.
Importing Main DB.
Importing Blobs DB.
Successfully imported databases.
Verifying databases integrity after importing.
Verification complete.  PMS main database is OK.
Verification complete.  PMS blobs database is OK.
Saving current databases with '-BACKUP-2025-07-19_01.15.08'
Making repaired databases active
Repair complete. Please check your library settings and contents for completeness.
Recommend:  Scan Files and Refresh all metadata for each library section.

Backing up of databases
Backup current databases with '-BACKUP-2025-07-19_01.25.08' timestamp.
Reindexing main database
Reindexing main database successful.
Reindexing blobs database
Reindexing blobs database successful.
Reindex complete.
Automatic Check, Repair/optimize, & Index successful.

Startup after Repair (same as before, I believe)

Jul 19, 2025 01:31:27.958 [140583786343224] INFO - Plex Media Server v1.41.9.9961-46083195d - QNAP TS-653D x86_64 - build: linux-x86_64 qnap - GMT -04:00
Jul 19, 2025 01:31:27.959 [140583786343224] INFO - Linux version: QTS 5.2.5.3145, language: en-US
Jul 19, 2025 01:31:27.959 [140583786343224] INFO - Processor: 4-core Intel(R) Celeron(R) J4125 CPU @ 2.00GHz
Jul 19, 2025 01:31:27.959 [140583786343224] INFO - Compiler is - Clang 11.0.1 (https://plex.tv 9b997da8e5b47bdb4a9425b3a3b290be393b4b1f)
Jul 19, 2025 01:31:27.959 [140583786343224] INFO - /share/CACHEDEV1_DATA/.qpkg/PlexMediaServer/Plex Media Server
Jul 19, 2025 01:31:28.113 [140583788931728] WARN - [CERT/OCSP] getCertInfo failed; skipping stapling
Jul 19, 2025 01:31:28.126 [140583788931728] WARN - [CERT/OCSP] getCertInfo failed; skipping stapling
Jul 19, 2025 01:31:28.128 [140583788931728] INFO - Running migrations. (EPG 0)
Jul 19, 2025 01:31:28.138 [140583788931728] INFO - Running forward migration 202504161541.
Jul 19, 2025 01:31:28.189 [140583788931728] ERROR - SQLITE3:0x208, 1, ON CONFLICT clause does not match any PRIMARY KEY or UNIQUE constraint in "INSERT INTO schema_migrations (version, rollback_sql, optimize_on_rollback, min_version) VALUES (:version, :sql, 0, :minVersion) ON CONFLICT(version) DO UPDATE SET rollback_sql=:sql, optimize_on_rollback=0"
Jul 19, 2025 01:31:28.191 [140583788931728] ERROR - Exception inside transaction (inside=1) (/home/runner/_work/plex-media-server/plex-media-server/Library/DatabaseMigrations.cpp:342): sqlite3_statement_backend::prepare: ON CONFLICT clause does not match any PRIMARY KEY or UNIQUE constraint for SQL: INSERT INTO schema_migrations (version, rollback_sql, optimize_on_rollback, min_version) VALUES (:version, :sql, 0, :minVersion) ON CONFLICT(version) DO UPDATE SET rollback_sql=:sql, optimize_on_rollback=0
Jul 19, 2025 01:31:28.199 [140583788931728] ERROR - Exception thrown during migrations, aborting: sqlite3_statement_backend::prepare: ON CONFLICT clause does not match any PRIMARY KEY or UNIQUE constraint for SQL: INSERT INTO schema_migrations (version, rollback_sql, optimize_on_rollback, min_version) VALUES (:version, :sql, 0, :minVersion) ON CONFLICT(version) DO UPDATE SET rollback_sql=:sql, optimize_on_rollback=0
Jul 19, 2025 01:31:28.200 [140583788931728] ERROR - Database corruption: sqlite3_statement_backend::prepare: ON CONFLICT clause does not match any PRIMARY KEY or UNIQUE constraint for SQL: INSERT INTO schema_migrations (version, rollback_sql, optimize_on_rollback, min_version) VALUES (:version, :sql, 0, :minVersion) ON CONFLICT(version) DO UPDATE SET rollback_sql=:sql, optimize_on_rollback=0
Jul 19, 2025 01:31:28.200 [140583788931728] ERROR - Error: Unable to set up server: sqlite3_statement_backend::prepare: ON CONFLICT clause does not match any PRIMARY KEY or UNIQUE constraint for SQL: INSERT INTO schema_migrations (version, rollback_sql, optimize_on_rollback, min_version) VALUES (:version, :sql, 0, :minVersion) ON CONFLICT(version) DO UPDATE SET rollback_sql=:sql, optimize_on_rollback=0 (N4soci10soci_errorE)

Finally, I didn't do the restore. I back up the DB nightly, and for some reason the backed-up DB also appears corrupted? The only backups created by Plex, as far as I can tell, are from March, which seems strange; apparently it just stopped backing up then?

Checking the PMS databases
Check complete.  PMS main database is OK.
Check complete.  PMS blobs database is OK.
Are you sure you want to restore a previous database backup  (Y/N) ? y
Checking for a usable backup.
Database backups available are:  2022-03-29 2022-03-26 2022-03-23

  1) - 2022-03-29
  2) - 2022-03-26
  3) - 2022-03-23

Select backup date by number or date name  (blank = return to menu)

Thank you for this.

  1. You confirmed the error is with the DB contents and not the SQLite DB itself.
  2. You have 3 backup files available for DBRepair to use immediately.

The biggest problem is that you've not had a DB backup since March 2022, due to the internal errors (as confirmed by DBRepair).

If we were to rebuild PMS from the beginning, pulling your watch history from this current DB, would this be OK with you?

(This would give you a known-good DB. Importing your watch history, either from the cloud or the DB, would be the last step.)

Thank you for the reply.

I think the only logical step is to rebuild to a known-good DB state. I have no problem rebuilding the libraries or the associated metadata, but watch history, users/access setup, and server configuration are definitely important to me.

I would be forever grateful if we were able to recreate from scratch and then import all of the important and necessary data. As noted, rebuilding metadata or libraries is a no-brainer.

Any advice would be appreciated.

We know the DBs which DBRepair has available are good.

In the worst-case scenario, we can extract the watch history from the current (damaged) DB. We need to save it somewhere safe before installing the backup.

Does this seem logical to you?

  1. With PMS stopped
  2. Make a full backup, as is, of the Databases directory somewhere safe on the QNAP
  3. Now start DBRepair
  4. Use the REPLACE option and select the 2022-03-29 DB to restore.
  5. When it’s completed, START PMS
  6. Stay in the script (it can safely be idle while PMS is running)
  7. As PMS starts, it will have a lot of DB work to do (2022 → 2025 DB updates)
    – This will take a lot of time.
    – Using the QNAP Dashboard, watch the CPU load (a command-line alternative is sketched after this list).
  8. When the CPU load has returned to normal 'idle' levels (with PMS still running), open PMS in a web browser (Plex Web)
  9. Survey the status
    – Check the library sections which are present versus what you know should be there
    – Start recreating if/as needed.
  10. As you re-add media (scan files) / re-create sections, if the cloud watch status is working as it should (you had it enabled), your watch history will catch up and mark all media appropriately.
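
If SSH is handier than the Dashboard for watching the load in step 7, a rough equivalent (assuming your NAS's top supports batch mode):

# Take a one-shot snapshot of process CPU usage and pick out PMS;
# repeat until its usage settles back to idle levels.
top -b -n 1 | grep "Plex Media Server"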

When you're satisfied the rebuild/update is now going in the right direction, you can exit DBRepair (and clean up the temp files).

We will keep the temporary backup we just made until all your watch history has been restored.

Chuck, thanks for the quick replies. I’m happy to undertake this action, but do have a couple of clarifying questions first.

  1. You mention "In the worst-case scenario, we can extract the watch history from the current (damaged) DB." Is that being done in this scenario, or are we pulling it from the cloud, with extraction as a separate "oh no, this process didn't work" step we take next?
  2. I assume there is a way to manually edit this DB and get it working, but I also assume that is a HUGE and risky undertaking? It seems to my simple and uneducated mind that this DB was working prior to the upgrade (and I have the DB backups from the night before the upgrade) so it does seem strange to me that it can go so wrong.

Derek

Derek,

You’re right, I should have been more clear.

  1. In normal operation, where the cloud backs up all the watch history, if you need to recreate the server instance, it will download that watch history from the cloud backup.

  2. Not knowing whether you have cloud backup enabled (some folks disable it), we would use the old DB (this is the worst-case scenario) and extract the data from the watch-history table stored in the DB. (History will be in the DB or in the cloud; a rough extraction sketch follows below.)
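
For reference, a rough sketch of what that extraction could look like, run with PMS stopped. The "Plex SQLite" path is inferred from the install prefix in your logs, metadata_item_views is an assumed table name, and /share/Public is just an example destination; confirm the names with ".tables" before relying on this:

cd "/share/CACHEDEV1_DATA/.qpkg/PlexMediaServer"
# Dump the assumed watch-history table to CSV somewhere outside the
# Plex data directory before the old database gets replaced.
"./Plex SQLite" "Library/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db" \
    ".mode csv" ".once /share/Public/watch_history.csv" \
    "SELECT * FROM metadata_item_views;"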

Manually editing the DB, while possible, is a monumental task, requiring intimate knowledge of what PMS puts in each record and its relationships with the other table(s).

The biggest problem with manually editing it is knowing the exact record(s) which are causing the problem.

PMS doesn’t readily tell us there’s a problem unless we’re lucky enough to have caught it in the log files as it occurred but before they rolled off the end of the save queue.

Okay, thank you for all the information. I will undertake the 2022 restoration, and then hopefully there's enough configuration there to fast-forward all of the watch history once I ensure the libraries are set up. It is a fairly simple library setup; there are only TV shows and movies as two separate libraries.

I am traveling today, so unfortunately I won't have time to kick this off, but I will endeavor to get it done as soon as possible and report back with a success or failure.

Hi Chuck,

Back again; it was a long drive, but I'm home safe and sound. But… during the hours of driving I remembered why I don't have a more recent backup of the DB: I (smartly?) moved the DB backup process to a different filesystem, so if the installation volume got compromised or otherwise destroyed, I'd still have the DB backups. Fortunately, I now have four of those from the last couple of weeks, listed below.

On that note, I now have the following when I try to restore:

Checking the PMS databases
Check complete. PMS main database is OK.
Check complete. PMS blobs database is OK.
Are you sure you want to restore a previous database backup (Y/N) ? y
Checking for a usable backup.
Database backups available are: 2025-07-17 2025-07-14 2025-07-11 2025-07-08

  1) - 2025-07-17
  2) - 2025-07-14
  3) - 2025-07-11
  4) - 2025-07-08

Select backup date by number or date name (blank = return to menu) 2
Checking backup candidate 2025-07-17
Database backup 2025-07-17 is valid.

Use backup dated: '2025-07-17' ? (Y/N) ? y
Saving current databases with timestamp: '-BACKUP-2025-07-22_15.09.07'
Copying backup database from 2025-07-17 to use as new database.
Copy complete. Performing final check

Database recovery and verification complete.

Sadly, after completing the restoration and starting PMS, the results are negative, with the ON CONFLICT error from SQLite continuing to occur. See logs below.

Jul 22, 2025 15:15:33.252 [139877322099512] INFO - Plex Media Server v1.41.9.9961-46083195d - QNAP TS-653D x86_64 - build: linux-x86_64 qnap - GMT -04:00
Jul 22, 2025 15:15:33.252 [139877322099512] INFO - Linux version: QTS 5.2.5.3145, language: en-US
Jul 22, 2025 15:15:33.264 [139877322099512] INFO - Processor: 4-core Intel(R) Celeron(R) J4125 CPU @ 2.00GHz
Jul 22, 2025 15:15:33.264 [139877322099512] INFO - Compiler is - Clang 11.0.1 (https://plex.tv 9b997da8e5b47bdb4a9425b3a3b290be393b4b1f)
Jul 22, 2025 15:15:33.264 [139877322099512] INFO - /share/CACHEDEV1_DATA/.qpkg/PlexMediaServer/Plex Media Server
Jul 22, 2025 15:15:33.355 [139877324688016] WARN - [CERT/OCSP] getCertInfo failed; skipping stapling
Jul 22, 2025 15:15:33.362 [139877324688016] WARN - [CERT/OCSP] getCertInfo failed; skipping stapling
Jul 22, 2025 15:15:33.362 [139877324688016] INFO - Running migrations. (EPG 0)
Jul 22, 2025 15:15:33.367 [139877324688016] INFO - Running forward migration 202504160804.
Jul 22, 2025 15:15:33.387 [139877324688016] INFO - SQLITE3:0x208, 17, statement aborts at 60: [select * from metadata_items limit 1] database schema has changed
Jul 22, 2025 15:15:33.390 [139877324688016] INFO - SQLITE3:0x208, 17, statement aborts at 60: [select * from metadata_items limit 1] database schema has changed
Jul 22, 2025 15:15:33.393 [139877324688016] INFO - SQLITE3:0x208, 17, statement aborts at 60: [select * from metadata_items limit 1] database schema has changed
Jul 22, 2025 15:15:33.396 [139877324688016] INFO - SQLITE3:0x208, 17, statement aborts at 60: [select * from metadata_items limit 1] database schema has changed
Jul 22, 2025 15:15:33.400 [139877324688016] INFO - SQLITE3:0x208, 17, statement aborts at 60: [select * from metadata_items limit 1] database schema has changed
Jul 22, 2025 15:15:33.403 [139877324688016] INFO - SQLITE3:0x208, 17, statement aborts at 60: [select * from metadata_items limit 1] database schema has changed
Jul 22, 2025 15:15:33.406 [139877324688016] INFO - SQLITE3:0x208, 17, statement aborts at 60: [select * from metadata_items limit 1] database schema has changed
Jul 22, 2025 15:15:33.409 [139877324688016] INFO - SQLITE3:0x208, 17, statement aborts at 60: [select * from metadata_items limit 1] database schema has changed
Jul 22, 2025 15:15:33.411 [139877324688016] INFO - SQLITE3:0x208, 17, statement aborts at 60: [select * from metadata_items limit 1] database schema has changed
Jul 22, 2025 15:15:33.417 [139877324688016] INFO - SQLITE3:0x208, 17, statement aborts at 60: [select * from metadata_items limit 1] database schema has changed
Jul 22, 2025 15:15:33.420 [139877324688016] INFO - SQLITE3:0x208, 17, statement aborts at 60: [select * from metadata_items limit 1] database schema has changed
Jul 22, 2025 15:15:33.423 [139877324688016] INFO - SQLITE3:0x208, 17, statement aborts at 60: [select * from metadata_items limit 1] database schema has changed
Jul 22, 2025 15:15:33.427 [139877324688016] INFO - SQLITE3:0x208, 17, statement aborts at 60: [select * from metadata_items limit 1] database schema has changed
Jul 22, 2025 15:15:33.438 [139877324688016] INFO - SQLITE3:0x208, 17, statement aborts at 60: [select * from metadata_items limit 1] database schema has changed
Jul 22, 2025 15:15:33.446 [139877324688016] INFO - SQLITE3:0x208, 17, statement aborts at 60: [select * from metadata_items limit 1] database schema has changed
Jul 22, 2025 15:15:33.449 [139877324688016] INFO - SQLITE3:0x208, 17, statement aborts at 60: [select * from metadata_items limit 1] database schema has changed
Jul 22, 2025 15:15:33.452 [139877324688016] INFO - SQLITE3:0x208, 17, statement aborts at 60: [select * from metadata_items limit 1] database schema has changed
Jul 22, 2025 15:15:33.455 [139877324688016] INFO - SQLITE3:0x208, 17, statement aborts at 60: [select * from metadata_items limit 1] database schema has changed
Jul 22, 2025 15:15:33.458 [139877324688016] INFO - SQLITE3:0x208, 17, statement aborts at 60: [select * from metadata_items limit 1] database schema has changed
Jul 22, 2025 15:15:33.461 [139877324688016] INFO - Completed forward migration 202504160804.
Jul 22, 2025 15:15:33.461 [139877324688016] INFO - Running forward migration 202504161541.
Jul 22, 2025 15:15:33.503 [139877324688016] ERROR - SQLITE3:0x208, 1, ON CONFLICT clause does not match any PRIMARY KEY or UNIQUE constraint in "INSERT INTO schema_migrations (version, rollback_sql, optimize_on_rollback, min_version) VALUES (:version, :sql, 0, :minVersion) ON CONFLICT(version) DO UPDATE SET rollback_sql=:sql, optimize_on_rollback=0"
Jul 22, 2025 15:15:33.504 [139877324688016] ERROR - Exception inside transaction (inside=1) (/home/runner/_work/plex-media-server/plex-media-server/Library/DatabaseMigrations.cpp:342): sqlite3_statement_backend::prepare: ON CONFLICT clause does not match any PRIMARY KEY or UNIQUE constraint for SQL: INSERT INTO schema_migrations (version, rollback_sql, optimize_on_rollback, min_version) VALUES (:version, :sql, 0, :minVersion) ON CONFLICT(version) DO UPDATE SET rollback_sql=:sql, optimize_on_rollback=0
Jul 22, 2025 15:15:33.510 [139877324688016] ERROR - Exception thrown during migrations, aborting: sqlite3_statement_backend::prepare: ON CONFLICT clause does not match any PRIMARY KEY or UNIQUE constraint for SQL: INSERT INTO schema_migrations (version, rollback_sql, optimize_on_rollback, min_version) VALUES (:version, :sql, 0, :minVersion) ON CONFLICT(version) DO UPDATE SET rollback_sql=:sql, optimize_on_rollback=0
Jul 22, 2025 15:15:33.510 [139877324688016] ERROR - Database corruption: sqlite3_statement_backend::prepare: ON CONFLICT clause does not match any PRIMARY KEY or UNIQUE constraint for SQL: INSERT INTO schema_migrations (version, rollback_sql, optimize_on_rollback, min_version) VALUES (:version, :sql, 0, :minVersion) ON CONFLICT(version) DO UPDATE SET rollback_sql=:sql, optimize_on_rollback=0
Jul 22, 2025 15:15:33.510 [139877324688016] ERROR - Error: Unable to set up server: sqlite3_statement_backend::prepare: ON CONFLICT clause does not match any PRIMARY KEY or UNIQUE constraint for SQL: INSERT INTO schema_migrations (version, rollback_sql, optimize_on_rollback, min_version) VALUES (:version, :sql, 0, :minVersion) ON CONFLICT(version) DO UPDATE SET rollback_sql=:sql, optimize_on_rollback=0 (N4soci10soci_errorE)

NONE of this makes sense, since the database and service were running fine until the 18th, when the software upgrade occurred.

Is there possibly something else at play here that I’m/we’re overlooking?

More updates. So, I have now restored the latest backup I have (2025-07-17), and it failed with the installed version (latest production). As a next step I thought I'd try rolling back the software while keeping the 2025-07-17 DB (which is what it was running before all this went down). I found in my archives what I believe was the last installed version, PlexMediaServer-1.41.7.9823-59f304c16-x86_64.qpkg.

Reinstalling this has now allowed the server to start, with the following log. I have all the history and libraries up, and all users appear to be as expected. The server load is minimal now, so it must be done with all its catching up.

I’m wondering, where do we go from here?

Please give me the ZIP file. Posting this long blob isn't sufficient, and it's impossible to search through.

If you have it working as of 1.41.7, this gives us something to go on.

Sorry, I realized afterwards that attaching the raw log was useless. ZIP attached.

Plex Media Server.log.zip (2.9 KB)

Attaching the singular .log file is also mostly useless.

Further, DEBUG logging is not enabled. I can’t see what it’s doing.

Please enable DEBUG logging (SAVE)

Restart PMS.

Let’s see where we’re at.

Make a ZIP (or tar.gz) of the Logs directory. Attach it here, please.
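
For example, over SSH (the path is inferred from the install prefix in your logs, and /share/Public is just an example destination; drop the archive on any share you can reach from your desktop):

cd "/share/CACHEDEV1_DATA/.qpkg/PlexMediaServer/Library/Plex Media Server"
# Bundle the entire Logs directory so nothing gets missed.
tar -czf /share/Public/PlexLogs.tar.gz Logs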

Ok, apologies again. Attached now are the logs after enabling debug and restarting PMS and waiting a few minutes.

PlexLog.zip (462.2 KB)

@madhitz

  1. The server is up.

  2. It’s claimed.

  3. It’s at LAN IP 10.100.36.5

  4. You have IPv6 enabled (Settings - Server - Network) but have a v4 LAN.
    You can turn off IPv6 support to make things smoother.

  5. You have VERBOSE logging enabled (please turn it off), and restart PMS afterwards.

  6. I see user alikrok signing into it.

  7. Looks like some container apps are trying to connect but the certificate is not matching.

  8. It's looking for plex.wad****** (obfuscated) and not finding it.

Everything else looking good.
No glaring errors.

Is there anything it’s not doing that it should ?

Hi Chuck,

The thing it's not doing that it should: when I do a software update to the latest released version, the above errors begin to occur. So everything works fine now; install the new software, and it will not start.

Because 1.41.7 has known issues (DB bloat), I recommend backing down to 1.41.5 or 1.41.6. I will provide you the QPKG if needed.

I’ve asked engineering for assistance on what this is.

EDIT: Which version are you upgrading from?