How to repair a main DB "UNIQUE constraint failed" error that prevents recent PMS (>1.30) from starting

Last Working Server Version#:1.30.0.6486-629d58034
Last Attempted Server Version#:1.32.1.6999-91e1e2e2c

Previously, I had run into an uncaught exception of type std::bad_cast and was not able to upgrade as a result. I initially suspected a library/dependency issue, since simply rolling back to 1.30.0 resolved it, and I opened a couple of threads to solicit feedback on the error.

In the most recent builds, the error output changed, so I enabled verbose logging via my Preferences.xml file. I am surprised the online docs give no clear indication of how to do this; they only reference settings in the web interface, which never starts in my scenario while this problem is occurring. It's always a coin toss searching within your own knowledge domain, but GPT-4 saved me a few minutes here: it was effective, and accurately beat my five minutes of support-article and forum searching to supply the specific XML parameter and value.

media:# cat /var/lib/plexmediaserver/Library/Application\ Support/Plex\ Media\ Server/Preferences.xml

<?xml version="1.0" encoding="utf-8"?>

<Preferences LogDebug="true"

This gave me a larger set of errors on startup of v1.32, immediately pointing to database corruption; something I had not seen before, and something that v1.30.0 and earlier seemingly ignored, so I have no idea how long these problems may have existed in my DB. Presumably they were created somewhere between 2016 (0.9.16.6.1993-5089475) and 2023 (1.30.0.6486-629d58034), as this system has been evergreening for some time now.

So I backed up my DB (and added a cron job to do this monthly) and am working on resolving this issue so that newer PMS versions can run with my current DB. I am really only trying to preserve a decade of family watch preferences, which are rarely used in search, so a rebuild is not the end of the world. But I prefer to solve this type of problem, as the fact that v1.30.0 still runs suggests I should be able to move forward from a working DB.
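For the monthly cron job, something like this sketch works; the database path is an assumption from a standard Debian plexmediaserver install, and sqlite3's `.backup` command makes a consistent copy even while Plex holds the file open (though stopping the server first is safer still):

```shell
# Hedged sketch of a monthly backup script for cron (e.g. schedule "0 3 1 * *").
# The DB path below is an assumption from a standard Debian install; adjust it
# to your system. Requires the sqlite3 CLI on PATH.
backup_plex_db() {
  # $1 = source database, $2 = destination directory
  mkdir -p "$2"
  sqlite3 "$1" ".backup '$2/$(basename "$1")-$(date +%F)'"
}

PLEX_DB="/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db"
if [ -f "$PLEX_DB" ]; then
  backup_plex_db "$PLEX_DB" /var/backups/plex
fi
```

The date suffix mirrors the naming of the old backups I later found sitting next to the live database.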

media:# cat /var/lib/plexmediaserver/Library/Application\ Support/Plex\ Media\ Server/Logs/Plex\ Media\ Server.5.log | grep ERROR
May 22, 2023 09:17:54.157 [140508512029328] ERROR - SQLITE3:0x80000001, 779, index corruption at line 94622 of [a29f994989]
May 22, 2023 09:17:54.157 [140508512029328] ERROR - SQLITE3:0x80000001, 779, statement aborts at 48: [update directories set created_at = iif(typeof(created_at) in ('integer', 'real'), created_at, strftime('%s', created_at, 'utc')), updated_at = iif(typeof(updated_at) in ('integer', 'real'), updated_at, strftime('%s', updated_at, 'utc')), deleted_at = iif(typeof(deleted_at) in ('integer', 'real'), deleted_at, strftime('%s', deleted_at, 'utc'))] database disk image is malformed
May 22, 2023 09:17:54.157 [140508512029328] ERROR - Exception inside transaction (inside=1) (/data/jenkins/server/3528069351/Library/DatabaseMigrations.cpp:291): sqlite3_statement_backend::loadOne: database disk image is malformed
May 22, 2023 09:17:54.161 [140508512029328] ERROR - Exception thrown during migrations, aborting: sqlite3_statement_backend::loadOne: database disk image is malformed
May 22, 2023 09:17:54.206 [140508433046328] ERROR - Thread: Uncaught exception running async task which was spawned by thread 140508512029328: std::bad_cast
May 22, 2023 09:17:54.208 [140508512029328] ERROR - SQLITE3:0x80000001, 1, no such table: metadata_agent_providers in “update metadata_agent_providers set online = 0”
May 22, 2023 09:17:54.208 [140508512029328] ERROR - Error: Unable to set up server: sqlite3_statement_backend::prepare: no such table: metadata_agent_providers for SQL: update metadata_agent_providers set online = 0 (N4soci10soci_errorE)

I used the Plex SQLite tool to VACUUM and REINDEX, but stumbled on the UNIQUE constraint failed error below.

sqlite> VACUUM;
sqlite> REINDEX;
Runtime error: UNIQUE constraint failed: schema_migrations.version (19)

I discovered GitHub - ChuckPa/PlexDBRepair: Database repair utility for Plex Media Server databases, but unfortunately this tool was unable to resolve the issue; it simply reported additional occurrences in table fts4_tag_titles_icu_docsize.docid in addition to the one in schema_migrations.version.

Error: near line 540097: UNIQUE constraint failed: fts4_tag_titles_icu_docsize.docid
Error: near line 540098: UNIQUE constraint failed: fts4_tag_titles_icu_docsize.docid
Error: near line 540099: UNIQUE constraint failed: fts4_tag_titles_icu_docsize.docid
Error: near line 540100: UNIQUE constraint failed: fts4_tag_titles_icu_docsize.docid
Error: near line 540101: UNIQUE constraint failed: fts4_tag_titles_icu_docsize.docid
Error: near line 580186: UNIQUE constraint failed: schema_migrations.version
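For reference, those "near line N" messages read like the output of a plain dump-and-reimport cycle. My assumption is that the repair tool does something equivalent to this sketch (run on copies only):

```shell
# Dump/reimport sketch, on COPIES of the database only. This is an assumption
# about what PlexDBRepair does internally, inferred from the "near line N"
# errors above; it is not taken from the tool's source.
rebuild_db() {
  # $1 = source db, $2 = path for the rebuilt db (must not already exist)
  sqlite3 "$1" ".dump" | sqlite3 "$2"
}
```

Any row the fresh file rejects (e.g. a duplicate schema_migrations.version) is reported against the line number of the .dump output, which would match the "near line 540097" style seen above.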

My next step is to inspect these tables using DB Browser for SQLite and see if I can eyeball some safe removals of the secondary/duplicate keys, assuming that loading the first occurrence of a key is how v1.30.0 copes. I suspect it is not even using these tables from the main database at runtime, and that they relate to upgrades and secondary functions.

While I post here, I am also refreshing metadata and emptying trash across all my media folders, just in case those actions reduce or clear some of the problem rows.

I am hopeful somebody else has come across these errors, caused by new DB checks that now stop partially corrupt databases from working past 1.30, yet still allow a 1.30 downgrade to operate without any signs of trouble.

My best guess is that these types of errors come from unclean startups, shutdowns, and/or upgrades, which have surely happened over the many years, yet never resulted in a failure to start until now.

Since there were only 307 rows, I simply sorted the version column of the schema_migrations table and eyeballed for duplicate values in that field.

Sure enough, I found a duplicate. See the attached screenshot schema_migrations_dupe.png showing the unsorted table. Both duplicates had no values in the other columns, so removing one of them was likely very safe.
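Rather than eyeballing, duplicates can also be surfaced (and, on a copy, removed) with a couple of queries; the table and column names below are taken from the errors above:

```shell
# Find versions that appear more than once in schema_migrations.
find_dupes() {
  sqlite3 "$1" "SELECT version, COUNT(*) FROM schema_migrations
                GROUP BY version HAVING COUNT(*) > 1;"
}

# Keep the first copy of each version (lowest rowid) and delete the rest.
# Run against a COPY of com.plexapp.plugins.library.db, never the live file.
drop_dupes() {
  sqlite3 "$1" "DELETE FROM schema_migrations
                WHERE rowid NOT IN (SELECT MIN(rowid) FROM schema_migrations
                                    GROUP BY version);"
}
```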

I wrote the change, then moved the db file back onto my (headless) server, overwriting the old db. But when I started it back up, the GUI did not come up. I already had an idea of what was next, and went directly to remove some 20 rows whose version values looked like 500000000000 and 500000000001, which seemed odd, as they were the only rows that did not follow the YYYYMMDDTT.XX format. I figured perhaps these were also corrupt in some way, noting they contained SQL commands related to the fts4_xxx table referenced in my other UNIQUE constraint error.
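As SQL, that second edit amounted to something like this; the 5000000000X prefix match is based on the rows I saw, so review with the SELECT before deleting anything, and only work on a copy:

```shell
# Review, then delete, the rows whose version does not follow the usual
# YYYYMMDDTT.XX format. LIKE on the text prefix avoids any text-vs-integer
# comparison surprises with SQLite's type affinity.
list_odd_migrations() {
  sqlite3 "$1" "SELECT version FROM schema_migrations WHERE version LIKE '5000000000%';"
}
drop_odd_migrations() {
  sqlite3 "$1" "DELETE FROM schema_migrations WHERE version LIKE '5000000000%';"
}
```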

Moving the db back onto the server after the second edit and restarting yielded a working web page. On further inspection, it looks like the server stepped through migrations on each subsequent restart. Here are my last 5 restarts (default log depth) during the move-over of my two db changes.

#5 and #4 were both just restarts of the current version with the DB error - I now realize it was stuck on migration 202210260000 when it hit the error previously. Looking back, I believe it was just the logging of the migration that failed, due to the index problem.

May 22, 2023 11:50:18.017 [139695918926648] INFO - Plex Media Server v1.32.1.6999-91e1e2e2c - Debian GNU/Linux PC x86_64 - build: linux-x86_64 debian - GMT -04:00
..
May 22, 2023 11:50:18.147 [139695998675600] INFO - Running migrations. (EPG 0)
May 22, 2023 11:50:18.150 [139695998675600] INFO - Running forward migration 202210260000.
May 22, 2023 11:50:18.162 [139695998675600] ERROR - SQLITE3:0x80000001, 779, index corruption at line 94622 of [a29f994989]

#3 looks like it was stuck in a migration script loop, repeating a small set of scripts after my first DB edit and my downgrade back to the working 1.30.0. Possibly I could have just restarted it again and again here without removing the second set of 5000000000X entries. I note the down migrations, which presumably indicate it is downgrading the schema because I moved back to 1.30 from 1.32.

May 22, 2023 12:17:04.686 [0x7ff5735fcb38] INFO - Plex Media Server v1.30.0.6486-629d58034 - Debian GNU/Linux PC x86_64 - build: linux-x86_64 debian - GMT -04:00

May 22, 2023 12:17:04.754 [0x7ff5771fc0d0] INFO - Running migrations. (EPG 0)
May 22, 2023 12:17:04.775 [0x7ff5771fc0d0] INFO - SQLITE3:0x80000001, 283, recovered 23 frames from WAL file /var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plu
g-in Support/Databases/com.plexapp.plugins.library.blobs.db-wal
May 22, 2023 12:17:04.779 [0x7ff5771fc0d0] INFO - Running migrations. (EPG 0)
May 22, 2023 12:17:04.780 [0x7ff5771fc0d0] INFO - Running down migration 202212022300.031.
May 22, 2023 12:17:04.781 [0x7ff5771fc0d0] INFO - Completed down migration 202212022300.031.
May 22, 2023 12:17:04.781 [0x7ff5771fc0d0] INFO - Running down migration 202212022300.021.
May 22, 2023 12:17:04.781 [0x7ff5771fc0d0] INFO - Completed down migration 202212022300.021.
May 22, 2023 12:17:04.781 [0x7ff5771fc0d0] INFO - Running down migration 202212022300.011.
May 22, 2023 12:17:04.782 [0x7ff5771fc0d0] INFO - Completed down migration 202212022300.011.
May 22, 2023 12:17:04.782 [0x7ff5771fc0d0] INFO - Running down migration 202212022300.
May 22, 2023 12:17:04.782 [0x7ff5771fc0d0] INFO - Analyzing database.
May 22, 2023 12:17:04.786 [0x7ff5771fc0d0] INFO - Vacuuming database.
May 22, 2023 12:17:08.173 [0x7ff5771fc0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 12:17:08.176 [0x7ff5771fc0d0] INFO - Completed down migration 202212022300.
May 22, 2023 12:17:08.176 [0x7ff5771fc0d0] INFO - Running down migration 202212022200.031.
May 22, 2023 12:17:08.182 [0x7ff5771fc0d0] INFO - Completed down migration 202212022200.031.
May 22, 2023 12:17:08.182 [0x7ff5771fc0d0] INFO - Running down migration 202212022200.021.
May 22, 2023 12:17:08.183 [0x7ff5771fc0d0] INFO - Completed down migration 202212022200.021.
May 22, 2023 12:17:08.183 [0x7ff5771fc0d0] INFO - Running down migration 202212022200.011.
May 22, 2023 12:17:08.184 [0x7ff5771fc0d0] INFO - Completed down migration 202212022200.011.
May 22, 2023 12:17:08.184 [0x7ff5771fc0d0] INFO - Running down migration 202212022200.
May 22, 2023 12:17:08.185 [0x7ff5771fc0d0] INFO - Analyzing database.
May 22, 2023 12:17:08.190 [0x7ff5771fc0d0] INFO - Vacuuming database.
May 22, 2023 12:17:10.395 [0x7ff5771fc0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed

#2 - I believe this was just another restart in my sequence of testing for the web UI availability

May 22, 2023 12:34:20.154 [0x7f30187fcb38] INFO - Plex Media Server v1.30.0.6486-629d58034 - Debian GNU/Linux PC x86_64 - build: linux-x86_64 debian - GMT -04:00
May 22, 2023 12:34:20.154 [0x7f30187fcb38] INFO - Linux version: 12 (bookworm), language: en-US
May 22, 2023 12:34:20.155 [0x7f30187fcb38] INFO - Processor: 14-core Intel(R) Xeon(R) CPU E5-2660 v4 @ 2.00GHz
May 22, 2023 12:34:20.155 [0x7f30187fcb38] INFO - Compiler is - Clang 11.0.1 (https://plex.tv 9b997da8e5b47bdb4a9425b3a3b290be393b4b1f)
May 22, 2023 12:34:20.155 [0x7f30187fcb38] INFO - /usr/lib/plexmediaserver/Plex Media Server
May 22, 2023 12:34:20.176 [0x7f301d5440d0] INFO - [CERT/OCSP] Successfully retrieved response from cache.
May 22, 2023 12:34:20.223 [0x7f301d5440d0] INFO - Running migrations. (EPG 0)
May 22, 2023 12:34:20.248 [0x7f301d5440d0] INFO - Running migrations. (EPG 0)
May 22, 2023 12:34:22.130 [0x7f301512cb38] INFO - LibraryUpdateManager path watching is disabled
May 22, 2023 12:34:25.004 [0x7f301d5440d0] WARN - [DatabaseFixups] Held transaction for too long (/data/jenkins/server/3521777883/Library/DatabaseFixups.cpp:358): 0.310000 seconds
May 22, 2023 12:34:27.067 [0x7f3015e6bb38] WARN - [MediaProviderManager] Unrecognized MediaProvider feature: availability
May 22, 2023 12:34:27.067 [0x7f3015e6bb38] WARN - [MediaProviderManager] Unrecognized MediaProvider feature: availability-platforms
May 22, 2023 12:34:27.132 [0x7f3013443b38] WARN - [MediaProviderManager] Unrecognized MediaProvider feature: availability
May 22, 2023 12:34:27.132 [0x7f3013443b38] WARN - [MediaProviderManager] Unrecognized MediaProvider feature: availability-platforms
May 22, 2023 12:34:33.235 [0x7f3013977b38] WARN - [Req#6e] QueryParser: Invalid field 'sectionID' found, ignoring.
May 22, 2023 12:34:33.337 [0x7f3013977b38] WARN - [Req#6e] SLOW QUERY: It took 380.000000 ms to retrieve 50 items.
May 22, 2023 12:34:58.828 [0x7f3012a2cb38] WARN - [Req#f8] QueryParser: Invalid field 'sectionID' found, ignoring.
May 22, 2023 12:34:58.934 [0x7f3012a2cb38] WARN - [Req#f8] SLOW QUERY: It took 300.000000 ms to retrieve 50 items.
May 22, 2023 12:35:01.293 [0x7f30179d6b38] ERROR - MetadataAgentPostProcessor: failed to find agent for identifier com.plexapp.agents.imdb

#1 Right before my last restart, it is still running more migrations, even though I have not changed the database further. This is where I actually noticed that some of these logs are "down migrations" whereas others are "forward migrations".

May 22, 2023 14:51:26.170 [0x7f78661fcb38] INFO - Plex Media Server v1.30.0.6486-629d58034 - Debian GNU/Linux PC x86_64 - build: linux-x86_64 debian - GMT -04:00
May 22, 2023 14:51:26.171 [0x7f78661fcb38] INFO - Linux version: 12 (bookworm), language: en-US
May 22, 2023 14:51:26.171 [0x7f78661fcb38] INFO - Processor: 14-core Intel(R) Xeon(R) CPU E5-2660 v4 @ 2.00GHz
May 22, 2023 14:51:26.171 [0x7f78661fcb38] INFO - Compiler is - Clang 11.0.1 (https://plex.tv 9b997da8e5b47bdb4a9425b3a3b290be393b4b1f)
May 22, 2023 14:51:26.171 [0x7f78661fcb38] INFO - /usr/lib/plexmediaserver/Plex Media Server
May 22, 2023 14:51:26.193 [0x7f786aeaa0d0] INFO - [CERT/OCSP] Successfully retrieved response from cache.
May 22, 2023 14:51:26.239 [0x7f786aeaa0d0] INFO - Running migrations. (EPG 0)
May 22, 2023 14:51:26.243 [0x7f786aeaa0d0] INFO - Running forward migration 20210922132300.
May 22, 2023 14:51:26.266 [0x7f786aeaa0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 14:51:26.270 [0x7f786aeaa0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 14:51:26.273 [0x7f786aeaa0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 14:51:26.277 [0x7f786aeaa0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 14:51:26.281 [0x7f786aeaa0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 14:51:26.285 [0x7f786aeaa0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 14:51:26.288 [0x7f786aeaa0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 14:51:26.290 [0x7f786aeaa0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 14:51:26.293 [0x7f786aeaa0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 14:51:26.295 [0x7f786aeaa0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 14:51:26.297 [0x7f786aeaa0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 14:51:26.299 [0x7f786aeaa0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 14:51:26.302 [0x7f786aeaa0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 14:51:26.304 [0x7f786aeaa0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 14:51:26.306 [0x7f786aeaa0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 14:51:26.308 [0x7f786aeaa0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 14:51:26.311 [0x7f786aeaa0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 14:51:26.313 [0x7f786aeaa0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 14:51:26.315 [0x7f786aeaa0d0] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 14:51:26.317 [0x7f786aeaa0d0] INFO - Completed forward migration 20210922132300.
May 22, 2023 14:51:26.354 [0x7f786aeaa0d0] INFO - Running migrations. (EPG 0)

In my last / current iteration I upgraded to the current version and, sure enough, more migrations ran, including migration scripts related to the 5000000000X rows I deleted. It would seem that the table I edited was a mere log, and not the source of the migration scripts, which all seem to have been applied now, errors or not.

May 22, 2023 16:01:58.791 [140406092188472] INFO - Plex Media Server v1.32.1.6999-91e1e2e2c - Debian GNU/Linux PC x86_64 - build: linux-x86_64 debian - GMT -04:00
May 22, 2023 16:01:58.792 [140406092188472] INFO - Linux version: 12 (bookworm), language: en-US
May 22, 2023 16:01:58.792 [140406092188472] INFO - Processor: 14-core Intel(R) Xeon(R) CPU E5-2660 v4 @ 2.00GHz
May 22, 2023 16:01:58.792 [140406092188472] INFO - Compiler is - Clang 11.0.1 (https://plex.tv 9b997da8e5b47bdb4a9425b3a3b290be393b4b1f)
May 22, 2023 16:01:58.792 [140406092188472] INFO - /usr/lib/plexmediaserver/Plex Media Server
May 22, 2023 16:01:58.875 [140406152133264] INFO - [CERT/OCSP] Successfully retrieved response from cache.
May 22, 2023 16:01:58.920 [140406152133264] INFO - Running migrations. (EPG 0)
May 22, 2023 16:01:58.923 [140406152133264] INFO - Running forward migration 500000000000.
May 22, 2023 16:01:59.100 [140406152133264] WARN - Held transaction for too long (/data/jenkins/server/3528069351/Library/DatabaseMigrations.cpp:291): 0.180000 seconds
May 22, 2023 16:01:59.222 [140406152133264] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 16:01:59.226 [140406152133264] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed
May 22, 2023 16:01:59.230 [140406152133264] INFO - SQLITE3:0x80000001, 17, statement aborts at 58: [select * from metadata_items limit 1] database schema has changed

All to say, I am up and running again on my original database, with the current Debian PMS packages, and passing integrity tests:

media:/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases# /usr/lib/plexmediaserver/Plex\ SQLite /var/lib/plexmediaserver/Library/Application\ Support/Plex\ Media\ Server/Plug-in\ Support/Databases/com.plexapp.plugins.library.db
SQLite version 3.39.4 2022-09-29 15:55:41
Enter ".help" for usage hints.
sqlite> PRAGMA integrity_check;
ok

It is worth noting that I discovered older copies of my DB (2020, 2021) in the same directory as my database, presumably created by the Debian install script or by Plex during significant schema changes that may not be downgradable. I had not searched very hard, as one function of the PlexDBRepair tool is to look for and/or try restoring backups; but it must operate in a different context than these ones I found, since it located my db automatically yet not these backups in the same folder.

-rw-r--r-- 1 plex plex 151770112 May 22 16:02 com.plexapp.plugins.library.db
-rw-r--r-- 1 plex plex 539601920 Dec 28 2020 com.plexapp.plugins.library.db-2020-12-28
-rw-r--r-- 1 plex plex 541475840 Dec 31 2020 com.plexapp.plugins.library.db-2020-12-31
-rw-r--r-- 1 plex plex 540361728 Jan 3 2021 com.plexapp.plugins.library.db-2021-01-03
-rw-r--r-- 1 plex plex 540453888 Jan 6 2021 com.plexapp.plugins.library.db-2021-01-06

Those file names look like Plex's standard database backups. The PlexDBRepair tool will only list backups that have an associated blobs database (com.plexapp.plugins.library.blobs.db-*), so if those aren't also present, that would explain why the tool didn't list them. They may also be a good indicator of when corruption first started, assuming you have 'Settings > Scheduled Tasks > Backup database every three days' enabled. Plex only keeps around the last 4 automatic backups, but the process can fail if corruption is detected. If your latest backup is from January 2021, that could be when your database first started having issues.
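A quick way to check that pairing theory (the blobs-suffix naming convention here is an assumption, not taken from the tool's source):

```shell
# For each library backup in the given Databases directory, report whether a
# blobs backup with the same date suffix sits alongside it. $1 = directory.
check_backup_pairs() {
  for b in "$1"/com.plexapp.plugins.library.db-20*; do
    [ -e "$b" ] || continue   # glob matched nothing
    blob="$1/com.plexapp.plugins.library.blobs.db-${b##*.db-}"
    if [ -f "$blob" ]; then
      echo "$(basename "$b"): paired"
    else
      echo "$(basename "$b"): no blobs backup"
    fi
  done
}
```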
