Repairing a Corrupt Database - "Error: no such index: index_title_sort_naturalsort" & "Error: near line" ... ": UNIQUE constraint failed: statistics_bandwidth.id"

Server Version#: 1.18.2.2058
Player Version#: 4.16.2

I’m running through the Repair a Corrupt Database article on my unRaid server and have found many an error.

*** in database main ***
On tree page 129003 cell 12: Rowid 50 out of order
On tree page 129003 cell 10: Rowid 56 out of order
On tree page 129003 cell 9: Rowid 121 out of order
On tree page 129003 cell 6: Rowid 52 out of order
On tree page 129003 cell 5: Rowid 112 out of order
On tree page 129003 cell 2: Rowid 57 out of order
On tree page 129003 cell 1: Extends off end of page
On tree page 129003 cell 0: Extends off end of page
On tree page 128992 cell 41: Rowid 16084 out of order
On tree page 128320 cell 44: Rowid 78155 out of order
On tree page 139696 cell 13: Rowid 3 out of order
On tree page 139696 cell 12: Rowid 3 out of order
On tree page 139696 cell 10: Rowid 90 out of order
On tree page 139696 cell 9: Rowid 78087 out of order
On tree page 139696 cell 5: Rowid 3 out of order
On tree page 139696 cell 4: Rowid 78082 out of order
On tree page 139696 cell 0: Rowid 49 out of order
Multiple uses for byte 619 of page 139696
On tree page 128320 cell 41: Rowid 78077 out of order
Page 132780: free space corruption
Page 60093: free space corruption
Page 60189: free space corruption
On tree page 60386 cell 125: Extends off end of page
On tree page 60386 cell 69: invalid page number -1037499648
On tree page 60386 cell 62: invalid page number -1038089472
On tree page 60386 cell 55: invalid page number -1039203584
On tree page 60386 cell 48: invalid page number -1039793408
On tree page 60386 cell 41: invalid page number -1040186624
On tree page 60386 cell 34: invalid page number -1048968448
On tree page 60386 cell 27: invalid page number -1049361664
On tree page 60386 cell 20: invalid page number -1049951488
On tree page 60386 cell 14: Extends off end of page
On tree page 60386 cell 13: Extends off end of page
On tree page 60386 cell 7: Extends off end of page
On tree page 60386 cell 6: Extends off end of page
On tree page 60386 cell 0: Extends off end of page
On tree page 142996 cell 13: invalid page number -1375480320
On tree page 61098 cell 39: Extends off end of page
On tree page 61098 cell 37: Extends off end of page
On tree page 60514 cell 3: invalid page number 151126281
On tree page 60514 cell 2: invalid page number 151126299
Multiple uses for byte 711 of page 60514
database disk image is malformed
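(That’s the output from the integrity check step of the article, which as far as I can tell is just SQLite’s built-in check, something along these lines:)

sqlite3 com.plexapp.plugins.library.db "PRAGMA integrity_check"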

When attempting the second step of Running a Repair.

sqlite3 com.plexapp.plugins.library.db "DROP index 'index_title_sort_naturalsort'"

I get the error

Error: no such index: index_title_sort_naturalsort

Any help would be very appreciated!

The purpose of that sqlite3 statement is to delete (“DROP”) that index from the database, if it exists. This error just confirms that it’s already gone. You’re fine to proceed with the remaining steps to attempt to recover your database. If it doesn’t succeed, you made a backup, right?
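As an aside, if you’d rather that statement silently do nothing when the index is already gone instead of reporting an error, SQLite accepts an IF EXISTS clause, along the lines of:

sqlite3 com.plexapp.plugins.library.db "DROP INDEX IF EXISTS 'index_title_sort_naturalsort'"

Either way the end result is the same: the index isn’t there anymore.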

[Edit to add]
Also, several months back, UnRaid users were reporting persistent database corruption issues using Plex. I think this began after upgrading to 6.7.x. I don’t use it myself, but it was interesting to follow at the time. Here’s one of several threads discussing it:

Ahh. Oh yeah, I back up weekly and this morning happened to be the scheduled time so I didn’t need to manually back up.

Yeah, I came across that, along with reports to Limetech saying they have a fix for it in 6.8-rc4 and that it shouldn’t corrupt further if some settings in the Plex Docker template are changed, which I’ve done. I just wanted to repair the database so I can update to the current release candidate and see if that resolves it once and for all. :slight_smile: Really appreciate the suggestion though! :slight_smile:

Also appreciate the timely reply! :slight_smile: .

Got this when running the final command.

sqlite3 com.plexapp.plugins.library.db < dump.sql
Error: near line 545215: UNIQUE constraint failed: statistics_bandwidth.id
Error: near line 545216: UNIQUE constraint failed: statistics_bandwidth.id

That one I’m not sure about. If it created the DB I’d go ahead and see if it is usable. You’ve got the backup, so I don’t think it can hurt anything. If it didn’t create the DB you may have to revert to an older, working copy.

I’ve been watching the directory while running these commands. Looks like it recreated com.plexapp.plugins.library.db, but it’s 0 KB. :confused:

Starting it up shows the libraries, but nothing else is populating. Looking at the PMS log shows that it’s looping on “Starting Plex Media Server.”

Edit: Looks to be the same issue this person experienced, though they didn’t receive assistance: Database corruption repair failed.

That error indicates that your database has 2 entries using an ID that is meant to be unique. Unfortunately, there isn’t an easy fix. You’ll need to identify the duplicate, remove 1, then try the repair again.

Not quite sure I follow where I would find the duplicate? In my Plex library or in a database file? Or something else?

In the statistics_bandwidth table within your Plex database.

Sorry, still don’t know what or where you’re referring to.

If you didn’t understand, then it’s probably something you shouldn’t be trying to fix yourself. Doing the wrong thing to your database file can really mess it up. PM me your database file and I’ll take a look for you.

I’m usually good when pointed in the right direction given enough details. I can always restore a backup if I break anything. Although, I’m happy to take any help you can provide.

Looks like this isn’t very straightforward either. I went to Settings > Troubleshooting > Download Database and downloaded an empty zip.

I saw you made a download tool, but that looks to be for Windows and last updated years ago. I could send a zipped copy of my database folder/main DB file, but I assume that wouldn’t be anonymized.

The repair procedure you tried didn’t work so the new database it created is empty. You’ll need to put back the original database.

You’ll need something that can view an SQLite database. I use a program called DB Browser for SQLite. Shut down PMS, then use the program to open your database. Under the browse option, find the table called statistics_bandwidth and look through it. The first column is the ID. See if you can find the duplicate ID and delete one of those rows. Save your changes, exit the program, and restart PMS.
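If you’d rather stay on the command line, roughly the same check can be done with sqlite3 while PMS is shut down, assuming the table is still readable despite the corruption (a sketch, run from the folder containing the database; 12345 is just a placeholder for whatever ID turns up duplicated):

sqlite3 com.plexapp.plugins.library.db "SELECT id, COUNT(*) FROM statistics_bandwidth GROUP BY id HAVING COUNT(*) > 1"
sqlite3 com.plexapp.plugins.library.db "SELECT rowid, * FROM statistics_bandwidth WHERE id = 12345"

The first command lists any IDs that appear more than once; the second shows the rows for a given ID so you can decide which one to delete.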

That is the restored database before any SQLite commands were initiated.

Cool, I’ll play with that and update on how it goes.

Now that I understand what half of the commands in the Repairing Database article do, I’ve checked before and after the database rebuild is attempted, and it either comes back with the following or a blank database file.

I thought it would be more exciting than this, but there don’t appear to be any duplicates.

I’m thinking maybe a duplicate gets created when dumping com.plexapp.plugins.library.db to dump.sql?

Edit: I was able to open the dump.sql.
These are the two lines flagged in the errors above, along with the surrounding entries:

545214 | INSERT INTO statistics_bandwidth VALUES(78111,1,129,4,'2019-06-10 02:00:00',1,338962);
545215 | INSERT INTO statistics_bandwidth VALUES(78110,5900019,129,4,'2019-06-10 01:00:00',0,2640);
545216 | INSERT INTO statistics_bandwidth VALUES(78109,5900019,129,3,'2019-06-10 00:00:00',0,2640);
545217 | INSERT INTO statistics_bandwidth VALUES(78108,5900019,129,2,'2019-06-10 00:00:00',0,2640);
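(Side note: assuming every INSERT line follows the format above, a one-liner like this should list any IDs that show up more than once in the dump:)

grep "INSERT INTO statistics_bandwidth" dump.sql | sed 's/.*VALUES(\([0-9]*\),.*/\1/' | sort | uniq -d

It just pulls the first value out of each statistics_bandwidth INSERT and prints the ones that repeat.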

Edit 2: Since none of these lines were exact duplicates, I attempted deleting them and checking to see if I would get the error again.

I didn’t log my first attempts, but they ended with the error coming up at a different location, so I recreated the dump.sql and started over.

I deleted both 545215 & 545216, only to get the same error pointing at lines 545167 & 545168, so I deleted the newly created db and removed both of those lines as well.

The annoying thing I can’t figure out is that when dump.sql is recreating com.plexapp.plugins.library.db, it gets to about 130,000 KB (for reference, dump.sql is 140,746 KB and com.plexapp.plugins.library.db.original is 167,012 KB) and then drops to 0 KB…

The IDs are the 78110 and 78109 values. The error isn’t meant to identify both copies of each duplicate, just that these rows reuse the ID of an earlier entry.

The process actually “dumps” all the info into the database, then saves the file at the end. If there was a problem during this, it can’t save the changes, which can result in a 0 KB file. There were no other errors?

Ohhhhh. I thought you meant the line number was the ID. Completely spaced that you meant the first column of the values. I’ll check this out.

Nope, just the two sets of statistics_bandwidth errors.

Okay, now that I’m back to this. (I had some issues with PMS looping on startup and had to wait for a preclear to finish before I could reboot and restore a backup.)

So I was able to find the duplicate IDs, 12623 & 12624.

503046 | INSERT INTO statistics_bandwidth VALUES(12623,1,19,-2,'2016-09-01 00:00:00',1,2422766393305);
503047 | INSERT INTO statistics_bandwidth VALUES(12624,1,19,-4,'2016-09-01 00:00:00',1,199623);
...
518756 | INSERT INTO statistics_bandwidth VALUES(12624,1,'local://70997',6,118000,0,'2019-06-10 11:34:15');
518757 | INSERT INTO statistics_bandwidth VALUES(12623,1,'local://72147',NULL,NULL,1,'2019-06-10 11:31:52');

Finding the duplicates, deleting the later lines, saving, and then rebuilding com.plexapp.plugins.library.db caused it to go to 0 KB again.

After spending some time investigating further and comparing the dump.sql to another Plex library, I found two “/****** CORRUPTION ERROR *******/” lines at the beginning and end of the statistics_bandwidth section (didn’t end up making a difference from what I can tell) and changed the end line from ROLLBACK to COMMIT.

I was able to rebuild the database and get Plex working again with no issue! (I might have had to restore an appdata backup through unRaid and copy the rebuilt DB back over.)

Anyhow, I don’t know if that was the right way to do it, so please let me know if you see any potential problems with these troubleshooting steps.
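For reference, the overall shape of the dump was roughly this (going from memory, so treat it as a sketch):

BEGIN TRANSACTION;
...
/****** CORRUPTION ERROR *******/
(the statistics_bandwidth INSERT lines)
/****** CORRUPTION ERROR *******/
...
ROLLBACK;

and the change was just swapping that final ROLLBACK; for COMMIT;.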

ROLLBACK is used to revert the changes if there is something wrong with the dump file. By changing it to COMMIT, you basically told it to finish what it started and keep the data even though there were errors. Comparing those lines above, the duplicates are definitely wrong; the data are in the wrong order (the values don’t line up with the columns of the earlier entries). If there are other entries following that pattern, that could explain the corruption error you are seeing. Having this bad data may mess up your bandwidth statistics, but it shouldn’t affect how PMS works.

Oh okay. How would I go about resolving this? Sort everything in statistics_bandwidth by the ID?