Plex Database keeps Corrupting

I have 2 Plex servers running in Docker containers on unRAID (one is from the LinuxServer repo and the other is from the unRAID/Limetech repo), one as my main and the other as an emergency backup.

Recently the main one's DB failed. The Docker container was running fine and up to date (I even tried Force Update). I followed the "recover a corrupted DB" steps, which didn't work; well, the check went from corrupted to fine, but I still had no media.
I then rolled back to a couple of the older DB backups in the appdata folder, which restored the database.
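
For anyone wondering what I mean by rolling back: Plex keeps dated copies of the library DB next to the live one, so the restore was roughly the following (paths are from my LinuxServer container, and the container name and backup date are just examples, so adjust for your setup):

# Stop the container first so nothing is writing to the DB
docker stop plex
cd "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
# Move the corrupted files out of the way rather than deleting them
mv com.plexapp.plugins.library.db com.plexapp.plugins.library.db.bad
rm -f com.plexapp.plugins.library.db-wal com.plexapp.plugins.library.db-shm
# Copy whichever dated backup looks healthy back into place
cp com.plexapp.plugins.library.db-2019-05-26 com.plexapp.plugins.library.db
docker start plex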

However, it has corrupted again, and now the server won't even appear as being online.

Anyone else experiencing this issue?

Or even better, any ideas on how to stop it from happening?

I can provide logs if required.


Wow, I was browsing old threads looking to find some insight for this exact problem - can’t believe this was posted just half an hour ago (there also seems to be another post with the same experience). I also run unRAID (as a VM), and Plex has been working great until recently. Same exact issue - database corruption. Here’s what I know so far:

  • The database is being corrupted, but the corruption has manifested itself in various ways. Sometimes I get a "database disk image is malformed" error; sometimes it's just constant "Held transaction for too long" messages until, eventually, "database is locked".
  • Half the time, StatisticsManager.cpp has something to do with it. I looked in the main SQLite DB and the statistics_media table had grown by several million NULL records. I traced it to a Roku I had left on; I guess it was running some query and glitched. I deleted the bad records with SQL, but that didn't shrink the DB file, and I was scared to run SQLite PRAGMA commands on it, so I just restored from a previous DB backup (there's a rough sketch of that cleanup after this list).
  • Right now I'm getting the "Held transaction for too long" message, so Plex is still running fine, and the crazy thing is that the main DB file hasn't been written to in two hours (no media scans or Plex activity, so that makes sense), whereas the db-wal and db-shm files are constantly being written to. What are these? Perhaps they're causing the constant lockups.
  • I thought all of this was a disk I/O bottleneck, so I moved the app folder to an SSD (unRAID appdata), but the corruption still occurs during more intensive media scans or when I'm fixing a lot of metadata.
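
For anyone braver than me, here is roughly what that cleanup looks like if you run it against a copy of the DB rather than the live file (using the sqlite3 command-line tool, with Plex stopped; the column in the DELETE is only an example, so check your own rows first):

# Work on a copy, never the live database, and only with Plex stopped
cp com.plexapp.plugins.library.db /tmp/library-copy.db

# Quick health check
sqlite3 /tmp/library-copy.db "PRAGMA integrity_check;"

# The kind of cleanup I did; the WHERE clause is a placeholder for
# whatever columns are NULL in your own statistics_media rows
sqlite3 /tmp/library-copy.db "DELETE FROM statistics_media WHERE account_id IS NULL;"

# Deleting rows alone doesn't shrink the file; VACUUM rebuilds it
sqlite3 /tmp/library-copy.db "VACUUM;"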

I think it’s due to a recent update to the software. This never happened before, and I also run another Plex instance on Synology Docker, stuck on an old version (1.10) and it’s humming along fine.

I’m backing up regularly and manually out of paranoia, and have had to restore twice. Current DB file (main one) is at 105MB.

@Mongo75 on the following thread says downgrading to a previous version seems to resolve the problem.

This past Sunday, I found this link: documentation/unraid.md at master · binhex/documentation · GitHub.

See question #7 in particular.

I made the recommended changes to my Plex, still under unRaid 6.6.7, and let it run over night. Yesterday morning, it was still running and looking good so I decided to update unRaid to 6.7.0. That was at about 10 in the morning. I’m still running with no database issues, 30 hours later.

This is my Step by Step:

  1. Go to Docker and stop Plex.
  2. Go to your appdata folder.
  3. Rename (or delete) your Plex folder.
  4. Either reboot now, or update unRaid to 6.7.0 and reboot.
  5. After the reboot or update, install and configure Plex. After setting it up to your liking, but before you click on "APPLY", click the "Show more settings…" dropdown near the bottom of the screen. This is where you set your Plex folder location.
  6. You will see the default setting, /mnt/user/appdata/(container name). Change this to /mnt/cache/appdata/(container name) if you use a cache drive, or to /mnt/disk(number)/appdata/(container name) if you don't use cache. I don't use cache, so in my case I changed /config from /mnt/user/appdata/PlexMediaServer to /mnt/disk1/appdata/PlexMediaServer (see the sketch after this list for what that mapping boils down to).
  7. Now click on “APPLY” to install Plex.
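
For anyone who finds that template field confusing, the change just means a different host path gets mapped to the container's /config. Under the hood it amounts to something like this (the image name and media mapping are only examples; unRAID builds the real command from the template):

# Roughly what the template produces after the change (example values only)
docker run -d --name=PlexMediaServer \
  -v /mnt/disk1/appdata/PlexMediaServer:/config \
  -v /mnt/user/Media:/media \
  plexinc/pms-docker
# The only difference from the default template is /config pointing at
# /mnt/disk1 (or /mnt/cache) instead of the fuse-mounted /mnt/user path.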

Once you're satisfied everything is running smoothly, update unRaid from 6.6.7 to 6.7.0 if you didn't do it in step 4, and it should keep working without issue.

Following the above Step by Step, I've been up over 30 hours with no errors, longer than it has ever run since initially updating unRaid to 6.7.0 back on May 12th. I hope this helps 🙂

Sadly, my Plex just died again.
May 30, 2019 17:09:57.041 [0x14b38a9f4700] ERROR - SQLITE3:(nil), 11, database corruption at line 65066 of [bf8c1b2b7a]

I’m going back to unRaid 6.6.7.

I’ve been following this here, and on the unRaid forum. Lots of people having problems with their Plex database being corrupted after updating unRaid from 6.6.7 to 6.7.0. Long story, but I’ve been trying to find a solution since May 12th.

In my research, I came across a piece that said having your appdata spread across the array can cause this. The recommendation was to have appdata either on a cache drive only, or on a single dedicated drive in the array.

I rolled unRaid back to 6.6.7, removed the Plex folder from appdata, reinstalled Plex from scratch and ran it for two weeks just to make sure it was stable.

Before I did the rollback, I installed a cache drive and configured my appdata share to be cache only. I moved the appdata folder from each of my 14 drives to the cache drive. With everything running smoothly, I updated unRaid from 6.6.7 to 6.7.0 on June 13th.

That was one week ago today, and I'm still running error free. It seems that having your appdata folder set to cache only, or to a single drive rather than spread across the array, solved the problem.
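
In case it helps anyone, the move itself was nothing fancy. With the Docker service stopped, it was roughly this (disk numbers and share names obviously depend on your setup):

# See which array disks actually hold a copy of appdata
ls -d /mnt/disk*/appdata 2>/dev/null
# Copy each one onto the cache drive, preserving permissions and hard links
rsync -aHv /mnt/disk1/appdata/ /mnt/cache/appdata/
# ...repeat for every disk listed above, verify, then delete the originals
# and set the appdata share to "Use cache disk: Only" in the share settings.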

I have this error as well running unRAID 6.7.x. How do I ensure my data is backed up if I'm writing to cache or a specific disk? Usually when I do that, unRAID says those folders are unprotected by parity.

Jun 28, 2019 01:40:01.828 [0x14fb5d5ea700] ERROR - SQLITE3:(nil), 11, database corruption at line 79051 of [bf8c1b2b7a]
Jun 28, 2019 01:40:01.828 [0x14fb5d5ea700] ERROR - SQLITE3:(nil), 11, statement aborts at 24: [select count(*) from (select distinct(metadata_items.id) from metadata_items join metadata_items as parents on parents.id=metadata_items.parent_id join metadata_items as grandparent
Jun 28, 2019 01:40:01.829 [0x14fb5d5ea700] ERROR - Soci Exception handled: sqlite3_statement_backend::loadOne: database disk image is malformed
Jun 28, 2019 01:40:01.829 [0x14fb9e17b700] DEBUG - Completed: [192.168.1.167:57836] 500 GET /tv.plex.providers.epg.cloud:2/sections/2/all?type=4&beginsAt%3E=1561762800&beginsAt%3C=1561777200&mediaAnalysisVersion=1&sort=beginsAt&excludeElements=Actor%2CCollection%2CCountry%2CDirector%2CGenre%2CLabel%2CMood%2CPart%2CProducer%2CRole%2CSimilar%2CWriter%2CPhoto&excludeFields=file%2Csummary%2Ctagline (21 live) GZIP Page 0-6 100ms 405 bytes (pipelined: 1)
Jun 28, 2019 01:40:01.870 [0x14fb5e7f3700] DEBUG - Setting container serialization range to [0, 6] (total=4963)
Jun 28, 2019 01:40:01.871 [0x14fb9e17b700] DEBUG - Completed: [192.168.1.167:57835] 200 GET /tv.plex.providers.epg.cloud:2/sections/3/all?type=4&beginsAt%3E=now&sort=mediaHeight:desc,mediaAnalysisVersion:desc,beginsAt&excludeElements=Actor%2CCollection%2CCountry%2CDirector%2CGenre%2CLabel%2CMood%2CPart%2CProducer%2CRole%2CSimilar%2CWriter%2CPhoto&excludeFields=file%2Csummary%2Ctagline (21 live) GZIP Page 0-6 127ms 2200 bytes (pipelined: 3)
Jun 28, 2019 01:40:01.882 [0x14fb5cde6700] ERROR - SQLITE3:(nil), 11, database corruption at line 79051 of [bf8c1b2b7a]
Jun 28, 2019 01:40:01.882 [0x14fb5cde6700] ERROR - SQLITE3:(nil), 11, statement aborts at 32: [select distinct metadata_items.id from metadata_items left join metadata_items as parents on parents.id=metadata_items.parent_id left join metadata_items as grandparents on grandpar
Jun 28, 2019 01:40:01.882 [0x14fb5cde6700] ERROR - Soci Exception handled: sqlite3_statement_backend::loadRS: database disk image is malformed
Jun 28, 2019 01:40:01.882 [0x14fb9e17b700] DEBUG - Completed: [192.168.1.167:57838] 500 GET /tv.plex.providers.epg.cloud:2/watchnow/all?excludeElements=Actor%2CCollection%2CCountry%2CDirector%2CGenre%2CLabel%2CMood%2CPart%2CProducer%2CRole%2CSimilar%2CWriter%2CPhoto&excludeFields=file%2Csummary%2Ctagline (21 live) GZIP Page 0–1 132ms 405 bytes (pipelined: 1)
Jun 28, 2019 01:40:01.884 [0x14fb0a9f4700] ERROR - SQLITE3:(nil), 11, database corruption at line 79051 of [bf8c1b2b7a]
Jun 28, 2019 01:40:01.884 [0x14fb0a9f4700] ERROR - SQLITE3:(nil), 11, statement aborts at 32: [select distinct metadata_items.id from metadata_items left join metadata_items as parents on parents.id=metadata_items.parent_id left join metadata_items as grandparents on grandpar
Jun 28, 2019 01:40:01.884 [0x14fb0a9f4700] ERROR - Soci Exception handled: sqlite3_statement_backend::loadRS: database disk image is malformed
Jun 28, 2019 01:40:01.884 [0x14fb9e17b700] DEBUG - Completed: [192.168.1.167:57839] 500 GET /tv.plex.providers.epg.cloud:2/watchnow/all?includeCollections=0 (21 live) GZIP 130ms 405 bytes (pipelined: 1)

Install the Community Applications plugin (https://forums.unraid.net/topic/38582-plug-in-community-applications/). That will give you the Apps tab. Click on the Apps tab and search for CA Backup. Install the CA Backup/Restore Appdata plugin. Go back to the Plugins tab and click on CA Backup/Restore Appdata to configure it.

At least, that's how I do it 🙂
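
If you'd rather not rely on a plugin, a plain copy of the appdata folder while the container is stopped does the same job. Something along these lines (container name and destination share are just examples):

# Stop Plex so the database isn't being written to mid-copy
docker stop plex
mkdir -p /mnt/user/backups
rsync -a /mnt/cache/appdata/plex/ "/mnt/user/backups/plex-appdata-$(date +%F)/"
docker start plex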

I have been experiencing this issue as well! It never occurred to me that it could be related to unRAID; for the most part it's always been rock solid for me. But my Plex database has been corrupted 3 or 4 times now since upgrading to 6.7.x. I just kept restoring and thinking it was something broken in the Plex code. But after this last time I figured I should look into why it keeps happening… and found this thread.

I am copying my appdata folder over to a share that won't get spread across disks now… really hoping this will be the solution. Fingers crossed.

I too am currently moving the appdata share to the cache, in order to use /mnt/cache instead of /mnt/user for paths. Did using a share not spread across disks help, @TimBeck2?

@Manchineel So far it seems to have fixed my problems; however, I've still noticed some oddities.

I have not had to restore from a backup since moving the appdata to a share on a single drive, but I still occasionally get "database corrupted" messages from Plex while it runs maintenance overnight. When I go check afterwards, everything is totally fine and I don't have to do anything; I would never notice had Plex not sent those alerts.

My best guess is that overnight unRAID runs some kind of file system maintenance/combing that temporarily makes Plex think the database has been corrupted; once that maintenance is complete, everything is hunky-dory.

To answer your question: yes, moving the appdata to a share on a single drive seems to have fixed my problems.

Good god, I was going completely insane because I kept reinstalling the same Plex that had been working perfectly fine for almost a year, and now it was acting non-deterministically. It would simply become unreachable from the mobile, desktop and TV apps, and the hosted web app said the server couldn't be found (absurd, considering it was essentially saying it couldn't see itself!). I think the event that triggered Plex starting to get its DB corrupted was me finally adding a parity disk. Since I only have one drive in my Unraid array, I suppose that until you add parity it gets mounted differently. I've now replaced the appdata paths with /mnt/cache-based ones and kept the multimedia and download directories as shares on /mnt/user (that's why I bought Unraid, after all). I'll see what happens.
Thanks for replying!

@TimBeck2 just wanted to let you know that indexing my libraries was much faster and Plex feels overall more responsive so far. I’ll keep monitoring for corruptions. Thanks a lot for the suggestion.
