Plex docker containers continuously corrupt databases beyond repair

Server Version#: 1.18.1.1973

I’ve been having many issues with Plex containers. I’m using Unraid, and I’ve tried three different Docker containers (the official one, the linuxserver one, and the binhex one), all of which have had this issue.

For all of them, I would download the container and set it up, passing through the location of my movies/TV shows. I would then add my movies library, and sometimes Plex would manage to gather all of my movies before encountering issues. If it worked, I would then add my TV shows library. Inevitably, at some point in this process it would hit a database corruption error. I would use the steps from here, and this would help for a bit. Later on, as it continued adding media, the database would corrupt again. At that point, following the steps seems to work, and the final integrity check at the end reports that the database is ok. However, on attempting to start the Docker container again it fails, and the logs say that the database is corrupted.
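For reference, the repair steps linked in threads like this generally boil down to SQLite’s standard check-dump-rebuild pattern. Here is a minimal sketch using throwaway /tmp paths so it runs as-is; on a real install the database is com.plexapp.plugins.library.db under the container’s /config tree, and the server must be stopped first:

```shell
# Hypothetical demo path; in the container the real database lives under
# /config/Library/Application Support/Plex Media Server/Plug-in Support/Databases/
DB="/tmp/demo-library.db"

# create a small demo database so the commands below run as-is
sqlite3 "$DB" "CREATE TABLE IF NOT EXISTS media(id INTEGER PRIMARY KEY, title TEXT);"

# 1. integrity check -- prints "ok" on a healthy database
sqlite3 "$DB" "PRAGMA integrity_check;"

# 2. dump the contents to SQL and rebuild a fresh database from the dump
sqlite3 "$DB" ".dump" > /tmp/demo-dump.sql
rm -f /tmp/demo-rebuilt.db
sqlite3 /tmp/demo-rebuilt.db < /tmp/demo-dump.sql

# 3. verify the rebuilt copy before swapping it in for the original
sqlite3 /tmp/demo-rebuilt.db "PRAGMA integrity_check;"
```

The dump-and-rebuild step matters because the rebuilt file is written fresh, so page-level damage in the old file does not carry over.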

I think this may be partially due to how much media Plex has to scan (2,500 movies, 300 TV shows), but even breaking it up doesn’t solve the problem. I’ve tried optimizing my database and clearing bundles during the scanning process, but that has not seemed to have any effect. Furthermore, before trying to set this up on Unraid I was running Plex directly on Ubuntu, and that worked fine. The media content hasn’t changed since then, and I never had any database corruption issues before.

Is there something that I am doing wrong? I’m happy to provide logs, however I am unsure how useful they would be.

Look here:

Q7

What version of unraid are you running?
Were any new items added recently, perhaps with some strange characters in their names?

I run unraid 6.7.2 and the latest Linux server docker with no issues. I’ve been on this setup for 2 or 3 years now.

Hey dane22,

Thanks for that link! I’ve tried that fix and I’ll circle back to report whether or not it works. It usually takes about 24 hours for everything to get back into the database, so it may be a little bit.

Thanks again for the help!


Hey nokdim,

I’m on 6.7.2, and tried the latest linuxserver docker as well. The only strange character I have is an upside-down exclamation mark, but that was fine in my previous Plex installs. No clue why this is happening now, but I’m trying to put my config folder on a hard drive directly instead of on the FUSE mount as dane22 recommended, and this seems promising so far! Not sure if you have that for your setup.


Mine is on FUSE under /mnt/user/Nokdim/Plex…

Hey dane22,

Unfortunately, I tried setting my config location on a disk as it said for that docker container, and while it worked better and I was able to add in all of my TV shows and movies, as soon as I needed to restart the container the database was corrupted. I tried the steps to fix it, and doing the integrity check shows it being okay, but in the logs the container fails to start as the database is corrupt. No idea what to do now.

Hey nokdim,

I tried setting it up off FUSE, and that just failed. I’m unsure why my linuxserver docker is failing while yours works fine. We’re on the same Unraid version, and none of my settings seem that unusual. In my most recent attempt, I installed the docker using the Community Applications plugin, with the version set to docker, my paths set up for my movies and TV shows shares, no transcoding folder, and no NVIDIA devices. The config path is /mnt/user/appdata/plex.

Any idea what I should try differently? Or are there any logs I can get for you to help understand what is breaking?

Do you have a cache drive?

I do, so everything gets written to cache and the mover task moves it to the array after hours.

For years I didn’t have a cache drive but looking back I might have started to use docker in the last couple years after I added a cache drive.

Writing to a drive directly like you just tested would accomplish the same thing as what I do with my cache drive, so I am not sure that would fix your issue.

Do you happen to have any Plex log files that coincide with the time the DB gets corrupted? That might shed some light on what is causing the issue.

Also how much space did you allocate for the docker image under unraid settings?

Do you run any other docker containers like letsencrypt or sab, sonarr?

I do not have a cache drive, but like you said I’m not sure of the impact that would have in my case (as I’m not adding any new media).

I’m not sure I have any useful log files; in the past when I’ve checked the logs, all I see is that it gets stuck failing to start once the database is corrupted. I can check again, but from my current findings it seems to only have problems once I restart the docker. If I don’t restart it, it generally doesn’t run into issues, but as soon as it restarts the database begins corrupting. It may be getting corrupted before that, but it’s rare that this affects me while I’m testing it.

I allocated 20 GB for my docker image, and in my Docker settings it says that I have used 8 GB of that. I’m not sure if I should allocate more space, and I’m a bit confused as to how to do so (I’m guessing I need to stop the array for that).

I run lots of other docker containers, but barely any of them are setup yet (as I’m trying to get Plex working first). I have bazarr, deluge, bitwarden, grafana, handbrake, influxdb, lidarr, minecraftserver, nextcloud, ombi, openvpn, plex, sonarr, tautulli, and telegraf. Of those, the only ones setup are deluge, handbrake, minecraftserver, ombi, tautulli. Happy to delete any and all of them if you think that this might help solve my problem, as I’ve barely set up any of them.

EDIT: Just looked through the logs and found a lot of this error: “Sqlite3: Sleeping for 200ms to retry busy DB.” No idea if that is the culprit or if it is unrelated. This was all while I was asleep, but it looks like it occurred while Plex was scanning my TV shows, either right before or right after.
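That “busy DB” message means SQLite hit a lock on the database and retried. To gauge how often it happens, you can count occurrences in the server logs; a minimal sketch using a throwaway sample log (on a real install the logs live under the container’s /config tree, in Plex Media Server/Logs/):

```shell
# Hypothetical sample log so the grep below runs as-is
LOG="/tmp/plex-sample.log"
printf '%s\n' \
  "Jan 01 03:12:44 Sqlite3: Sleeping for 200ms to retry busy DB." \
  "Jan 01 03:12:45 Scanning the TV Shows section" > "$LOG"

# count how often the server had to retry against a locked database
grep -c "retry busy DB" "$LOG"
```

A handful of retries during a heavy scan is normal; a long run of them right before a corruption error is worth correlating with what the scanner was doing at the time.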

I’m on Unraid 6.7.2 and I have never had any problems with the Plex docker. My dockers are running on cache (/mnt/cache) built from two M.2 devices as a BTRFS pool.

But I’ve been following the SQLite corruption threads since the beginning, and what I read between the lines is: it never happens if the docker is running on cache with its share (appdata/system) set to Cache=Only or Cache=Prefer (if there’s enough room on the cache).

I think the error pops up if a SQLite database (Plex uses SQLite as its storage engine) is running on a disk share (/mnt/disk1) or an array share (/mnt/user).

The threads over at Unraid and Reddit are huge. Recent SQLite releases seem to work in a special way and that seems to collide with FUSE. I can’t explain that.

From the beginning I thought that disk shares would be safe, but the data corruption thread for the release candidates of the next Unraid version indicates that disk shares show that problem too:

Edit: I’m running the Linuxserver Plex docker installed via Community Applications (the APPS tab in Unraid’s GUI). My Plex SQLite database is over 900 MB. Every night I back up the database file to the array, and in addition I take a database dump and copy it to the array.
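For anyone wanting to replicate that nightly backup, here is a minimal sketch using stand-in /tmp paths (the real source would be com.plexapp.plugins.library.db under the container’s /config tree, and the destination an array share such as /mnt/user/backups). Keeping a logical dump alongside the raw copy is the useful part: a dump can be replayed into a brand-new file even when restoring the raw copy would carry damage along:

```shell
# Stand-in paths so the script runs as-is
SRC="/tmp/demo-library2.db"
DEST="/tmp/demo-backups"
mkdir -p "$DEST"

# create a small demo database in place of the real Plex one
sqlite3 "$SRC" "CREATE TABLE IF NOT EXISTS media(id INTEGER PRIMARY KEY, title TEXT);"

# raw file copy (fast to restore) plus a logical SQL dump (rebuildable from scratch)
cp "$SRC" "$DEST/library-$(date +%F).db"
sqlite3 "$SRC" ".dump" > "$DEST/library-$(date +%F).sql"
```

If the database is live while the job runs, sqlite3’s `.backup` command is a safer choice than a plain `cp`, since it takes a consistent snapshot.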

Thanks for the news! Great to know that my problem isn’t just me and is instead likely an issue with Unraid. From looking at the subreddit and doing a bit more research, it looks like this issue is solved in 6.8-rc5. Looking at my Unraid settings, I’m not seeing how I can update to it. I’m still on the Unraid trial (and my trial recently expired), so I’m not sure if that is the issue or if I have to change my branch from stable to next. I could also set one of my unused drives to be a cache drive, but that would require stopping and starting the array, which I can’t do after my trial expired.

Sorry for the possibly silly question and thanks so much for diagnosing the issue!

And 4 people following this:
https://us18.campaign-archive.com/?u=4ce73a4dbebfb261481909068&id=2169be9839&utm_source=share&utm_medium=ios_app&utm_name=iossmf

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.