1635766272 Jun 23 12:54 com.plexapp.plugins.library.blobs.db
12812288 Jun 23 12:56 com.plexapp.plugins.library.blobs.db-shm
1674104576 Jun 23 12:56 com.plexapp.plugins.library.blobs.db-wal
505413632 Jun 23 13:50 com.plexapp.plugins.library.db
539992064 Feb 11 22:56 com.plexapp.plugins.library.db.original
131072 Jun 23 13:50 com.plexapp.plugins.library.db-shm
495421568 Jun 23 13:50 com.plexapp.plugins.library.db-wal
Is the WAL file bigger than the DB? (That's why I asked for the ls -la of the whole dir.)
I asked because there is work being done on the DB, which explains the long transaction times.
It looks like the WAL file is larger than the DB for blobs.db, but not for library.db.
A clean way to pull that all back together (a sketch of the full sequence follows below):
- Don't use systemd to stop Plex this one time. Instead, run kill -15 $(pidof "Plex Media Server").
- Now watch for the processes to end normally.
- When it comes down in a controlled manner, all the DB files should be pulled back together (the WAL and SHM files will be gone).
- Restart plexmediaserver.
- Now you can run "Optimize Database".
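For reference, a minimal sketch of that shutdown-and-check sequence on a stock Linux install (the database path and service name below are assumptions; adjust for your platform):

# stop PMS without systemd, then wait for the processes to exit on their own
kill -15 $(pidof "Plex Media Server")
while pidof "Plex Media Server" > /dev/null; do sleep 2; done
# with PMS fully down, the -wal and -shm files should have been folded back in
ls -la "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/"
# bring it back up, then run "Optimize Database" from the web UI
sudo systemctl start plexmediaserver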
1635851264 Jun 23 14:08 com.plexapp.plugins.library.blobs.db
32768 Jun 23 14:11 com.plexapp.plugins.library.blobs.db-shm
0 Jun 23 14:11 com.plexapp.plugins.library.blobs.db-wal
505217024 Jun 23 14:11 com.plexapp.plugins.library.db
32768 Jun 23 14:11 com.plexapp.plugins.library.db-shm
33021576 Jun 23 14:11 com.plexapp.plugins.library.db-wal
crap… not good.
Shut down PMS.
While it is shut down, see if they fold back in.
If not, the next step is to use sqlite3 to manually vacuum.
Familiar with it?
I'm actually not. Have a link?
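In lieu of a link, here is a rough sketch of what a manual vacuum with the bundled Plex SQLite binary might look like, run while PMS is stopped. The binary and database paths are assumptions from a default Linux install, and this assumes the Plex SQLite shell accepts the same arguments as the stock sqlite3 shell; back everything up first.

cd "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
# back up both databases before touching them
cp com.plexapp.plugins.library.db com.plexapp.plugins.library.db.bak
cp com.plexapp.plugins.library.blobs.db com.plexapp.plugins.library.blobs.db.bak
# rebuild each database file in place
"/usr/lib/plexmediaserver/Plex SQLite" com.plexapp.plugins.library.db "VACUUM;"
"/usr/lib/plexmediaserver/Plex SQLite" com.plexapp.plugins.library.blobs.db "VACUUM;"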
Edit: I am vacuuming the DBs after I've backed them up, and I'm using the Plex sqlite3. It cleaned up those files, but they are now back, and I notice this in the Plex server log:
INFO - [Database optimization/com.plexapp.plugins.library.db] SQLITE3:0x80000001, 17, statement aborts at 57: [select * from metadata_items limit 1] database schema has changed
That statement is normal.
If there were any pending migrations to be completed, those would also be performed; they would generate that statement.
I was preparing the example for you but found a problem with Plex SQLite on NAS platforms.
It went through the loudness scans and didn't reproduce the kernel panic. I found this in the logs:
ERROR - Resources:CPUUtilization: Unexpected result when reading /proc/58027/stat
That file is no longer on the system, so I can't tell you what it reports.
I’ll report back in this ticket if I can reproduce the issue. And at least you have a fix for something out of this thread.
Thanks for your help.
Thank you.
I've found some weirdness in the SQLite tool (if you don't invoke it right, it gets wonky). I've submitted that.
OK, got another kernel panic. In the Plex log there are only a few entries:
WARN - [Transcode] Got a request to stop a transcode session without a valid session GUID.
WARN - [Transcode] Denying access to transcode of key /library/metadata/243778 due to terminated session
Do you want the kernel stack trace, too?
kernel: Plex Transcoder: Corrupted page table at address 7ff171fdc5fd
kernel: PGD 800560098067 P4D 800560098067 PUD 800493934067 PMD 8000369f0067 PTE 8000800051e8e867
kernel: Bad pagetable: 000d [#1] PREEMPT SMP NOPTI
kernel: CPU: 10 PID: 59121 Comm: Plex Transcoder Tainted: G T 5.18.6-gentoo-x86_64 #1
kernel: RIP: 0033:0x7ff17542d14b
kernel: Code: 4c 89 fa e8 b7 30 03 00 4c 89 ef e8 75 e4 ff ff 49 89 dd eb 25 31 c9 41 8a 45 fd 24 1f c0 e1 05 08 c1 41 88 4d fd eb 12 31 c0 <41> 8a 4d fd 80 e1 1f c0 e0 05 08 c8 41 88 45 fd 4c 89 e8 48 83 c4
kernel: RSP: 002b:00007ffc2e58d690 EFLAGS: 00010202
kernel: RAX: 0000000000000f05 RBX: 0000555556ae41d0 RCX: 00007ff17204affc
kernel: RDX: fffffffffffff05b RSI: 000000000004a000 RDI: 00007ff171fdc000
kernel: RBP: 00000000000005f0 R08: 0000000000000000 R09: 0000000000014820
kernel: R10: 0000000000000001 R11: 0000000000000246 R12: 000000000006f000
kernel: R13: 00007ff171fdc600 R14: 000000000006da57 R15: 00007ff17214578f
kernel: FS: 00007ff175472808 GS: 0000000000000000
This time it didn’t hang the box
Let me get someone from Engineering to stop by.
I'm still holding to the position that the kernel is the problem.
NO USERSPACE PROCESS can corrupt a valid kernel.
From engineering, just here to second what Chuck said. Bugs in user-space apps don’t affect the kernel, unless the kernel has a bug.
Thanks, I'll see what I can come up with when it is working. This last time it didn't hang the box, though. I have no explanation, but there have been lots of kernel bumps lately.
Hint: Kernel bump = bug fixes
It needs to be ‘dry behind the ears’ for a while.
Indeed, some are silently patched.
One thing the Plex Transcoder process does that interacts with the kernel differently than other processes is access a GPU for hardware transcoding. You may have an issue in the driver for the GPU, or in the GPU itself.
This is a Zen 1; I think Zen 2 has hardware transcoding. So I think it's all CPU doing the lifting.
This is a NUMA box
As far as I've found so far, it looks like this might be a filesystem corruption issue with XFS on my / filesystem. I've run xfs_repair and it's come back clean. I've booted back into 5.18.6 with kcrash enabled, so we'll see.
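For anyone following along, that check was roughly along these lines (the device name is a placeholder, and the filesystem has to be unmounted, so this is typically done from rescue media):

# dry run: report problems without modifying anything
xfs_repair -n /dev/sdX1
# actual repair, only if the dry run reports damage
xfs_repair /dev/sdX1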
As far as the -WAL and -SHM files go, I re-vacuumed the DBs and the files were removed. I had to roll back to a backup of one DB from early today to save my bacon. I've since launched Plex with the -wal/-shm files gone, and pruned the other files I created previously. Since startup, I looked at the directory contents and it showed both wal/shm files on all three DBs again. I manually ran optimize DB from Plex Web and the files are still there for all three DBs.
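A note on that: the -wal/-shm files existing while the server is running is expected, since the databases are kept in WAL journal mode; they should only fold back in when the last connection closes cleanly. A quick way to confirm, with PMS stopped and under the same path and shell assumptions as the vacuum sketch above:

# report the journal mode; expected output: wal
"/usr/lib/plexmediaserver/Plex SQLite" com.plexapp.plugins.library.db "PRAGMA journal_mode;"
# merge the WAL back into the main DB file and truncate it
"/usr/lib/plexmediaserver/Plex SQLite" com.plexapp.plugins.library.db "PRAGMA wal_checkpoint(TRUNCATE);"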
I’ll keep you posted with any further updates I’ve found.