Server Version#: 1.15.3.876
Player Version#: 3.83.2 (problem doesn’t seem related to players)
Since updating to version 1.15.3.876 I've noticed frequent bursts of server activity while nobody is watching (the computer's fan spins up), along with the following in the logs.
Apr 11, 2019 09:17:50.178 [0x7fdb3deb8700] INFO - Linux version: 16.04.6 LTS (Xenial Xerus), language: en-HK
Apr 11, 2019 09:17:50.178 [0x7fdb3deb8700] INFO - Processor Intel(R) Core(TM) i7-4770R CPU @ 3.20GHz
Apr 11, 2019 09:17:50.178 [0x7fdb3deb8700] INFO - /usr/lib/plexmediaserver/Plex Media Server
[...]
Apr 11, 2019 13:30:42.450 [0x7fdb2d5df700] WARN - Held transaction for too long (../Statistics/StatisticsManager.cpp:253): 2.300000 seconds
Apr 11, 2019 13:31:14.658 [0x7fdb3ceb6700] WARN - Held transaction for too long (../Statistics/StatisticsManager.cpp:253): 2.190000 seconds
Apr 11, 2019 13:31:46.911 [0x7fdad37fe700] WARN - Held transaction for too long (../Statistics/StatisticsManager.cpp:253): 2.250000 seconds
Apr 11, 2019 13:32:19.220 [0x7fdada7fc700] WARN - Held transaction for too long (../Statistics/StatisticsManager.cpp:253): 2.290000 seconds
I tried repairing the DB, but that doesn't seem to be the problem.
I also don't know which logs I could post to help with looking into this.
Would downgrading be possible with Plex or is this a no-go?
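For reference, this is roughly how I checked the database while attempting the repair. The path is the default for a .deb install of PMS on Linux (adjust for Docker/NAS setups), and the server should be stopped first (e.g. `sudo systemctl stop plexmediaserver`) so nothing writes to the file while you poke at it:

```shell
# Assumed default library DB path for a standard Linux package install.
DB="/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db"

# Prints "ok" when the file is structurally sound; anything else
# means the DB has corruption that a repair/rebuild should address.
sqlite3 "$DB" 'PRAGMA integrity_check;'
```

Mine came back without obvious corruption, which is why I don't think the DB file itself is the problem.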
Do you by any chance have a music library? I moved my MP3s out of the music library and that may have stopped the problem from occurring. It could be unrelated, though.
I will post logs when I get time. I downgraded to 1.4.x after 5 days on 1.5.x because I noticed huge spikes in CPU usage, like a 5-minute average of 98% CPU. I'm running a P2000 with both decoding and encoding working. When I decided to pull the plug on 1.5.x, I watched Plex Media Server.exe using 50% CPU with a single stream going: direct-streaming the video, transcoding DTS to AC3 5.1, and transcoding SRT to WebVTT (?). For a CPU with a 10k+ PassMark score, that seems a bit much.
@SVKTV,
You are referring to Windows, but this is a Linux thread. Linux GPU hardware transcoding works entirely differently from Windows. Please ask in a Windows thread.
@ChuckPa
Attached is my PMS log file for Ubuntu 18.04. Within a few hours I have 5000+ of these "held transaction for too long" entries in the log, and my CPU is spiking about every 10-20 seconds (fan speeding up). This is happening while nobody is playing anything and there is no activity on PMS.
@ChuckPa I can spin up a test instance, no problem. But know that I did a complete bare-bones reinstall on that machine earlier in an attempt to fix the issue: I installed Ubuntu 18.04 from scratch, installed Plex from scratch, then moved my settings and media folders over. The issue is still present.
So for the test instance without the DVR, can I use my settings (since they contain DVR) or should I simply use a plain new installation of Plex and throw my media at new test libraries?
Again, sorry for the late reply and thanks for helping me solve this. It might be useful for other Plex users too.
Btw, how do you know it’s DVR/EPG related? I fail to see that in the logs.
I’m also getting this… my server has been pretty much unusable for the past week as I’ve been trying to fumble with this…
Jul 08, 2019 17:29:39.917 [0x153f6479a700] WARN - Held transaction for too long (../Statistics/StatisticsMedia.cpp:29): 0.160000 seconds
Jul 08, 2019 17:29:40.242 [0x153f6479a700] WARN - Held transaction for too long (../Statistics/StatisticsMedia.cpp:29): 0.160000 seconds
Jul 08, 2019 17:29:40.265 [0x153f64f9e700] DEBUG - HTTP 200 response from GET https://plex.tv/media/providers?X-Plex-Token=xxxxxxxxxxxxxxxxxxxx
Jul 08, 2019 17:29:40.588 [0x153f6479a700] WARN - Held transaction for too long (../Statistics/StatisticsMedia.cpp:29): 0.180000 seconds
Jul 08, 2019 17:29:41.020 [0x153f6479a700] WARN - Held transaction for too long (../Statistics/StatisticsMedia.cpp:29): 0.220000 seconds
Jul 08, 2019 17:29:41.349 [0x153f6479a700] WARN - Held transaction for too long (../Statistics/StatisticsMedia.cpp:29): 0.160000 seconds
Jul 08, 2019 17:29:41.408 [0x153f3ffff700] WARN - Waited one whole second for a busy database.
Jul 08, 2019 17:29:41.634 [0x153f6479a700] WARN - Held transaction for too long (../Statistics/StatisticsMedia.cpp:29): 0.130000 seconds
Jul 08, 2019 17:29:41.924 [0x153f6479a700] WARN - Held transaction for too long (../Statistics/StatisticsMedia.cpp:29): 0.150000 seconds
Jul 08, 2019 17:29:42.253 [0x153f6479a700] WARN - Held transaction for too long (../Statistics/StatisticsMedia.cpp:29): 0.190000 seconds
Jul 08, 2019 17:29:42.666 [0x153f6479a700] WARN - Held transaction for too long (../Statistics/StatisticsMedia.cpp:29): 0.210000 seconds
Jul 08, 2019 17:29:43.000 [0x153f6479a700] WARN - Held transaction for too long (../Statistics/StatisticsMedia.cpp:29): 0.150000 seconds
Jul 08, 2019 17:29:43.315 [0x153f6479a700] WARN - Held transaction for too long (../Statistics/StatisticsMedia.cpp:29): 0.160000 seconds
Jul 08, 2019 17:29:43.626 [0x153f6479a700] WARN - Held transaction for too long (../Statistics/StatisticsMedia.cpp:29): 0.160000 seconds
Jul 08, 2019 17:29:43.636 [0x153f3ffff700] WARN - Waited one whole second for a busy database.
Jul 08, 2019 17:29:43.940 [0x153f6479a700] WARN - Held transaction for too long (../Statistics/StatisticsMedia.cpp:29): 0.160000 seconds
Jul 08, 2019 17:29:44.267 [0x153f6479a700] WARN - Held transaction for too long (../Statistics/StatisticsMedia.cpp:29): 0.170000 seconds
Jul 08, 2019 17:29:44.425 [0x153f663d9700] DEBUG - Auth: Refreshing tokens inside the token-based authentication filter.
Jul 08, 2019 17:29:44.426 [0x153f663d9700] DEBUG - HTTP requesting GET https://plex.tv/servers/9b8306f6fe5b098c031cd47bbbeae7fe2a5f35d0/access_tokens.xml?auth_token=xxxxxxxxxxxxxxxxxxxx&includeProfiles=1&includeProviders=1&unknownToken=xxxxxxxxxxxxxxxxxxxxd08-4ddb-8d5a-eeeddc43557f
Jul 08, 2019 17:29:44.606 [0x153f6479a700] WARN - Held transaction for too long (../Statistics/StatisticsMedia.cpp:29): 0.190000 seconds
Jul 08, 2019 17:29:44.955 [0x153f6479a700] WARN - Held transaction for too long (../Statistics/StatisticsMedia.cpp:29): 0.160000 seconds
Jul 08, 2019 17:29:45.265 [0x153f6479a700] WARN - Held transaction for too long (../Statistics/StatisticsMedia.cpp:29): 0.140000 seconds
Jul 08, 2019 17:29:45.584 [0x153f6479a700] WARN - Held transaction for too long (../Statistics/StatisticsMedia.cpp:29): 0.160000 seconds
Jul 08, 2019 17:29:45.864 [0x153f3ffff700] WARN - Waited one whole second for a busy database.
Jul 08, 2019 17:29:45.908 [0x153f6479a700] WARN - Held transaction for too long (../Statistics/StatisticsMedia.cpp:29): 0.170000 seconds
I've got debug logging turned on and can provide log dumps and a DB dump if it will get this resolved. My server is usable for roughly 10-15 minutes after a restart, and then the log is basically just full of this "Held transaction…" mess. I did try wiping out my DB, which meant the pain of re-adding my shares with people, and, even worse, it broke my third-party logging because the item IDs had changed, of course. So I just restored my DB; the fresh one had only added about 20% of my library back after 2 hours of scanning, but it was working "fine" for that period. I have an exceptionally large database, but I'd really been hoping there wasn't a limit to this thing…
The last version without these messages for me is 1.14.1.5488. I've tried every version since, but ended up downgrading because of other, bigger issues (CPU usage from UPnP, CPU usage from SRT-to-ASS subtitle transcoding).
Yes, there are a number of those scattered around in there:
Jul 09, 2019 08:17:10.464 [0x153f0a3f1700] WARN - SLOW QUERY: It took 330.000000 ms to retrieve 50 items.
Jul 09, 2019 08:17:10.499 [0x153f087e3700] WARN - SLOW QUERY: It took 210.000000 ms to retrieve 1 items.
Jul 09, 2019 08:17:10.692 [0x153f087e3700] WARN - SLOW QUERY: It took 680.000000 ms to retrieve 0 items.
Jul 09, 2019 08:17:10.858 [0x153f087e3700] WARN - SLOW QUERY: It took 630.000000 ms to retrieve 0 items.
Edit: Sorry, forgot to mention, this is on Version 1.16.2.1297, server is dual Xeon 2.40GHz, 48GB RAM, running on plexinc/pms-docker:plexpass.
OK gang, how about we come back through this from the top?
“SLOW QUERY” is the first indicator and where attention should be focused first.
Diving down to the bottom first isn’t productive.
"SLOW QUERY" is triggered when the time it takes PMS to go to the DB and back exceeds the allowed threshold. The usual causes are:
a. the database not being optimized (check Scheduled Tasks, AND make sure the server is running at that time)
b. too much media indexed for a single thread of the CPU to get in and back out within the time limit
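A quick way to gauge how often this is firing, and which code paths are holding transactions, is to count the warnings straight from the log. The path below is the default log location for a Linux package install (Docker/NAS installs map it elsewhere):

```shell
# Assumed default log location; adjust for your install type.
LOG="/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Logs/Plex Media Server.log"

grep -c 'SLOW QUERY' "$LOG"                    # total slow-query warnings
grep -c 'Held transaction for too long' "$LOG" # total held-transaction warnings

# Rank which source locations are holding transactions the most:
grep -o 'Held transaction for too long (\.\./[^)]*)' "$LOG" | sort | uniq -c | sort -rn
```

A count dominated by one source file (e.g. the Statistics code paths seen in this thread) points at one subsystem rather than general DB sluggishness.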
Please tell me the following information:
a. The specific CPU chip model ("Xeon" is like saying "Ford")
b. A count of the total number of items indexed
To obtain the number of items indexed:
find /dir1 /dir2 /dir3 -type f -print | wc -l
Replace /dir1 /dir2 /dir3 with the actual top-level directories; add as many top-level directories as are required.
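For example, with hypothetical mount points (substitute your own library roots), plus a second pass that excludes sidecar subtitle files so they don't inflate the count:

```shell
# Hypothetical top-level directories; substitute your own.
find /mnt/media/Movies /mnt/media/TV -type f | wc -l

# Same count, excluding sidecar subtitle files (.srt/.sub/.idx):
find /mnt/media/Movies /mnt/media/TV -type f \
  ! -iname '*.srt' ! -iname '*.sub' ! -iname '*.idx' | wc -l
```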
Which CPU are you running with this? SQLite3 is going to need a fast CPU to get through that many records. Frankly stated, 727 MB is huge. Insanely huge, to be completely candid. This is borderline commercial size.
Also, when were the database optimization and Clean Bundles tasks last done?
How many of those titles have multiple languages?
Each language found in a file adds a DB record.
Each subtitle found in a file adds a DB record.
How many extras are added to each title?
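If you want to see those counts directly, something like the following can be run against the library DB with the server stopped. Treat this as a read-only sketch: the path is the default Linux one, and the table/column names (`metadata_items`, `media_streams`, `stream_type_id`) are assumed from the commonly documented Plex schema:

```shell
# Assumed default DB path; table/column names assumed from Plex's schema.
DB="/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db"

sqlite3 "$DB" 'SELECT COUNT(*) FROM metadata_items;'  # titles, episodes, tracks, ...
sqlite3 "$DB" 'SELECT COUNT(*) FROM media_streams;'   # every video/audio/subtitle stream

# Breakdown by stream type (1=video, 2=audio, 3=subtitle):
sqlite3 "$DB" 'SELECT stream_type_id, COUNT(*) FROM media_streams GROUP BY stream_type_id;'
```

A library with many multi-language files will show a stream count several times larger than the item count, which is exactly the record growth described above.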
There is also a fundamental question here: Is PMS being used for the purposes it was designed?
Raw number of items indexed: 201589
Number of items excluding srt, sub and idx files: 142482 (I apparently still have Subzero grabbing .FR and .NL subs from when I first added it years ago)
Optimization and cleaning are done weekly (everything is checked in Scheduled Tasks) and the server is on 24/7, running on Unraid 6.7.2 with dual Xeon E5620s; load is currently at 40%. What SQLite query would be the easiest way to count all the records you care about? It's just strange that it only started hard-"crashing" recently, around June 28. I'd been having some occasional stability issues before that, perhaps since mid-June: the server would be sluggish and sometimes fail to respond, so I'd give it a restart and it would be good for a couple of days.
It's my belief that PMS is being used for the purposes it was designed for. I'm an archivist and a collector: I hear about a show and will attempt to gather it for research and review, and I collect eclectic and obscure things from various parts of the globe. DVR changed the game for me; I can and do record pretty much everything I can. It even lets me catch up on local news when I'm away from home, and I can archive CBC broadcasts that I never would have found online.
Should these all come back within the expected range, the next logical step is to simply recompact the entire DB manually. The "Optimize Database" function does a decent job, but it's not as good as exporting and recreating the DB, properly packed with no internal SQLite3 links between record groups.
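A minimal sketch of that export-and-recreate pass, assuming the default Linux DB path and `sqlite3` on the PATH. Stop PMS first (e.g. `sudo systemctl stop plexmediaserver`), and keep the `.bak` file until you've verified the rebuilt DB:

```shell
# Assumed default DB path; stop PMS before touching the file.
DB="/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db"

sqlite3 "$DB" .dump > /tmp/library-dump.sql   # export the whole DB as SQL text
mv "$DB" "$DB.bak"                            # keep the original as a backup
sqlite3 "$DB" < /tmp/library-dump.sql         # reload into a freshly packed file
```

Reloading from a dump writes every table sequentially, so the new file has none of the fragmentation the old one accumulated; it is the manual equivalent of what "Optimize Database" approximates in place.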