I am still experiencing issues, and I think I found a lead we can pull on. When I launch the server, it still takes about 10 minutes to actually be available on the network. I checked the firewall and confirmed it’s disabled (for the time being), but I’m not sure what else to check. What I did note was that when I connect to http://localhost:32400/web, it shows my server as being “Nearby”. Is it supposed to show like that?
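For reference, a quick sanity check for whether anything is actually listening on Plex’s default port (32400) during that 10-minute window — just a sketch:

sudo ss -tlnp | grep 32400    # is anything bound to Plex's port yet?
curl -sI http://localhost:32400/web | head -n 1    # does the web app answer at all?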
I am also still getting the consistent “connection timed out” messages, and I found the following error in my logs:
Exception getting remote address: remote_endpoint: Transport endpoint is not connected
I also tried setting up the Snap version of PMS (1.13.10.352) on a new (separate) machine, and it doesn’t want to connect at all. When I tried to reach it over localhost, it gave me nothing. I loaded the snap, made it through the first-time wizard, and then it just sat there spinning/looking for the server (localhost). I then get a page saying “You must be lost” with a “Go Home” option (that doesn’t work).
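On the snap machine, a couple of things might be worth checking (a sketch — I’m assuming the snap is named plexmediaserver; adjust to whatever `snap list` actually shows):

snap services plexmediaserver    # is the snap's service actually running?
snap logs plexmediaserver -f     # follow its logs while the wizard sits there spinning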
All of this is on Ubuntu 18.04.1 LTS with all current updates.
Also of note: when I launch the server, it’s accessible for the first 2 minutes, then it drops. (I had it running for 1 minute 44 seconds and it worked; it then crashed within 10 seconds.)
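To catch the crash in the act, one option (a sketch, assuming the standard .deb install whose systemd unit is plexmediaserver) is to watch the service while it runs:

systemctl status plexmediaserver          # did the process exit, and with what status?
sudo journalctl -u plexmediaserver -f     # follow the service log up to the moment it drops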
Logs show:
Nov 08, 2018 20:47:15.208 ERROR Error parsing XML response for update.
Nov 08, 2018 20:47:15.136 ERROR Error parsing content.
Nov 08, 2018 20:47:15.135 ERROR Error issuing curl_easy_perform(handle): 28
Nov 08, 2018 20:47:14.055 WARN SLOW QUERY: It took 750.000000 ms to retrieve 50 items.
Nov 08, 2018 20:47:14.020 WARN SLOW QUERY: It took 750.000000 ms to retrieve 50 items.
So I went in and checked the settings: they were all checked and scheduled to run from midnight to 8 AM. I opened the window up to midnight-to-midnight, but I still see the warnings. The errors did, however, clear.
Is there something I can do to fix the slow queries? I figured at first that the database might be badly fragmented, so I ran a bunch of optimizations, but they didn’t help. The queries are still just as slow and happening often.
I also get a whole bunch of errors when transcoding…
Nov 11, 2018 18:49:53.553 WARN Held transaction for too long (../Library/DatabaseFixups.cpp:207): 2.030000 seconds
Nov 11, 2018 18:49:53.553 WARN Took too long (2.010000 seconds) to start a transaction on ../Library/MetadataItem.cpp:7088
Nov 11, 2018 18:49:45.905 ERROR [Transcoder] [mp4 @ 0x2425700] Application provided duration: -17 / timestamp: 16770744 is out of range for mov/mp4 format
Nov 11, 2018 18:49:45.697 ERROR [Transcoder] [mp4 @ 0x2425700] Application provided duration: -17 / timestamp: 16674648 is out of range for mov/mp4 format
Nov 11, 2018 18:49:45.258 ERROR [Transcoder] [mp4 @ 0x2425700] Application provided duration: -17 / timestamp: 16530504 is out of range for mov/mp4 format
Nov 11, 2018 18:49:44.971 ERROR [Transcoder] [mp4 @ 0x2425700] Application provided duration: -17 / timestamp: 16434408 is out of range for mov/mp4 format
Nov 11, 2018 18:49:44.900 WARN Took too long (0.140000 seconds) to start a transaction on ../Library/MetadataItem.cpp:925
The transcoder errors are, sorry to say, bad input data. The transcoder is telling you that the value it encountered in the stream is illegal for that field. It will do the best it can, but the cause is either damage or bad conversion artifacts.
As for the slow queries, there is one manual way to optimize the database without the GUI (see the sketch below). Should that fail, there is nothing more that can be done; it’s either the processor or something else at the system level.
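A minimal sketch, assuming the standard .deb install path and that the stock sqlite3 tool can open Plex’s library database (stop the server first and keep a backup):

sudo systemctl stop plexmediaserver
cd "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
cp com.plexapp.plugins.library.db com.plexapp.plugins.library.db.bak    # backup before touching it
sqlite3 com.plexapp.plugins.library.db "VACUUM; REINDEX;"               # defragment the file and rebuild the indexes
sudo systemctl start plexmediaserver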
I have my Sonarr/Radarr pointed to a set of scripts, and they choose a codec. If I know what a “legal” codec would be for that field, I can see about changing the script to encode in the correct codec.
However, ffmpeg is used by both Plex and the scripts, so I’m not sure why it’s producing an illegal value…
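For reference, a sketch of what the scripts would aim for if H.264 video and AAC audio in MP4 turn out to be the “legal” combination (filenames are placeholders):

ffmpeg -i input.mkv -c:v libx264 -crf 20 -c:a aac -b:a 192k -movflags +faststart output.mp4    # re-encode to H.264/AAC in a web-optimized MP4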
Regarding the slow queries, it may be the processors. I have 16 cores, but they sporadically max out. I’m using the following processor:
Intel® Xeon® CPU E5620 @ 2.40GHz
It’s virtualized using ESXi 6.5 on a Dell PowerEdge R710. RAM seems to be fine: 8 GB is allocated, but it’s only using 1.7-3 GB at any point in time.
If you use Radarr/Sonarr, might I suggest remuxing with mkvmerge rather than subjecting the file to yet another encoding pass (see the sketch below)? Every time it’s decoded and re-encoded, quality is lost.
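A minimal sketch of a lossless remux (filenames are placeholders; the streams are copied bit-for-bit into the new container):

mkvmerge -o output.mkv input.mp4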
Ok, this raises another question… I keep my entire library as MP4 because I was advised that Plex can have issues with MKV. Both of your options point towards changing to MKV. Should MKV be the preferred container format?
Edited: Just to add, the majority of my streams were external. I was advised that MKV cannot be Direct Played, so MP4 was the better option, as it lowers the load on the CPU.
If you want 4K, you will want fMP4 encoding. That can exist inside the MKV file… PMS remuxes it on the way out. MKV files are MUCH easier to work with than any MP4.
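If it helps, a sketch of producing a fragmented MP4 with ffmpeg, assuming the streams are already in codecs the MP4 container accepts (filenames are placeholders):

ffmpeg -i input.mkv -c copy -movflags +frag_keyframe+empty_moov output.mp4    # remux into fragmented MP4 without re-encoding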
Chromecast
Xbox One
Playstation 4
Roku TV
Apple TV
Android
iOS
The majority are new devices. I can turn off the auto-converter and leave the media files as they are, but I want to make sure I’m travelling down the right path (as I may not have been before…)
My CPU resources are listed above… Was there something further you were looking for?
Smaller is always good! I’m dealing with a fairly large library, so any kind of conversion endeavor would take weeks if not months to accomplish. I’d likely turn off the conversion tool and leave media in whatever format it comes in.
I did notice that once I turned on the scripts, I had fewer issues reported externally. Granted, with firmware updates and new technology, it may not matter as much anymore.
I’m not sure who mentioned MKV but there is no need to do it unless you want to have a file that is literally “copy & go” ready.
Are you familiar with the process of remuxing with ffmpeg?
ffmpeg -i filename.mp4 -c copy filename.mkv
Nothing gets changed except the file container.
The reason I stay with MKV is mkvmerge and mkvpropedit. No other tools in Linux (imho) exist and work so nicely as these two. Any file can be sliced, diced, and/or have streams added or removed in moments.
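For example, a couple of quick sketches (filenames and track numbers are placeholders):

mkvpropedit movie.mkv --edit track:a1 --set language=eng    # retag the first audio track's language, in place
mkvmerge -o stripped.mkv --no-subtitles movie.mkv           # remux a copy without the subtitle streams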
I have tested my resources on the Linux guest hosting Plex and there are no issues. Even when Plex crashes, the resources are not taxed at all (RAM @ 20%, CPU @ 40%, no sudden increases or decreases).
One thing I did note: even though I rebuilt the Plex database, it’s 4 days later and it’s corrupt again. After the rebuild, the issue went away for a couple of days; now that it’s back, it again appears to be the result of a corrupt database. Is there a way to identify what is corrupting the database? Could it be frequent scans or metadata retrieval causing fragmentation?
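A minimal sketch for confirming the corruption, assuming the standard .deb install path and that the stock sqlite3 tool can open the database (stop the server first):

sudo systemctl stop plexmediaserver
cd "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
sqlite3 com.plexapp.plugins.library.db "PRAGMA integrity_check;"    # prints "ok" if the file is structurally sound
sudo systemctl start plexmediaserver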
Also: I found out that my conversion tool uses ffmpeg, but the parameter for audio was set to ac3. As not all devices like AC3, that can cause issues. Tautulli shows me it’s transcoding everything because of the audio (most devices are looking for AAC audio). I changed the audio encoding to AAC, and I haven’t had any buffering issues with specific files since, only the general problem. I also see Direct Play becoming the norm for new video files, while old ones still transcode ac3 > aac. So one simple parameter was adding more of a mess to things!
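For the old files, a sketch of fixing just the audio without touching the video (filenames are placeholders):

ffmpeg -i input.mp4 -c:v copy -c:a aac -b:a 192k output.mp4    # copy the video stream, re-encode only the audio to AAC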
I have had power failures in the last few months and my backup power system was not in place at the time. So I’d say since January 2018, maybe 7 or 8 power failures.
If you go into Troubleshooting and “Download Database” along with “Download Logs” and then PM them to me, with a link to this thread for continuity, I will look and see what I can find. If I can’t, I’ll forward