BUG: 20MB XMLTV file size limit

Server Version#: 1.15.4.994

I’ve created my own XMLTV file from a number of sources to support the 400 or so channels I can get. Unfortunately, when I generate more than about 3 days of data (more than 18-19 MB), Plex refuses to import the guide. Checking the logs shows it believes the XML is truncated at a certain point in the file. In fact the XML is completely valid, and the point at which it supposedly truncates changes based on how many days of data I generate. From testing I’ve found that a file bigger than 20 MB will always fail. This is annoying, as I have over 7 days of guide data yet can only have 3 in Plex at any one time. Other XMLTV clients do not exhibit this behavior.

I can provide a working and non-working file if necessary.


Please do! Zip it up, then send to me per PM, please!

This bug continues in: 1.16.1.1291-158e5b199 on Ubuntu 18.04

The server hosting the XML shows: write: connection reset by peer (the same file works with any other client)

Plex console shows:
Error — Error issuing curl_easy_perform(handle): 23
HTTP error requesting GET http://xxx/epg.xml (0, No error) (Failed writing body (0 != 16384))

The error is nearly exactly at 1024 * 1024 * 20 bytes.
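
For reference, that works out to:

echo $((1024*1024*20))    # 20971520 bytes, i.e. 20 MiB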

I’ve had to cut my guide down to 1 day to allow it to work.

I’m seeing this as well on 1.16.2.1297 - any fix for this yet?

plexmediaserver-1.16.1.1291-158e5b199.x86_64 on CentOS 7.

I have a 57mb XMLTV file locally (downloaded & massaged via another process) that loads just fine.

That error smells to me like you’ve got a filesystem that’s out of space.

I’ve got plenty of space left on my drive, but I’m wondering if there’s a bug in their curl process when it’s pulling the file down live, versus having another process pull it down and massage it so it’s ‘local’ rather than remote.

curl is a pretty well known tool. How’s your /tmp filesystem? I’d expect temp files to be written to /tmp or possibly /var/tmp. Any chance that you’ve redirected one of those to a ramfs filesystem?

As the OP here I’ll say that I have no drive space issues, and I think the problem may be Windows-specific… My Plex server is Windows, and thus there is no ramfs or dedicated drive for temporary storage.

Not Windows-specific. I’m on Linux… docker image via unRaid.

The error as posted by getnosleep just reeks of an out-of-space issue somewhere. Some block function calls return the number of bytes successfully read/written, and it looks like it wanted to write 16k and did nothing.

[jkalchik_hushmail_com] it’s very surprising that you have a file bigger than 20 MB working on Linux (or Windows)

I still have the issue with version: 1.16.3.1402-22929c8a2 on Ubuntu 18.04

This definitely doesn’t seem like a filesystem out-of-space issue, unless Plex sets up a cgroup limit or something non-obvious – I am not using tmpfs for /tmp/, and my /tmp/ has 153 GB free.

curl -sk 'url' > /tmp/working_on_plex_server has no problems pulling the full file into /tmp/ on the Plex box itself.

There are no other logs in syslog et al. that indicate anything other than this being a bug in how Plex uses libcurl. The server side only shows issues delivering the file to the Plex server; curl, wget, Firefox, Chrome, etc. have no issues pulling the file.

None of the other tmpfs filesystems appear to be involved or anywhere close to full:
tmpfs 1.6G 4.2M 1.6G 1% /run
tmpfs 7.9G 8.0K 7.9G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
tmpfs 1.6G 16K 1.6G 1% /run/user/121
tmpfs 1.6G 0 1.6G 0% /run/user/1000

I looked up the error number Plex is outputting: CURLE_WRITE_ERROR (23). While I agree it kind of seems like an out-of-space error, the fact that it fails at exactly 20 MB, that the limit never changes, and that there are no filesystem limits anywhere near that size makes it seem like a buffer issue to me. Perhaps you’re gzipping the file you serve to Plex, or something similar? I’ll try messing with compression next.
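
For what it’s worth, exit code 23 just means that whatever is consuming the download stopped accepting bytes. You can reproduce the same “Failed writing body” failure from a shell by piping a big download into a reader that quits at the 20 MiB mark (the URL below is the redacted one from the log above, and ${PIPESTATUS[0]} is bash-specific):

curl -sS http://xxx/epg.xml | head -c $((1024*1024*20)) > /dev/null
echo "curl exit code: ${PIPESTATUS[0]}"   # 23 (CURLE_WRITE_ERROR) whenever the file is larger than 20 MiB

That’s consistent with something inside PMS refusing to accept more data past a cap, rather than the web server or the filesystem giving up.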

I’m going to admit to hallucinating… my .xmltv file isn’t exceeding the limit you’ve noted after all. Still, having said that, if you send me a copy of the file, I can try to load it here.

Thanks, I’ve sent you my tvguide XML guide data, which is over 20 MB.

To update everyone else: I’ve used strace to follow what happens when an over-20 MB EPG is loaded by Plex. The process loading the XML file simply closes the file handle after a full 16 KB read as soon as it crosses the 20 MB barrier, without making any other system calls.

[pid 6921] recvfrom(124, "\t<desc lang=\"en\">Witness how far"..., 16384, 0, NULL, NULL) = 16384
[pid 6921] close(124)

This is further evidence that Plex simply stops processing after 20 MB, that this is just a hard-coded limit, and that this is why the bug shows up on every platform.
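
If anyone else wants to capture the same trace, something along these lines should work; replace <pid> with whichever PMS process actually performs the fetch (it may be a child worker, hence -f):

sudo strace -f -e trace=network,close -s 64 -p <pid>
# -f follows forked children, -e limits output to network syscalls plus close(),
# and -s 64 prints longer string arguments like the recvfrom() buffer shown above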

There are days…

The file I received from you is 29,913,196 bytes in size. The file that my PMS loaded this morning is 92,392,131 bytes, 3x your file size (12+ days of data for 100 channels.) :frowning: And I don’t find any log files here that contain ‘Failed writing body’.

And worse… your file loaded successfully here. :frowning_face: And I’ve successfully reloaded mine back up.

PMS here is on an Intel Core i5-6400, CentOS 7, 16gb RAM, nVidia GT730 video adapter, 120gb SSD boot, 1tb rotating drive for cache & other local storage, and a Synology DS1817+ array w/ 11tb usable.

Wait a sec… by that strace output, recvfrom is returning 16384, which would match the len argument to it. That should be a nicely filled buffer. The error mentioned earlier referenced ‘0 != 16384’ which is something else. In the case of an error, recvfrom returns -1, yet another different value (also not uncommon for an error return.)

Something odd is happening, no argument there. I took a quick look through your EPG file, and found a few instances of the buffer string listed in your strace output. It’d be quite interesting to see what’s 16k forward from that leading tab. Wonder if there’s a non-printable character…no, if that were the case, I wouldn’t be able to load it either…

Yeah I’m lost – the strace shows that the recvfrom works and then PMS calls close() – the error doesn’t seem to match what you’d expect from strace output.

PMS here is on an Ubuntu 18.04 - AMD Ryzen 5 1600 Six-Core Processor, 16gb ram, RX580 video, 256gb boot, 3tb hard drive.

It seems like we need someone from Plex Inc to take a closer look and at a minimum add more debugging statements to figure out why they think they didn’t get a full read.

Any update from Plex on this issue?

I have the same thing occurring: if I use 2 days of data (an 18 MB file), Plex works beautifully, but if I try to use 3 days, Plex cannot load the guide and I receive the same HTTP GET error as in the logs posted above.

Running Windows 10 and 1.16.3.1402

It’s really annoying to have my system fetch guide data for 250 channels every 2 days… I’d rather run it once a week.

Hoping someone from Plex can confirm this as an issue and that they’re working on it.

I have finally loaded over 20MB on Ubuntu 18.04 with the latest beta that came out today: 1.16.4.1469. Hopefully it works for you too!

Spoke too soon. I had no guide data today, and it has failed repeatedly over the last few days with a 21,067,612-byte file (20.09 MB).

We need someone from Plex to either add more logging or actually do something to fix this.

I’ve been able to get a 28 MB file to load every once in a while; it’s not consistent. Took me a while to realize why my weekly guide data didn’t always load. It’s gone downhill in the last rev, where it will never load any XMLTV file unless it’s fewer than 50 channels and one day of data (8 MB) with all the extra EPG123 options enabled. So I was convinced it’s file length, not file size.

I’ve found more oddities here, as my under-20 MB updates have failed again for the last 3 days. Plex erroneously decides everything is cached; every time I change the data, it still just assumes it’s 100% cached:

EPG[xmltv]: Total time to load EPG was 4.3 (HTTP details cached 100.0%, CloudFlare grid cached: 0.0%, 0 HTTP errors)

A temporary workaround I’ve found is to delete the entire contents of this directory every time your guide updates:

"/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Cache/Http/tv.plex.providers.epg.xmltv/"

That forces Plex to actually download the data. This is a horrible workaround.
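
In case it saves anyone some typing, the brute-force version of that workaround (adjust the path for your install, and run it before each guide refresh):

# clears the cached HTTP responses so the next guide refresh re-downloads the XMLTV file
rm -rf "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Cache/Http/tv.plex.providers.epg.xmltv/"*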

Is there any way to get Plex to acknowledge and look at fixing problems? This forum seems like the only way and also seems completely ineffective.
