Plex Filling up Disk with TBs of Data until the Disk is Full

I’m running Ubuntu 16.04, and Plex Media Server is filling my drive with literally terabytes of unknown data, which clears as soon as plexmediaserver is restarted. I believe this is related to Plex’s DVR function, because some recordings error out with “There was a transcoder error” and others don’t stop recording when the show has finished. Nothing fixes the issue except restarting the service, and then it’s only a matter of time before Plex fills the drive again until there is zero free space.

Here you can see the disk is 100% full, but when I restart plexmediaserver (I did nothing except restart the service), it immediately frees up over a TB of disk space.

This is also what it looked like before and after restarting the plex media server:

And this is what makes me think it’s related to the DVR function in Plex. The time is 11:05pm: a show that ended at 9pm is still stuck in the recording state, another show that ended at 11pm has a “transcoder error” (the most useless of error descriptions), and another show has just started recording as expected.

I am experiencing this as well. It has happened twice. After the first occurrence, I deleted the directory and restarted plexmediaserver, and it happened again a week later.

Details:
Linux / Ubuntu PMS (exact version not recorded, but it was current at the time of both incidents, 3 weeks ago and 1 week ago).
Logs in /plex/livetv/.grab/[hexstring]/ShowName.log fill up with hundreds of gigabytes of the same line repeated over and over until the disk is full.
Tuner: WinTV DualHD-USB
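In case it helps anyone else hitting this, here is a crude watchdog sketch that restarts the service before the disk fills. The 10 GB limit, the /plex/livetv path, and the plexmediaserver service name are assumptions from my setup; adjust to taste.

```shell
#!/bin/sh
# Watchdog sketch: restart Plex if any .grab log grows past a limit.
# Assumes the library lives under /plex/livetv and the service is
# named plexmediaserver, as in this thread. Run it from cron.
LIMIT_KB=$((10 * 1024 * 1024))   # 10 GB, expressed in KB for du -k
for log in /plex/livetv/.grab/*/*.log; do
    [ -f "$log" ] || continue
    size_kb=$(du -k "$log" | cut -f1)
    if [ "$size_kb" -gt "$LIMIT_KB" ]; then
        echo "$(date): $log is ${size_kb} KB, restarting plexmediaserver"
        service plexmediaserver restart
        break
    fi
done
```

Note that restarting the service also clears the .grab/ directory, so grab any evidence you want before the watchdog fires.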

Unfortunately it looks like I deleted the log snippet that I thought I’d saved. If it happens again I will update. Is anyone else seeing this?

Also seeing this. Please see attached logs.

This is happening when I DVR a movie: it looks like the commercial skipping process (or some other post-processing) is failing to make progress at some point during processing, and keeps writing a line similar to [mpeg2video @ 0x491dbe0] qscale == 0 over and over, causing the /path/to/library/.grab/<hash>/<name>.log file to grow without bound.

The symptom in the UI is that the entry for the recording in the DVR shows as lit/in-progress, but 100% complete. TV recordings have been working fine for me; so far I’ve only seen this on movies.

Because Plex clears the .grab/ directories upon restart, it destroys the offending media and log files. I made sure to grab a .ts and .log snippet this time, though.

Attached are the usual logs in the .zip. The additional plex-dvr.log file is the first 15,000 lines of the <name>.log file, up to where it got stuck and started repeating. It was 35GB at the time I stopped it, with the rest consisting of repeats of the same line. I also have the .ts if desired.
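For anyone wanting to capture the same evidence, this is roughly what I did; the log name is a placeholder for whatever appears under .grab/ on your system.

```shell
# Sketch: keep the useful head of a runaway comskip log and confirm the
# tail is a single repeating line. "ShowName.log" is a placeholder name.
LOG=ShowName.log
if [ -f "$LOG" ]; then
    head -n 15000 "$LOG" > plex-dvr.log   # the part before the loop started
    # Count distinct lines in the tail; one line with a huge count
    # confirms the log is stuck repeating.
    tail -n 1000 "$LOG" | sort | uniq -c | sort -rn | head -n 3
fi
```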

Server is quad-core Intel i7-4770S, Linux 4.15.9, Docker image plexinc/pms-docker:plexpass, reported version 1.12.1.4885, on a 16TB Linux software RAID6 volume formatted XFS.

Tuner is an HDHomeRun Connect.

Heh, I just tried playing the .ts through Plex (Chrome on Linux) and it crashed the transcoder. The actual video stream was badly corrupted for the portion that did play, presumably due to my having poor reception for whatever channel that was. (Tuner is on a DTV antenna, so I get variable reception.)

So this might be comskip or the transcoder choking on a crappy video stream.
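A quick way to sanity-check that theory, assuming ffmpeg is installed on the server (the recording path is a placeholder):

```shell
# Sketch: let ffmpeg decode the recording to a null sink and collect the
# errors it reports; a flood of errors suggests the stream itself is corrupt.
TS=/path/to/recording.ts
if command -v ffmpeg >/dev/null && [ -f "$TS" ]; then
    ffmpeg -v error -i "$TS" -f null - 2> decode-errors.txt
    wc -l < decode-errors.txt   # rough count of decode errors for the file
fi
```

A handful of errors is normal for over-the-air captures; thousands would point at bad reception rather than a Plex bug.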

Thanks for reporting this. I am referring it to the development team.

Is there a .ts file available to reproduce the problem with?

@almdlm said:
Also seeing this. Please see attached logs.

This is happening when I DVR a movie: it looks like the commercial skipping process (or some other post-processing) is failing to make progress at some point during processing, and keeps writing a line similar to [mpeg2video @ 0x491dbe0] qscale == 0 over and over, causing the /path/to/library/.grab/<hash>/<name>.log file to grow without bound.

The symptom in the UI is that the entry for the recording in the DVR shows as lit/in-progress, but 100% complete. TV recordings have been working fine for me; so far I’ve only seen this on movies.

Because Plex clears the .grab/ directories upon restart, it destroys the offending media and log files. I made sure to grab a .ts and .log snippet this time, though.

Attached are the usual logs in the .zip. The additional plex-dvr.log file is the first 15,000 lines of the <name>.log file, up to where it got stuck and started repeating. It was 35GB at the time I stopped it, with the rest consisting of repeats of the same line. I also have the .ts if desired.

Server is quad-core Intel i7-4770S, Linux 4.15.9, Docker image plexinc/pms-docker:plexpass, reported version 1.12.1.4885, on a 16TB Linux software RAID6 volume formatted XFS.

Tuner is an HDHomeRun Connect.

We would need to refer the matter to Comskip. Would it be possible to provide a .ts file that results in this behaviour?

I’ve turned off commercial skipping, and I have not seen this issue since 3/27; previously I was seeing it nearly every day.

@thagale said:
I’ve turned off commercial skipping, and I have not seen this issue since 3/27; previously I was seeing it nearly every day.

Would you be prepared to re-enable it and let me have the file that causes it?

Sure, I can try, though the file is likely quite large. I’ll re-enable it and try to save the file. It will likely be Monday or Tuesday before I see another failure.

@thagale said:
Sure, I can try, though the file is likely quite large. I’ll re-enable it and try to save the file. It will likely be Monday or Tuesday before I see another failure.

Thanks - perhaps you can upload the .ts file to Dropbox or a similar service and send me a link once you know comskip has ended up in the logging loop.

@sa2000 The .ts for one of my incidents is here:
https://drive.google.com/file/d/1jHIWmZFq8f2s6HEq61VcINhksJBXpgug/view?usp=sharing

Please let me know once you’ve downloaded this so I can delete it and reclaim the space in my account.

@almdlm said:
@sa2000 The .ts for one of my incidents is here:
https://drive.google.com/file/d/1jHIWmZFq8f2s6HEq61VcINhksJBXpgug/view?usp=sharing

Please let me know once you’ve downloaded this so I can delete it and reclaim the space in my account.

Thank you - I will let you know once the development team has downloaded it.

@almdlm said:
@sa2000 The .ts for one of my incidents is here:
https://drive.google.com/file/d/1jHIWmZFq8f2s6HEq61VcINhksJBXpgug/view?usp=sharing

Please let me know once you’ve downloaded this so I can delete it and reclaim the space in my account.

We have downloaded the .ts file - you can now delete it from your Google Drive.

Thank you for your help

@sa2000 Great – thanks!

@almdlm said:
@sa2000 Great – thanks!

Hi - the devs have not managed to reproduce the issue with that file; the log it produces is only a few gigabytes.

Could you confirm that it does reproduce the problem on your end, and upload part of your log file so we can compare?

It may be that this .ts file is not one that reproduces the problem.

@sa2000 Not quite sure I understand what you’re asking, but I attached log files to my first comment on this thread. (Look above.) Those log files are for the .ts I shared.

@almdlm said:
@sa2000 Not quite sure I understand what you’re asking, but I attached log files to my first comment on this thread. (Look above.) Those log files are for the .ts I shared.

The problem we are trying to solve, and get a file for, is a bad .ts file that leads comskip to log ad infinitum until the disk fills up completely, i.e. continuous, non-stop logging.

The .ts file that you provided only wrote a 35 GB log file and did not go on forever. While that in itself is a problem, it is not the problem that was reported here.

It did go on forever. The only reason it didn’t fill my disk is because I caught it, and killed it.

If you’re asking me to let it fill my disk, that is not a thing that is going to happen. :slight_smile: And it wouldn’t provide any additional info anyway.

@almdlm said:
It did go on forever. The only reason it didn’t fill my disk is because I caught it, and killed it.

If you’re asking me to let it fill my disk, that is not a thing that is going to happen. :slight_smile: And it wouldn’t provide any additional info anyway.

Thanks - are you saying you got that file to write beyond 35 GB into the log file?
What platform is it on?