Not enough disk space to convert this item

Server Version#:4.132.2
Player Version#:7.23.7.9393-2951ee29d

I am running Plex Media Server on my Ubuntu 18.04 box, using my CPU (Ryzen 9 5900) for transcoding. My transcode location is set to a folder on an internal spinning-disk drive with 1.1 TB of available space, and my media is stored on a Synology NAS. This has been my setup for 7 years without issues.

I have been getting the “not enough space to convert” error intermittently. It used to be resolved by rebooting my server, but for the past few days that has not helped, and the list of affected files is growing. First it was just OTA recorded media on the Rokus (but not on my PC), then on my PC as well, then ripped media such as TV shows but not movies, then movies too. I found that if I shut down my server and start it again (versus restarting alone), I could restore all media types except OTA recordings on the Rokus. I cannot for the life of me believe it’s actually a space issue with so much available space, and none of the other fixes I’ve found for this issue have helped. I have rebooted my PC, the server, the NAS, and the Rokus; I’ve had everything off at once and brought everything back online; no change. I am not sure which logs you would need; I’ve looked through them and do not see any clues to the issue.

What is the full path of your transcode location?

~/documents/chris/dvr

Screenshot from 2024-08-08 19-41-22
Screenshot from 2024-08-08 19-43-48

It will bark about space when the permissions are wrong.

Also, pointing PMS at your home directory is ill advised.
Some distros have strict rules about /home and won’t let the plex user access your home directory (ACLs + permissions).

Create a /home/plex with you as the owner and 777 permissions.
Now try that.
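If it helps, here’s roughly what that looks like at the command line. The demo below runs against a scratch path under /tmp so it can be tried safely; on the real system the directory would be /home/plex and the mkdir/chown would likely need sudo (the plex user is the one the Ubuntu package creates).

```shell
# Demo against a scratch path; substitute /home/plex on a real system
# (and prefix mkdir/chown with sudo there).
dir="/tmp/plex-transcode-demo"
mkdir -p "$dir"           # create the directory
chmod 777 "$dir"          # world-writable so the plex user can write
stat -c '%a' "$dir"       # shows the octal mode: 777
```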

The way I have my system built is this:
- AMD AM4 system with Ryzen 9 5900X CPU, NVIDIA 980 GPU, and 32 GB RAM.
- NVMe as primary boot drive with EFI partition and Ubuntu 18.04 installed.
- Dual Seagate IronWolf Pro 2TB drives in RAID 1 as the local data store.
- I have redirected all local folders onto this drive so that only the OS, programs, and desktop are stored on the NVMe, and all other files are on the RAID, meaning ‘~/documents/chris/dvr’ is on the internal RAID where all files are located. Do I have this wrong?
- Synology NAS as the media data store.
- Plex Media Server installed on the main PC, accessing the NAS for data.
- HDHomeRun providing OTA data for Plex DVR.

I did check the permissions; they seem to be 777: create, modify, and delete for all users.

Screenshot from 2024-08-08 21-51-10

Taken by right-clicking on the folder and choosing “Open in Terminal”.

Stupid question time.

Where’s the transcoder temp?
Is there more than 2.5x the size of the input file available?
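A quick way to sanity-check that 2.5x rule from a shell; the input file and transcode directory below are placeholders for illustration (point them at the real media file and transcode location).

```shell
# Does the transcode directory have at least 2.5x the input file's
# size free? Paths are placeholders.
input=/tmp/demo-input.bin
transcode_dir=/tmp
dd if=/dev/zero of="$input" bs=1M count=10 status=none  # fake 10 MB "input"

input_kb=$(du -k "$input" | cut -f1)
avail_kb=$(df -k --output=avail "$transcode_dir" | tail -n 1)
need_kb=$(( input_kb * 5 / 2 ))   # 2.5x, in integer KB

if [ "$avail_kb" -ge "$need_kb" ]; then
    echo "OK: ${avail_kb} KB free, ${need_kb} KB needed"
else
    echo "NOT ENOUGH: ${avail_kb} KB free, ${need_kb} KB needed"
fi
```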

Screenshot from 2024-08-08 21-58-42

How do you mean “redirect”? A symbolic link?

I created a folder on the RAID, then

I had it at /home/documents/chris/dvr, by way of ‘~/documents/chris/dvr’

Should I change that to ‘home/documents/chris/dvr’ knowing that ‘documents’ is located on the mapped local drive?

Trying to route things around like that won’t work.

I have my computer’s normal /home/chuck
My NAS’s (name Glockner) RAID volume is named /vol on the NAS.
I NFS mount Glockner:/vol/media (where I put all my media mounted as /glock/media).

All the other shared folders from my NAS are still in /glock with all my media folders under /glock/media.
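For reference, an /etc/fstab entry matching that layout could look like the line below. The server name, export path, and mount point come from the description above; the options shown are illustrative, not a recommendation.

```
# NAS "Glockner" exports /vol/media; mounted locally at /glock/media
Glockner:/vol/media   /glock/media   nfs   defaults,nofail   0 0
```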

[chuck@lizum glock.1999]$ ls
backup/  cakewalk/    git/  math/   pfsense/  plexinc/  qa/        test-files/  vmhdd/
bin/     colleagues/  iso/  media/  plex/     primo/    software/  usenet/
[chuck@lizum glock.2000]$ 

PMS on this computer, finds all my movies on /glock/media/movies

I don’t put transcoder temp files over the network. There is a problem with file locking (the file server cannot lock the transcoder’s temp files with exclusive access and guarantee it).

Notice it’s not anywhere near my home directory.

Linux is very strict on these things.

Take a look at this and tell me if it makes any sense to you.

  1. The main “Linux Tips” I’ve written. A collection of How-To’s.

  2. I use NFS for file sharing. Here’s an example of how to set up NFS shared folders on the PMS host.

  3. If you’ve got a Windows file server, it would be done this way.

I’ve been thinking about this:

You should go to that mapped drive and find the top-level directory.
That top-level name is what you should mount on Linux.

Doing it that way, you have a clear 1-for-1 directory structure that doesn’t pass through anything it shouldn’t.

Re-reading that, I think I explained my setup incorrectly: it’s not a network-mapped drive, it’s a shortcut to an internal hard drive.

I have an NVMe boot drive (C:/), but the local folders are not on that drive; they are on another drive in the PC (D:/). I have put folders on the D: drive called ‘Documents’, ‘Music’, ‘Pictures’, etc., and then put shortcuts to those folders on the C: drive. This way, when I click on the Documents folder, I am calling up the folder on the D: drive. I’ve done this to limit the R/W wear on the NVMe.

In this screenshot, you can see the shortcut arrows on the folders; those are all just on the D: drive. The NAS, which you can also see, is mapped.
Screenshot from 2024-08-09 15-04-53

What may not have helped is that I was calling my D: drive ‘the RAID’, possibly causing confusion between that and the NAS. A graphical representation would look like this:

C:/ This is my NVME boot drive
D:/ This is a pair of 2TB drives in a mirrored software RAID, mounted as D:

All files are on the D: drive, only OS and programs (plus desktop) are on the C:

@chris332277_yahoo_com

Are you using Windows?

This is a Linux forum.

Yes, I am on Ubuntu 18.04, just the easiest way I know how to describe my drive setup is with Windows terminology.

Here’s a screenshot of my drives: the 512 GB being the NVMe, partition 6 being my primary production boot partition that I referred to as C:, and the two 2TB hard disks being the drives for the mirrored RAID mounted at /dev/md0 that I referred to as D:.

Thank you.

Using Windows terms will confuse the heck out of folks (myself included)

For future reference, do not use “MBR” partition tables. They are for Windows.
“GPT” is for Linux (“GPT” = GUID Partition Table). GPT supports more than 4 partitions without the “Logical Drives” nonsense.

can you show me the output from cat /proc/mdstat ?

What I see on your machine, which you should be able to confirm with df -h is:

  1. /dev/nvme0 – your boot / OS 512GB SSD
  2. /dev/sda & /dev/sdb - Your two ironwolf pro drives
    — in a RAID 1 (which cat /proc/mdstat will show us)
  3. Your resultant /dev/md/chris-16 (md0 raid)
  4. The BluRay drive.

Sound about right?

If so,

  1. PMS is installed on the normal default location (the OS drive)

  2. You’ve not moved PMS’s data directory away from its default by customizing it?

  3. The next thing to figure out is if & where you have enough space for your transcoder temp. (You might be running out of space.)

Showing you the disk layout on my machine:

[chuck@lizum ~.1998]$ cat /proc/mdstat
Personalities : [raid0] [raid10] [raid1] [raid6] [raid5] [raid4] [linear] [multipath] 
md0 : active raid1 sda2[0] sdd2[3] sdc2[2] sdb2[4]
      243539968 blocks super 1.2 [4/4] [UUUU]
      
md3 : active raid6 sdr[0] sdt[7] sds[8] sdm[2] sdn[5] sdk[3] sdj[4] sdl[12] sdq[11] sdo[1] sdp[6] sdi[10]
      117187532800 blocks super 1.2 level 6, 1024k chunk, algorithm 2 [12/12] [UUUUUUUUUUUU]
      
md1 : active raid10 sdh[3] sdg[0] sde[2] sdf[1]
      976508960 blocks super 1.2 16K chunks 2 near-copies [4/4] [UUUU]
      
md2 : active raid0 nvme1n1[1] nvme0n1[0]
      2000145056 blocks super 1.2 16k chunks
      
unused devices: <none>
[chuck@lizum ~.1999]$ 

md0 = RAID 1, (4 way mirror on 512 GB SSDs )
md1 = my ‘home’ partition where I do all my development work.
md2 = dual NVME SSDs for my VM storage (these are striped)
md3 = RAID 6 (12 x 12 TB array) for my media

This is how they are mapped into the filesystem

# /home (md1)
/dev/md1                    /home xfs  defaults,auto,nofail,noatime 0 2
#/dev/disk/by-uuid/bb93809e-9f83-4c04-95e9-f519eff64dc3  /home xfs  defaults,auto,nofail,noatime 0 2

# /mnt/vmssd (md2)
/dev/md2                   /vmssd xfs  defaults,auto,nofail,noatime 0 2
#/dev/disk/by-uuid/d9603f57-6824-4a6e-81e4-edf64f654634  /vmssd xfs defaults,auto,nofail,noatime 0 2

# /mnt/vol (md3)
/dev/md3                   /vol   xfs  defaults,auto,nofail,noatime 0 2
#/dev/disk/by-uuid/317bddcd-4a71-45c3-addc-a7bf47a6cffc  /vol  xfs defaults,auto,nofail,noatime 0 2

As always, thank you Chuck! I used MBR because back in 2016 I hadn’t yet made the switch and was still running Windows primarily; this started out as a Windows machine on an Athlon-based system where I installed a dual boot with Ubuntu 16.04, and I have since migrated. Seems my upgrade to 22.04 is going to be more intensive than I had hoped.

I doubly thank you for the info on the 4-partition difference between MBR and GPT; I have been frustrated by that in the past.

I will have to re-read your information again, but to answer your questions, I have this.


Does that help explain anything?

I do have Oracle VM VirtualBox Manager installed with a number of VMs in there; I’m led to believe those have something to do with all those /dev/loop lines?

Look at the line for /dev/nvme0n1p6 (the root ‘/’ filesystem)

It’s 100% use (full).

The machine can’t run because it’s full.

Also be very careful because the Synology volume is full too. (100%)

The challenge here is two-fold:

  1. The root partition is full. It’ll explode (crash permanently) soon.
  2. You mounted /dev/md0, the 1TB RAID, as a subdirectory of your home directory. This is bad practice and will get you in trouble. RAID volumes should always be mounted as high in the tree as possible.
    I can help you.

I first need to ask how comfortable you are at the Linux command line and then, in all sincerity, how competent you are with your Linux mastery.

I ask this because there will be a fair amount of keyboard work to get this moved around.

I suspect Plex is what’s filling your root partition (common issue) which is easily fixed with clever command line work.
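One way to confirm that (the path and depth below are just a starting point): have du rank the largest directories on the root filesystem.

```shell
# Rank the biggest directories on the root filesystem. -x keeps du
# on one filesystem so the RAID and NFS mounts are not counted;
# run as root to see system directories too.
du -xk --max-depth=2 / 2>/dev/null | sort -rn | head -n 20
```

On Ubuntu’s .deb install, Plex keeps its data (including the default transcoder temp) under /var/lib/plexmediaserver, so that tree is a likely suspect if /var dominates the listing.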

Lastly, all those /dev/loop lines are from snap packages (notice /snap in the directory names).