Huge size of PhotoTranscoder folder

drzoidberg33, while reducing the cleanup trigger to one week (or even making it user-selectable) would be great, the real issue is unrestrained growth of the folder. It seems to me that the real solution is to let users set a maximum size. You mention that disabling the cache would result in a worse experience, but I can confirm that when my main HD is full because of this folder, my users and I all have a terrible experience.

1 Like

Would it be safe to run an optimization app like JPEGmini on the contents? I looked at a bunch of the images, which are basically TV covers, and some were 9MB-11MB, which is really big for screen display. Maybe Plex could look into some open-source image optimization, along the lines of JPEGmini, to maintain quality while conserving space? Just a thought. My AppData folder is currently at 486GB, with the PhotoTranscoder folder taking up 117GB of that.
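
If anyone wants to experiment without JPEGmini, the open-source jpegoptim can do lossy recompression from the command line. A rough sketch, assuming a Linux server, the stock cache path, and a quality cap of 85 (all assumptions; adjust to taste, and stop Plex first since these are live cache files):

#!/bin/bash
# Sketch: recompress oversized JPEGs in the PhotoTranscoder cache in place.
# The path and the 85% quality cap are assumptions; jpegoptim skips non-JPEG files.
CACHE="/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Cache/PhotoTranscoder"

# Only touch files over 1MB; --preserve keeps the original timestamps.
find "$CACHE" -type f -name '*.jpg' -size +1M -exec jpegoptim --max=85 --preserve {} +

(If a file isn’t really a JPEG, jpegoptim will simply skip it.)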

I’m performing an experiment to see whether this works, using a network share to my Mac to do the compression. The first thing I noticed is that there are a LOT of duplicate thumbnails, which I first assumed were per-episode copies, but now I’m seeing a bunch of large duplicate movie posters too. A top-directory search for “.jpg” yielded 52,745 images, the largest being a 14.9MB TV show poster. 13,394 images are over 1MB in size, which I assume puts strain on the network when the interface refreshes the dashboard or scrolls through content.
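
For anyone who wants to run the same survey, with GNU find on a Linux box it would look roughly like this (the cache path is the stock Linux location; substitute your own):

# Count all .jpg files, count the ones over 1MB, and show the single largest.
CACHE="/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Cache/PhotoTranscoder"
find "$CACHE" -type f -name '*.jpg' | wc -l
find "$CACHE" -type f -name '*.jpg' -size +1M | wc -l
find "$CACHE" -type f -name '*.jpg' -printf '%s\t%p\n' | sort -rn | head -n 1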

Well, this is weird. The test failed because all of the “JPEG” images with the .jpg extension turn out to be PNG files when opened. Is there a reason the wrong file extension was used on all of these? And if I convert them all to actual JPEGs, does that blow everything up?
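
One way to check what these files really are is the file utility, which reads the magic bytes instead of trusting the extension. A quick tally (same assumed cache path as before):

# Summarize the actual formats hiding behind the .jpg extension.
CACHE="/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Cache/PhotoTranscoder"
find "$CACHE" -type f -name '*.jpg' -exec file --brief {} + | cut -d, -f1 | sort | uniq -c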

I strongly suspect that it would be fine, and I’m willing to let you try. :slight_smile:

ImageOptim alternatives for Windows and Linux

I wonder if it’s a Plex issue or a metadata provider/API issue: are these files that Plex is creating and storing as the wrong type, or are they being uploaded and misnamed?

I have a quick temp fix on Windows if you are running out of space on a small disk: move the cache to another, larger disk via a symbolic link.

My cache was filling up my OS drive! ~15GB here: C:\Users\user name\AppData\Local\Plex Media Server\Cache

I created a new directory on a much larger disk to store the cache (since it will just auto-populate if you delete it). I called mine D:\plex_cache

I then deleted the current cache on the OS drive: C:\Users\user name\AppData\Local\Plex Media Server\Cache

And then I made the link via an elevated command prompt by running:
mklink /D "C:\Users\user name\AppData\Local\Plex Media Server\Cache" "D:\plex_cache"

After browsing my library a bit, I looked in the new location, and all the cache files started regenerating there… on the larger, non-OS disk. Problem fixed for now.

You can also update the variables in the batch script below & run it:

@echo off

REM ****************************************************
set "default_cache_dir=C:\Users\user name\AppData\Local\Plex Media Server\Cache"
ECHO default_cache_dir: "%default_cache_dir%"
ECHO.

set "new_cache_dir=D:\plex_cache"
ECHO new_cache_dir: "%new_cache_dir%"
ECHO.

REM ****************************************************

ECHO creating new cache location
ECHO.
mkdir "%new_cache_dir%"

REM ****************************************************
ECHO removing default plex cache location
ECHO.
rmdir "%default_cache_dir%" /s /q

ECHO making link from default location to new cache location
ECHO.
mklink /D "%default_cache_dir%" "%new_cache_dir%"

exit

1 Like

OK Plex employees… this is getting insane. You NEED to give us a way to set a custom location for these caches. Small primary drives are not a new concept, and your caches take up WAY too much space. Maybe your 30-day cleanup is not enough to keep them at bay anymore. I deleted the PhotoTranscoder folder on November 13, 2020, and today my entire server went offline, cutting off my users, because the primary drive ran out of disk space. ~63GB in 3 months is an insane amount of caching.

3 Likes

A fix would be welcome. I’ve had to move the Library folder to another disk manually. It would be nice to have control over the maintenance of this folder; even better would be the ability to change its location via the web interface, much as we can with the Transcode folder.

This cache definitely could be smarter.

I did a quick spot check and noticed files similar to yours.
I have one movie poster in 20 separate files; they’re all the same dimensions, MD5 hash, and file size, at 3.7MB a pop. Another movie I imported just yesterday already had 15 copies of its poster.

Surely something could be written to keep track of the MD5 hash of these files (for example, an additional SQLite DB with just md5-to-file mappings) and then check whether a hash already exists before writing another file.

I suppose you’d be trading CPU time (on the MD5 checks) for disk space, and that’s the trade-off to weigh in pursuing this.
An alternative would be to skip the MD5 check at write time, let the writes happen unimpeded as usual, and have a backend scheduled task do all the hashing/mapping, consolidating identical files and updating metadata links as it goes.
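
As a rough illustration of that scheduled-task variant, here is a bash sketch that hashes every cache file and hard-links byte-identical copies to the first one seen. The path is an assumption, everything must live on one filesystem, and this is essentially what a tool like jdupes does more carefully:

#!/bin/bash
# Sketch: consolidate byte-identical cache files into hard links.
CACHE="/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Cache/PhotoTranscoder"

declare -A seen   # md5 -> first path seen with that content

while IFS= read -r -d '' f; do
  sum=$(md5sum "$f" | cut -d' ' -f1)
  if [[ -n "${seen[$sum]}" ]]; then
    # Content already stored once: replace this copy with a hard link to it.
    ln -f "${seen[$sum]}" "$f"
  else
    seen[$sum]="$f"
  fi
done < <(find "$CACHE" -type f -print0)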

1 Like

I ran the jdupes duplicate file finder in my PhotoTranscoder directory.

Some files were duplicated hundreds of times, more than tripling the size of the directory.

I used jdupes -L -R PhotoTranscoder/ to convert the duplicates into hard links, and recovered the extra space. (I’ll delete the folder if this breaks anything.)

I tested by browsing with a few Plex clients. Everything seems to work. Of course, many new duplicates were created. :slight_smile:


My guess is that the PhotoTranscoder cache filenames may be overly unique. Perhaps it’s generating filenames from File+Dimensions+Client+User when it should only be using File+Dimensions. (Or maybe they’re random!)

Is there any possibility it’s something like that, @drzoidberg33?

If that’s the case, it might also be a major speedup!

3 Likes

I actually thought of the hard-links solution after my post, and didn’t see any potential problems in the brief time I spent thinking about it.
On Windows there could be some issues: they’d have to check that the filesystem is NTFS and, if not, fall back to the previous logic or another solution. I guess it could also be possible to hit the 1024 hard-link limit per file there as well.

1 Like

A potential problem I can identify: if ANY of the files/links is changed, the contents of ALL of them change. Without knowing the expected cache semantics, that might be a good thing, or it might be unexpected.

I’m sorta convinced the problem is that “cache” files are created with identical content but different names. I imagine the intent of the cache is to avoid doing unnecessary work, yet there seems to be a ton of extra work going on.

(FWIW, jdupes accommodates the NTFS link limit.)

I’m running plexmediaserver on a CentOS 7 minimal installation on a 20GB VHD, so disk space is not a resource I have much of to spare.

Normally this is not a problem, because all my content is on an external NAS and I use a symbolic link to store the metadata (/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Metadata) on a RAM drive, with the intent of serving metadata as fast as possible.
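
For anyone curious, that setup is just a tmpfs mount plus a symlink; a minimal sketch, assuming /mnt/ramdisk as the mount point and an 8GB size (both assumptions), run as root with Plex stopped. Note that tmpfs contents vanish on reboot:

# Mount a RAM-backed filesystem and point Plex's Metadata directory at it.
PMS="/var/lib/plexmediaserver/Library/Application Support/Plex Media Server"
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=8g tmpfs /mnt/ramdisk
cp -a "$PMS/Metadata" /mnt/ramdisk/          # seed the RAM drive with the existing data
mv "$PMS/Metadata" "$PMS/Metadata.bak"       # keep the original as a fallback
ln -s /mnt/ramdisk/Metadata "$PMS/Metadata"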

Well, as we all know, at some point last fall, this all came to a screeching halt as my root drive ran out of space, and the culprit was indeed found to be the PhotoTranscoder cache folder.

I have plenty of content but only one user other than myself, so it’s not as if there is a heavy load on the server, yet the cache still filled my root drive and crashed my VM.

My solution was to write a bash script and have cron run it every minute.
It checks how much free space is available; if there is less than ~500MB free, it deletes the oldest directory in the cache, and it keeps deleting one directory per minute until the threshold is satisfied again.

Additionally, each minute it logs the free disk space and the sizes of the metadata and PhotoTranscoder cache, so I can see the history of how the cache grows. Since Nov 21, 2020, I have accumulated almost 200K lines (minutes) of data.

I started with 0MB cache, 1.3GB metadata, and 8.4GB free, and currently have 5.95GB cache, 7.08GB metadata, and 1.96GB free.

This includes one event where 3GB of PhotoTranscoder cache was deleted to preserve the system stability.

You should be able to adapt it for macOS pretty easily, although a real fix would be better.

To the developers I ask, what is this cache even for?

After writing this I’m now thinking of symlinking the PhotoTranscoder cache folder to /dev/null.

#!/bin/bash

# free space (in 1K blocks) on the root volume
rootFree=$(df --output=avail /dev/mapper/centos-root | tail -1)

epochSec=$(date +%s)
epochMin=$((epochSec / 60))
readable=$(date +'%Y%m%d %R')

cache="/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Cache/PhotoTranscoder/"
cacheSize=$(du -s "$cache" | cut -f1)

cacheP="/mnt/Plex/"
cacheSizeP=$(du -s "$cacheP" | cut -f1)

# one log line per run: minute epoch, timestamp, cache size, free space, metadata size
echo "$epochMin $readable ${cacheSize} ${rootFree} ${cacheSizeP}" >> ~/plexCacheBug.log


if [ "${rootFree}" -lt 512000 ]; then

    ##*** source: https://unix.stackexchange.com/questions/28939/how-to-delete-the-oldest-directory-in-a-given-directory

    dir="$cache"
    min_dirs=3

    # only act when at least $min_dirs subdirectories remain; -mindepth 1
    # keeps "$dir" itself out of both the count and the deletion candidates
    if [[ $(find "$dir" -mindepth 1 -maxdepth 1 -type d | wc -l) -ge $min_dirs ]]; then
        # oldest entry first; NUL-delimited so names with spaces survive
        IFS= read -r -d '' line < <(find "$dir" -mindepth 1 -maxdepth 1 -printf '%T@ %p\0' 2>/dev/null | sort -z -n)
        file="${line#* }"

        fileSize=$(du -sh "$file" | cut -f1)

        echo "$epochMin $readable Free space less than 512MB: ${rootFree} Deleting: ${fileSize}" >> ~/plexCacheBug.log
        rm -rf "$file"
    fi
fi
1 Like

After my previous post, I simplified my script to:

#!/bin/bash

# note: the glob must sit outside the quotes, or it matches nothing
rm -rf "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Cache/PhotoTranscoder/"*

Running it every minute, I have seen no ill effects.

Why this folder needs to exist at all is beyond me.

1 Like

This “PhotoTranscoder” folder is the only thing stopping me from sticking my entire library on a RAM disk. I’m actually happy I went with a cheap [open box, OEM] Intel S3700 SSD for the Plex library when I moved everything over to a rackmount server. There was less than 1 GB written/read when I bought it, likely from when the individual who sold it hooked it up to take CDI/CDM screenshots for me. 10 TB written, 2 TB read in two years. Seems peculiar for a database that is accessed far more often than it is modified. Not a whole ton of activity, but it’s definitely coming from somewhere.

“Library” folder is roughly 50 GB, and nearly half of that is “PhotoTranscoder”. Why?

Adding, or fixing, a scheduled task that “fixes” this doesn’t fix anything; it just frees the space up after the fact. What, and who, are all these 100% identical duplicates being transcoded for?

It took WizTree forever just to populate the library folder. Seriously. 46.52 seconds for a measly 50 GB folder. I can run it on an entire media library drive (16 TB, so 14.something TiB) in a fraction [0.13] of a second. What is that doing to the MFT?

What on God’s green earth would require this 9 MB poster to be stored 28 separate times? Why is it 9 MB in the first place? I’m pretty sure I fed it a much smaller JPG, if I even bothered to supply my own poster at all, and this is approaching what a 2000x3000 PNG would be in file size. If this is a file Plex is transcoding, i.e. it made it, what the hell is it doing? How is someone trying to load a 9 MB thumbnail over a mobile data plan going to have a good experience? How much of the random stutter I see in certain Plex clients can be attributed to whatever PhotoTranscoder happens to be doing?

For what it’s worth, the rest of the Library folder is seemingly full of duplicates as well. I only let jdupes touch the PhotoTranscoder folder to create hard links, but the Metadata folder, for example, has identical files that differ only in folder structure: Metadata\TV Shows\some hex character\some hex string.bundle\Contents\com.plexapp.agents.localmedia\themes\some identical hex string, no extension, duplicated a dozen or so times across different folder structures.

As to what jdupes did with \Cache\PhotoTranscoder, well…

236501 duplicate files (in 28822 sets), occupying 16584 MB

Tee-Object -file dumped stdout into a log for me too, although I probably should’ve remembered to enable it the first time when I only asked it to print a summary.

PS C:\jdupes-1.20.0-win64> .\jdupes.exe --link-hard --recurse --size "G:\Plex Media Server\Cache\PhotoTranscoder" | Tee-Object -file "C:\jdupes-1.20.0-win64\stupes.txt"
Scanning: 384709 files, 257 items (in 1 specified)

FastCopy now has hard-link support on Windows, so here’s the before/after of de-duping just the \Cache\PhotoTranscoder folder, in the grand scheme of syncing/backing up my “Plex Media Server” folder.

Before PhotoTranscoder de-duplication:

TotalSize  = 47,243 MiB
TotalFiles = 710,135/30,339 (382,424)
TotalTime  = 02:36

---- Listing Done ----

There are 30,339 hard-links/junctions/symlinks already present in \Metadata\. Cheers to whoever actually made that happen automatically, thank you. I don’t think you got them all, though. Hard to see what all of them are.

After PhotoTranscoder de-duplication:

TotalSize  = 31,427 MiB
TotalFiles = 473,634/266,840 (382,424)
TotalTime  = 03:04

---- Listing Done ----

That’s kind of ridiculous, especially when you consider you can just delete the entire PhotoTranscoder folder with no ill effects.

I’m seriously considering trying to be that cheeky on Windows. Unfortunately, neither mklink /D G:\Plex Media Server\Cache\PhotoTranscoder\ NUL: nor /J works… but I don’t think that’s the end of it. Using /J to create a junction that points at a really tiny partition, so that Plex can’t tell the PhotoTranscoder folder is effectively a black hole, might work. That’s probably the best I’ll manage, outside of some permissions tricks that will probably make Plex throw a hissy fit over blocked writes.

Why Windows?

Oh, I’ll just use this cheap OEM Radeon Pro WX for transcoding. It’s cheaper than a Quadro, and if it doesn’t work right, I’ll just grab a Quadro instead and finally try Proxmox out. Worst case scenario, I slightly overpaid for a half height Radeon RX 550.

Except now the Radeon Pro is worth what two Quadros were worth then…yikes.

Same here.

3 Likes

I run Plex on a Windows 10 PC with a 64GB SSD boot drive, plus 4 x 8TB SATA drives for media and data.

At least every 2-3 months, often more frequently, I find my boot drive full and have to clear 30GB+ from C:\Users\chris\AppData\Local\Plex Media Server\Cache\PhotoTranscoder

I’ve read through this thread and seen lots of what look like Linux commands, but is there a way to fix this on Windows, or to move the folder? It could go on a mechanical drive, but I assume it will still keep filling the available space eventually.

Couldn’t you just stop Plex on your Windows server, delete the contents of the PhotoTranscoder folder, and start Plex again? Doing that once a month should stop it going crazy :slight_smile:

Oh, I can delete the folder. Automating it would be preferable, though, and better still would be the ability for Plex itself to deal with it.

Seems a very common issue.

Plex won’t deal with it; they don’t see it as a problem.

I’ve automated it via a cronjob on my Linux system to delete the folder once a week. You’ll need to figure out how to do the equivalent on Windows; I don’t run anything on Windows, so I can’t help with that, I’m afraid.
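
For reference, the cron side of it is a single crontab line; a sketch assuming the stock Linux path and a Sunday 3am run (note the glob sits outside the quotes, as in the script earlier in the thread):

# m h dom mon dow  command
0 3 * * 0 rm -rf "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Cache/PhotoTranscoder/"*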

I just found this comment, which I think partially explains the issue, and it still seems accurate.

More reading:

PMS/Media/localhost/#/... bundle naming scheme? - #17 by Wolfie713