Moving posters and art to other hard drive

I have a Plex server on a VPS that has a small fast NVMe drive. The music itself is on another server that’s mounted via NFS. The metadata is taking up quite a bit of space on the VPS.

I’m wondering if there’s any way to move the posters/artwork (e.g. Library/Application Support/Plex Media Server/Metadata/Albums/x/xxxxxx.bundle/Contents/tv.plex.agents.music/posters/ and .../art/) to another drive, while keeping the rest of the metadata on the smaller, faster drive.

If I just move the artwork myself (e.g. through a script) and symlink it from the old location, will Plex handle that properly?

Server Version#: 1.21.4.4079

Yes, using a Symlink will work to redirect files to another location.
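A minimal sketch of that move-and-symlink approach. The paths below are hypothetical stand-ins, not the real Plex Metadata layout; substitute your actual directories, and stop the server before moving anything:

```shell
#!/bin/sh
# Demo of moving an artwork directory and symlinking it back.
# All paths are hypothetical stand-ins for the real .../Metadata/... dirs.
rm -rf /tmp/plexdemo                     # clean slate for the demo
SRC=/tmp/plexdemo/meta/posters           # original location (small fast drive)
DEST=/tmp/plexdemo/big/posters           # new location (large drive)

mkdir -p "$SRC" "$(dirname "$DEST")"     # fake posters dir + target parent
touch "$SRC/cover.jpg"                   # pretend this is Plex artwork

mv "$SRC" "$DEST"                        # move the artwork to the big drive
ln -s "$DEST" "$SRC"                     # leave a symlink at the old path

ls "$SRC"                                # file still reachable via old path
```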

Thanks for the reply. I assume Plex doesn’t have a built-in option for this at the moment (move just posters/artwork to a different location)?

Nope. It’s not a common thing to do so there isn’t much priority to add these types of advanced features.

The supported method is here:

This moves all the metadata, right? I’d like to move just the posters/artwork as they take up a lot of space, while keeping the smaller text files on the main SSD.

Correct, this does move everything.

Moving just the posters, or any sub-portion, isn’t supported because of all the inter-dependencies between metadata and agents.

We encountered a lot of problems when developing for DSM 7. We ran into inconsistencies with both symbolic links and hard links (same volume).

Some were so severe that realpath() didn’t resolve the link.

Thanks. That makes sense. I was worried about the performance dropping a lot if I move the entire thing onto a different server and access it via NFS, but maybe it’s not too bad. Will anything bad happen if the Docker metadata is mounted via NFS?

I’m running Plex in Docker so it’s pretty straightforward for me to move the entire thing (just have to modify the volumes in my docker-compose.yml)
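For anyone in a similar setup, the compose-side change is just repointing the config bind mount; the paths and image below are illustrative, not taken from the thread:

```yaml
# docker-compose.yml fragment -- hypothetical paths/image
services:
  plex:
    image: plexinc/pms-docker
    volumes:
      # was: - /nvme/plex/config:/config
      - /mnt/nfs/plex/config:/config    # whole Library now on the NFS mount
      - /mnt/nfs/music:/music:ro        # media, already served over NFS
```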

Thank you for that additional info.

Moving the entire “Library” to an NFS server is going to be vulnerable to database corruption, and will add a lot of latency.

I can show how to do it, but unless you’re running a 10 GbE LAN, I urge against it.

The problem with it is that network file locks are NOT mandatory. They are advisory only.

I’m on 10 GbE with jumbo frames (9000 MTU). Both servers are VPSes hosted with the same provider and AFAIK the two servers are in the same rack.

Even with NFSv4? I’m only running v4.

Then don’t do it.

Jumbo frames (9000) on 10 GbE? Respectfully? Not necessary & fraught with issues.

Using stock MTU (1500):

[chuck@lizum ~.249]$ iperf3 -c 192.168.0.21
Connecting to host 192.168.0.21, port 5201
[  5] local 192.168.0.13 port 54824 connected to 192.168.0.21 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.09 GBytes  9.35 Gbits/sec    0   1.47 MBytes       
[  5]   1.00-2.00   sec  1.09 GBytes  9.41 Gbits/sec    0   1.60 MBytes       
[  5]   2.00-3.00   sec  1.10 GBytes  9.42 Gbits/sec    0   1.60 MBytes       
[  5]   3.00-4.00   sec  1.10 GBytes  9.41 Gbits/sec    0   1.68 MBytes       
[  5]   4.00-5.00   sec  1.09 GBytes  9.41 Gbits/sec    0   1.68 MBytes       
[  5]   5.00-6.00   sec  1.10 GBytes  9.42 Gbits/sec    0   1.68 MBytes       
[  5]   6.00-7.00   sec  1.10 GBytes  9.42 Gbits/sec    0   1.68 MBytes       
[  5]   7.00-8.00   sec  1.10 GBytes  9.42 Gbits/sec    0   1.68 MBytes       
[  5]   8.00-9.00   sec  1.10 GBytes  9.41 Gbits/sec    0   1.68 MBytes       
[  5]   9.00-10.00  sec  1.10 GBytes  9.42 Gbits/sec    0   1.68 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  11.0 GBytes  9.41 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  10.9 GBytes  9.41 Gbits/sec                  receiver

iperf Done.
[chuck@lizum ~.250]$ 

It’s a shared network (every customer in the same location uses the same internal network), so I don’t want to run NFS unencrypted, and I don’t really want to deal with the pain of setting up Kerberos. I’m using WireGuard to encrypt the connection, and jumbo frames actually improved the speeds quite a bit, since WireGuard adds per-packet overhead (so larger frames reduce the relative overhead). Throughput over WireGuard via their internal network went up from ~800 Mb/s to ~2.4 Gb/s when I switched the MTU from 1500 to 9000.
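For reference, the relevant knob is the tunnel MTU in the WireGuard config. The values below are illustrative for a 9000-byte underlay; WireGuard adds roughly 60–80 bytes of encapsulation per packet, so the tunnel MTU sits a bit below the NIC’s:

```ini
# /etc/wireguard/wg0.conf fragment -- illustrative values only
[Interface]
Address = 10.0.0.13/24
# NIC MTU is 9000; leave headroom for WireGuard's ~60-80 byte overhead
MTU = 8920
```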

The host I’m using explicitly advertises jumbo frames as a supported feature on their internal network and I haven’t had any issues with it :slight_smile:

Thank you. Again, the additional info makes sense. I understand why you’re doing it.
I also use Wireguard now (pfSense).

It’s clear you know what you’re doing and why. I apologize.

If you’re doing NFS, make certain you use synchronous NFS and v4 (TCP), which will give you the POSIX locking you need to keep the database from scrambling when adding media (corruption doesn’t take long to happen during adds because of all the database transactions involved).
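A hedged example of mount options along those lines; the server name and paths are made up, so check nfs(5) for your distro’s exact option names:

```
# /etc/fstab fragment -- hypothetical server/paths
# vers=4 forces NFSv4 (TCP only, POSIX-style locking); sync disables
# async write-back so database writes hit the server immediately
nfsserver:/export/plex  /mnt/nfs/plex  nfs  vers=4,sync,hard  0  0
```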

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.