Does NVMe wear out like normal SSDs with lots of reads/writes?

I started getting “failed to download codec” and playback errors, and in the end it was a failing SSD. Once I changed the transcode directory to something else, all was good. To mitigate for the time being, I set up a RAM disk using 32 of my 64 GB of memory for transcoding.

I hear a lot of good and bad things about this. I’m debating buying (A) another SSD or (B) an M.2 NVMe drive. I know lots of reads/writes wear an SSD out fast (this drive went bad in under a year). I would stick with the RAM drive, but I often have 15-20 friends streaming from my server, so even at low-quality ~3 GB files my 32 GB RAM drive probably isn’t enough.
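Quick back-of-the-envelope on why I doubt 32 GB is enough (pure assumption that every stream could hold a full ~3 GB of transcode data at once, which is the worst case):

```python
# Worst-case RAM disk footprint estimate (assumed numbers, not measured).
streams = 20          # friends streaming at the same time
gb_per_stream = 3     # rough size of a low-quality transcode if fully buffered
ram_disk_gb = 32      # current RAM disk size

worst_case = streams * gb_per_stream
print(f"worst case: {worst_case} GB needed vs {ram_disk_gb} GB RAM disk")
# -> worst case: 60 GB needed vs 32 GB RAM disk
```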

Do M.2 NVMe drives wear out like normal SSDs? If not, I may just buy one and install it just for transcoding. Thoughts? Any better route? For now I’m keeping the RAM disk going until I decide which route to take…

Specs: AMD Ryzen 9 3900 12-core CPU, 64 GB DDR4-3600 memory, P2000, 1000 W Platinum Corsair PSU, Win 10 Pro. System is very new and upgraded.

IMO it doesn’t matter whether you get a SATA SSD or NVMe; both will wear out eventually.

You are better served by buying more RAM and enlarging your ramdisk.

I do not have as many users, but I run transcoding on a RAM-based tmpfs (Linux) that uses 32 of my 64 GB of RAM, and I have not had any problems with transcoding running out of space.
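if you want to sanity-check a setup like that, here is a minimal Python sketch (Linux only; /transcode is just an example path, use whatever your Plex transcoder temp directory is set to) that confirms the directory actually sits on tmpfs and shows how full it is:

```python
# Report which mount the transcode directory lives on and how full it is.
# Assumes Linux (/proc/mounts); the path below is only an example.
import shutil

TRANSCODE_DIR = "/transcode"  # example - match your Plex transcoder temp setting

with open("/proc/mounts") as f:
    mounts = [line.split() for line in f]  # device, mount point, fstype, options...

# pick the mount whose mount point is the longest prefix of the transcode path
mount = max((m for m in mounts if TRANSCODE_DIR.startswith(m[1])),
            key=lambda m: len(m[1]))
print(f"{TRANSCODE_DIR} is on {mount[1]} (filesystem: {mount[2]})")

total, used, free = shutil.disk_usage(TRANSCODE_DIR)
print(f"size {total / 2**30:.1f} GiB, "
      f"used {used / 2**30:.1f} GiB, free {free / 2**30:.1f} GiB")
```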

before you do ANYTHING, you need to verify whether the current ram drive size is actually affecting any users.

guessing simply leads you to spend money where none is needed.
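one simple way to verify it: log free space on the ram drive during peak hours and see how low it actually gets. rough Python sketch below (the drive letter is just an assumption, point it at wherever your ram drive is mounted):

```python
# Log RAM drive free space once a minute during peak streaming hours.
# The drive letter below is an assumption - adjust to your RAM drive.
import shutil
import time
from datetime import datetime

RAM_DRIVE = "D:\\"  # assumed path; use e.g. "/transcode" on Linux

while True:
    total, _, free = shutil.disk_usage(RAM_DRIVE)
    print(f"{datetime.now():%Y-%m-%d %H:%M}  "
          f"free {free / 2**30:.1f} GiB of {total / 2**30:.1f} GiB")
    time.sleep(60)
```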

finally, the transcode temp is generally not an extremely high IO load; you can throw an old HDD (not SSD) at it to keep the wear off the SSD and not use any ram drive at all.

the transcode folder does not have to be any faster than your upload speed.


I did have like 12 streams going last night - 7 of which were transcodes. The RAM drive showed 21 of 23 GB free, so maybe I just leave it as is unless I see it ever start to get really low on space?

My CPU is a monster - I can have 15+ streams going and CPU usage is rarely above 20%, and I’m on fiber internet so upload bandwidth isn’t an issue.

plex will trim transcode chunks as needed when low on transcode space, but yes, monitoring the ram drive usage is a good indicator of when you start hitting some kind of limit.

with fiber internet (gbit upload speed I assume), you should also consider ‘educating’ your users that are transcoding to adjust their client settings to avoid transcoding where possible.

ie set remote quality = original/maximum

you still may not be able to avoid all transcoding, but getting your users to fix their clients will go a long way.

OK, I’ll keep an eye on my current usage and only upgrade if it regularly starts using a high percentage of the allotted 24 GB. As for fiber - yes, gigabit. It’s more like 800/800ish typically, but plenty.

As for setting up for ideal Direct Play - I am very on top of at least TELLING them to set it to Original/Maximum for Remote Streaming (based on Tautulli stats showing transcoding) - it’s whether my asshole friends take the time to do it, ha. Most do, but some never seem to. That, or they claim they did and maybe an update reverts it back to defaults. Do Plex updates ever revert it to the default 2 Mbps, or once it’s set, should it stay at what they set?

One more thing - typically they’d also want to TURN OFF the “Automatically Adjust Quality” setting, correct? TIA

I have never seen a plex client update reset the settings, under any normal circumstances.

auto adjust only comes into play when transcoding is already needed.

if something starts direct play, it will stay direct play and buffer if the connection is too slow.

that said, I only use auto adjust on mobile connections, or users that have slower/crappy internet.


Please forgive my dropping by.

Using an SSD for transcoder cache is, forgive me, wasteful.

A typical HDD can transfer 150-200 MB/sec on SATA-3.
That translates to effectively ~1.5 Gbps of video (before adding TCP/IP overhead).
Therefore, a single HDD can more than saturate a single gigabit link, which translates to more than 10 x 100 Mbps videos playing simultaneously. (10 simultaneous startups would be the only stress.)
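The arithmetic behind that, in case anyone wants to plug in their own numbers (100 Mbps per stream is just the illustrative figure above):

```python
# HDD throughput in MB/s -> Gbps, then 100 Mbps streams per gigabit link.
for hdd_mb_per_s in (150, 200):        # typical SATA-3 HDD sequential throughput
    gbps = hdd_mb_per_s * 8 / 1000     # before TCP/IP overhead
    print(f"{hdd_mb_per_s} MB/s ≈ {gbps:.1f} Gbps")

link_mbps = 1000                       # single gigabit link
stream_mbps = 100                      # illustrative per-stream bitrate
print(f"{link_mbps // stream_mbps} such streams saturate the link")
```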

My recommendation for SSD use is to put PMS’s Library on it.
The database and metadata greatly benefit from the near-zero latency afforded by an SSD.

SSD transfer rates, unless NVMe, are limited by the same SATA-3 interface speed and max out at ~535 MB/sec.

Sounds good - thanks a lot TeknoJunky!

As for your comments, Chuck - I do have the Plex data on its own SSD as well. So my setup is:
C: = OS
D: = 32 GB RAM drive
E: = Media in StableBit DrivePool, 5400 RPM drives (mostly 8 and 10 TB)
P: = PlexData on SSD

So a DrivePool with 12 HDDs (101 TB total), all connected inside a 4U 12-bay server chassis, should never get saturated, correct?

Going to just leave the RAM drive going for now since it doesn’t get anywhere close to using up all available space (just upped it to 32 GB and set it to dynamically expand).
