I like this idea - it would remove the drive speed limitation, and it could easily be configured with a maximum memory usage limit so it doesn't inadvertently overload the server's RAM.
On Windows systems, I think it would be interesting to use FILE_ATTRIBUTE_TEMPORARY to create temporary files that will only be written to disk if there is memory pressure; otherwise they'll stay in RAM. This would require a change to Plex's transcoder (FFMPEG) to create its files with this attribute set.
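If it helps, the attribute is just one extra flag on the CreateFile call. A minimal Win32 sketch (the path and data are made up; the transcoder would of course generate its own segment files):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical path -- just shows where the attribute goes. */
    HANDLE h = CreateFileW(L"C:\\Transcode\\segment000.ts",
                           GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ,          /* the web server still needs to read it */
                           NULL,
                           CREATE_ALWAYS,
                           FILE_ATTRIBUTE_TEMPORARY, /* hint: keep data in cache, lazy-write to disk */
                           NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFileW failed: %lu\n", GetLastError());
        return 1;
    }

    const char data[] = "transcoded segment bytes would go here";
    DWORD written = 0;
    WriteFile(h, data, (DWORD)sizeof(data), &written, NULL);

    /* Adding FILE_FLAG_DELETE_ON_CLOSE as well would let Windows avoid ever
     * flushing the file to disk, but then it vanishes when the handle closes. */
    CloseHandle(h);
    return 0;
}
```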
On *nix systems, pointing Plex to a tmpfs mount could help -- just make sure you make it large enough and have enough swap space. You could do this on your own without any changes to PMS software. For some of you, /tmp may already be tmpfs, but may not be large enough.
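For anyone who'd rather script it than edit fstab, a tmpfs mount is a one-liner; here's roughly what it looks like via the mount() syscall (the mount point and size are just examples, and it needs root):

```c
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/stat.h>
#include <sys/mount.h>

int main(void)
{
    /* Example mount point -- adjust to taste. */
    const char *target = "/mnt/plex-transcode";

    mkdir(target, 0770);   /* ignore the error if it already exists */

    /* Equivalent of: mount -t tmpfs -o size=4g,mode=0770 tmpfs /mnt/plex-transcode */
    if (mount("tmpfs", target, "tmpfs", 0, "size=4g,mode=0770") != 0) {
        fprintf(stderr, "mount failed: %s\n", strerror(errno));
        return 1;
    }
    printf("tmpfs mounted at %s -- point the PMS transcoder temp directory here\n", target);
    return 0;
}
```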
On OSX you could create a RAM disk, but you can't overcommit (make the volume larger than the physical RAM you have available) like you can with tmpfs -- this could cause problems for sync transcodes.
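For the curious, a RAM disk on OSX is just hdiutil plus diskutil; a rough sketch that shells out to them (2 GiB, and the volume name is only an example):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Creates a 2 GiB HFS+ RAM disk (4194304 sectors * 512 bytes) by shelling out
 * to hdiutil/diskutil. Minimal error handling; this is only a sketch. */
int main(void)
{
    char device[64] = {0};

    FILE *p = popen("hdiutil attach -nomount ram://4194304", "r");
    if (p == NULL || fgets(device, sizeof(device), p) == NULL) {
        fprintf(stderr, "failed to create the RAM device\n");
        return 1;
    }
    pclose(p);
    device[strcspn(device, " \t\r\n")] = '\0';   /* keep just the /dev/diskN token */

    char cmd[128];
    snprintf(cmd, sizeof(cmd), "diskutil erasevolume HFS+ PlexTranscode %s", device);
    return system(cmd) == 0 ? 0 : 1;
}
```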
I know that I could install RAMDisk, set up a 2 GB drive, and use that as the temp dir - but for everyone else who either doesn't want to or can't, it would be nicer if PMS could do it automatically.
Also - there needs to be a bit of throttling so that the disk doesn't run out of space... I've noticed that PMS sometimes leaves segments behind, which take up a bit of space.
I guess the big challenge is to grab the output of FFMPEG, keep it in RAM, and then expose it somehow to the web server. It's not impossible, just challenging :)
The easiest way for PMS to implement this, without heavy modification of FFMPEG, is probably what I pointed out above (FILE_ATTRIBUTE_TEMPORARY for Windows, tmpfs for *nix). Windows' FILE_ATTRIBUTE_TEMPORARY is pretty slick, but I don't think anything like it exists on Linux/OSX. I'm also not sure how difficult it would be to implement FILE_ATTRIBUTE_TEMPORARY in FFMPEG, but my guess is that it would be relatively easy. tmpfs would just require a simple setup script and some configuration options.
Unfortunately it doesn't help OSX users. Someone else would need to chime in there.
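On the setup-script idea: about the only sanity check such a script really needs is to confirm the transcode directory is actually RAM-backed before enabling it. A Linux-only sketch (the path is just an example):

```c
#include <stdio.h>
#include <sys/vfs.h>       /* statfs() on Linux */
#include <linux/magic.h>   /* TMPFS_MAGIC */

int main(void)
{
    /* Example path -- whatever the transcoder temp directory is set to. */
    const char *dir = "/mnt/plex-transcode";
    struct statfs fs;

    if (statfs(dir, &fs) != 0) {
        perror("statfs");
        return 1;
    }
    if (fs.f_type == TMPFS_MAGIC)
        printf("%s is tmpfs (RAM-backed), ~%llu MB free\n", dir,
               (unsigned long long)fs.f_bavail * (unsigned long long)fs.f_bsize / (1024 * 1024));
    else
        printf("%s is NOT tmpfs -- transcode segments will hit the disk\n", dir);
    return 0;
}
```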
Super slow compared to RAM, but plenty fast enough to handle the I/O required for a few simultaneous transcoded streams. Regardless, I wouldn't complain about having the option.
Yeah, my 16 GB of RAM is largely underutilized and my 3 TB hard drive is super slow, so +1.
I would also say that for real-time transcoding it doesn't make any difference, because the disk is still faster than the playback speed of the video and therefore doesn't really limit it. I don't know how it looks with 10 clients at the same time, but then again, I don't know anybody who has 10 clients streaming (with transcoding) at the same time. And for those cases you could still use a ramdisk and put the temp directory on it.
But what I think could be the bigger problem is if you use an SSD. It depends on how many of your videos actually have to be transcoded, but if it's a lot, that means a lot of writes to the SSD, and we all know an SSD doesn't have unlimited write cycles (still, I don't think you can transcode enough to wreck an SSD before it dies a natural death, so...).
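To put some rough numbers on both points (everything below is an illustrative assumption, not a measurement):

```c
#include <stdio.h>

int main(void)
{
    /* Write load: ten simultaneous 10 Mbit/s transcoded streams. */
    double stream_mbit_s = 10.0;
    double streams       = 10.0;
    double write_mb_s    = stream_mbit_s * streams / 8.0;
    printf("10 streams @ %.0f Mbit/s = %.1f MB/s of writes "
           "(a modern HDD sustains ~100+ MB/s sequential)\n",
           stream_mbit_s, write_mb_s);

    /* SSD endurance: say a 250 GB drive rated for ~150 TB written (TBW). */
    double tbw_gb   = 150.0 * 1000.0;
    double daily_gb = 50.0;            /* transcode output written per day */
    double days     = tbw_gb / daily_gb;
    printf("At %.0f GB of transcode output per day, 150 TBW lasts ~%.0f days (~%.1f years)\n",
           daily_gb, days, days / 365.0);
    return 0;
}
```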
You'd have to be beating it up pretty hard 24x7 for years and years to run through the write-cycle expectancy of an SSD. With a regular average client workload they should last 10+ years IIRC. I did a test one time with two web clients, two tablets, two phones, and two instances of PHT running at once, and I was fine. I need to do more scientific testing since I don't recall how many were DirectPlay, how many were transcode, etc., but I know that while I have the upgrade itch, it doesn't really make sense for me to bother right now. The new transcoder is even faster and more efficient than it used to be, so... I guess I get to sit on my current build for a while longer. I may repackage it into an HTPC case, though, as it currently lives in a SilverStone FT02 on the floor next to the a/v stack, which is kind of, erm, imposing :).
I suppose it depends on how full the SSD is, and how much extra area that particular SSD has reserved for wear leveling.
While this seems like a simple request, it can actually get complicated quickly. A RAM disk is probably the best option for those who want to avoid wear on their SSD; otherwise, for a monster server, a cheap, oversized, dedicated SSD for the transcode cache would likely be more than enough.
Yep, both very true - it depends on how well the controller handles TRIM, spare area, and wear leveling. Unless you have a super busy PMS, though, a decent HDD should be plenty fast.