It depends what transcoding you're wanting to speed up. Many transcoding operations are *very* CPU-limited, meaning the CPU is the bottleneck, not any other part of the system. Granted, this is more true for video than it is for music, but for the most part you could probably speed up your storage system and find it wouldn't matter one bit.
It also might not make any sense to speed up the transcoding at all, if it's Plex's on-the-fly transcoding we're talking about. For example, if your server can transcode video at (say) 2x realtime, it can already keep up with the demands of any single media player. Being able to transcode a single stream at 4x realtime makes no difference whatsoever.
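To put rough numbers on that (purely illustrative figures, not measurements from any real server): a transcoder only has to outrun playback, so speed beyond 1x per stream only starts to matter once several people are watching at once.

```python
# Back-of-envelope: how many simultaneous streams a transcoder can feed.
# The speeds below are made-up illustrative figures, not benchmarks.

def max_concurrent_streams(transcode_speed_x_realtime: float) -> int:
    """Playback consumes 1x realtime per stream, so a server transcoding
    at Nx realtime can (roughly) keep N streams fed; overheads ignored."""
    return int(transcode_speed_x_realtime)

for speed in (2.0, 4.0):
    print(f"{speed}x realtime -> ~{max_concurrent_streams(speed)} stream(s)")
# Both comfortably serve a single player; going from 2x to 4x only helps
# once more than two viewers hit the server at the same time.
```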
Sequential transfer rates sound really impressive for RAM disks, but again they're pointless for this sort of application. The bitrate of a raw CD stream is about 1,411 kbits/sec, or roughly 176KB/sec. Any compressed 2-channel, 16-bit 44.1kHz audio will by definition be less than that. Your SSD can already stream that data well over a *thousand* times faster; what's the point of a RAM disk at that point?
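That raw-PCM figure is easy to derive for yourself; here's the arithmetic as a quick sketch:

```python
# Raw CD audio (PCM) bitrate: sample rate x bit depth x channels.
sample_rate = 44_100   # samples per second
bit_depth   = 16       # bits per sample
channels    = 2

bits_per_sec = sample_rate * bit_depth * channels   # 1,411,200 bit/s
print(f"{bits_per_sec / 1000:.0f} kbit/s")          # ~1,411 kbit/s
print(f"{bits_per_sec / 8 / 1000:.0f} KB/s")        # ~176 KB/s
```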
Blu-ray's maximum bitrate is something in the region of 60Mbit/sec, or call it 7.5MB/sec. That's a paltry data rate for a modern computer to keep up with, and again by definition any compressed media will have a lower rate. A cheap software RAID array of mechanical disks can easily outdo that by a factor of 10 or more, and your SSD is around 33 times faster. For transcoding, there's no point in being able to shovel the data at the CPU any faster, as it simply can't keep up.
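For a sense of how much headroom that leaves, here's the same back-of-envelope comparison in code. The drive throughputs are rough ballpark figures I've assumed for illustration, not measurements of any particular hardware:

```python
# Compare Blu-ray's worst-case bitrate against rough storage throughputs.
# All throughput figures are ballpark assumptions, purely for illustration.
bluray_peak_MBps = 60 / 8   # ~60 Mbit/s peak -> 7.5 MB/s

storage_MBps = {
    "single mechanical disk": 100,   # typical sequential read, roughly
    "cheap HDD RAID array":   300,
    "SATA SSD":               250,   # consistent with the ~33x figure above
    "RAM disk":               5000,
}

for name, rate in storage_MBps.items():
    print(f"{name:>24}: ~{rate / bluray_peak_MBps:.0f}x the Blu-ray peak")
# Every option has headroom to spare; the CPU doing the transcoding gives
# out long before the storage does.
```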
RAM disks are a pain in the bum, to be honest. They excel as temporary scratch space for working on data with next to no access time hit, but they're hopeless otherwise. As soon as you shut the machine down, the RAM disk's contents are lost, and have to be repopulated the next time the system starts up.
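If you did go down the RAM-disk road anyway, the usual workaround is a small script that repopulates it at boot. Something along these lines would do it; the tmpfs mount point and source directory here are just hypothetical placeholders:

```python
# Repopulate a RAM disk from persistent storage after a reboot.
# Both paths are hypothetical -- adjust to wherever your tmpfs is mounted.
import shutil
from pathlib import Path

PERSISTENT = Path("/data/working-set")   # survives a reboot
RAMDISK    = Path("/mnt/ramdisk")        # e.g. a tmpfs mount; empty after boot

for item in PERSISTENT.iterdir():
    dest = RAMDISK / item.name
    if item.is_dir():
        shutil.copytree(item, dest)
    else:
        shutil.copy2(item, dest)
```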
IMHO, if you wanted to make use of the extra RAM on your home server, a better approach might be to get into virtualisation.
I have a vSphere whitebox at home ("whitebox" is the term given to a server that's built from comparatively cheap commodity hardware rather than anything that's expensive and on VMware's hardware compatibility lists). It's a quad-core Ivy Bridge (i5-3470) system with 16GB of RAM, a boot/datastore SSD and a handful of 3TB drives; it runs the free version of vSphere 5.1 (register with VMware and they give you a key to turn the trial version into something permanent), but it could quite easily have been built on top of Microsoft's Hyper-V or something open-source like Xen. I chose vSphere as that's what I wanted to learn at the time.
On there I run an instance of FreeNAS (with the onboard SATA controller handed over to that VM), several instances of Ubuntu 12.04 for various purposes (including one that runs the Plex server), an instance of WinXP for a couple of bits and bobs I'm playing with that are Windows-only, and a small cluster of virtual machines for work.
Doing things this way means I've been free to pick and choose between functions. I like ZFS as a filesystem, so something BSD-based made a lot of sense and FreeNAS was simple to get up and running. However, a lot of other software runs on Linux; if I'd dedicated the entire machine to FreeNAS, I would have been out of luck in running any of that.
Another nice side-effect is the segregation between functions. For example, nothing I do to the VM running Plex can affect my FreeNAS install; I can't bork my data just because I muck up something in Ubuntu. Similarly, I can feel free to fire up new VMs for work or whatever, which I can trash if I make a complete mess of them, and it affects nothing else.
The hardware cost no more than a pre-built Atom NAS box, consumes about the same amount of idle power (Ivy Bridge idles at single-digit watts; it's damn impressive) and offers far more flexibility in what I can do with the system. Instead of hoping everything plays nicely together under a single OS, I have over a dozen servers up and running that all do their own thing. The main cost over a NAS is my time; I had to set everything up myself, and so forth.
The reason RAM is important is that every VM incurs some overhead. I'm running many instances of different (and sometimes the same) operating systems, which isn't free. However, RAM is cheap, and an extra ~100MB per instance of (say) Ubuntu Server is something I'd planned for by building with 16GB.
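As a rough sanity check on the budget (all the per-VM figures here are ballpark assumptions on my part, not measured values):

```python
# Rough RAM budgeting for a small VM fleet; every figure is a ballpark guess.
total_ram_gb       = 16
hypervisor_gb      = 2      # assume a couple of GB reserved for the host
per_vm_alloc_mb    = 512    # a lean Ubuntu Server guest
per_vm_overhead_mb = 100    # rough per-instance virtualisation overhead

per_vm_total_mb = per_vm_alloc_mb + per_vm_overhead_mb
available_mb    = (total_ram_gb - hypervisor_gb) * 1024
print(f"Room for roughly {available_mb // per_vm_total_mb} lean VMs")
# ~23 with these numbers -- plenty of headroom for a dozen-odd servers,
# with RAM left over for heavier guests like FreeNAS.
```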
The problem with virtualisation is that it really has to be planned from the beginning. It's hard to take a machine that already has data on it and turn it into a VM factory (where does the data go in the meantime?). It's also hard to take just any old machine and turn it into a vSphere system; there are certain hardware features that are very desirable for running vSphere, and not all processors, chipsets and motherboards support them.
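The big ones on the CPU side are the hardware virtualisation extensions (Intel VT-x or AMD-V, plus VT-d/AMD-Vi if you want to hand a real device over to a VM the way I did with the SATA controller). If the machine is already running Linux, you can get a quick read on the CPU part from /proc/cpuinfo; a minimal, Linux-only sketch:

```python
# Quick, Linux-only check for hardware virtualisation flags in /proc/cpuinfo.
# This only covers the CPU; BIOS/chipset support (e.g. for VT-d passthrough)
# still has to be confirmed separately.

def cpu_virt_flags(path: str = "/proc/cpuinfo") -> set:
    wanted = {"vmx", "svm"}   # Intel VT-x and AMD-V respectively
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return wanted & set(line.split(":", 1)[1].split())
    return set()

flags = cpu_virt_flags()
print("Hardware virtualisation:", ", ".join(sorted(flags)) or "not reported")
```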
Like I said, it cost me some time to set up, but it's opened up a world of possibilities that I wouldn't otherwise have.