Reminder since it's a pair of SSDs and most people will probably look into using this for their DB store: If you use current generation controllers/software & SSDs, you're going to have a bad time if you turn on RAID and don't know exactly what you're doing.
TRIM ( https://en.wikipedia.org/wiki/TRIM ) isn't supported with RAID on SSDs today on hardware controllers, and most Linux distributions don't support TRIM on RAID out of the box if you're doing software RAID, so you're going to see performance plummet like a rock after one full pass of writes on the disk. In many RAID configurations, you'll also zero-write the entire disk when formatting it, so performance is going to suck from the get-go. For this reason, even if you have a tiny database and don't expect to write 1TB worth of data, your performance might still suck. Personally, I haven't tried Linux software md TRIM in production; the patch is pretty recent, so you're on your own here. (If possible, scaling out horizontally may be a solution to consider for redundancy. I have no idea what Amazon is using for SSDs, but recent SandForce generations fail all the time, so plan for that.)
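If you want a quick sanity check on whether discards actually make it through your particular stack, something like this works (device names and mount points here are placeholders — substitute your own):

```shell
# Show discard granularity for every layer of the block stack.
# Non-zero DISC-GRAN / DISC-MAX columns mean that layer passes
# discards through; all zeros mean TRIM is being swallowed somewhere.
lsblk --discard

# Manually trim a mounted filesystem (needs kernel + fs discard support).
# -v prints how many bytes were actually discarded.
fstrim -v /mnt/ssd

# Alternatively, mount with online discard so the fs issues TRIMs on
# every delete -- note this can add latency on some drives, so a
# periodic fstrim from cron is often the safer choice.
mount -o discard /dev/sdb1 /mnt/ssd
```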
If you don't know to look for this issue, you're going to be scratching your head when your RAID10 SSD configuration's write throughput is worse than a single 7200rpm drive's. On the other hand, IOPS on SSDs are AMAZING for databases/datastores. Amazon may have solved this for you already behind their virtualization layer, and they might be running their own software striping underneath whatever RAID you're doing, so be sure to test it out fully first.
Don't assume that it's actually a pair of SSDs under the hood. It's very likely ~8 SSDs md/LVM'd together as two volumes (JBOD, no RAID). I'm pretty sure they've taken steps to ensure that TRIM is working, as they'd burn through drives/complaints too quickly otherwise.
That said, you're absolutely right about being cautious/not RAIDing the volumes. There are almost no RAID configurations that support TRIM at present, so it's definitely not a good idea to be RAIDing up these drives. Just go JBOD.
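For what it's worth, going JBOD on the two volumes is only a couple of commands. A minimal sketch, assuming the ephemeral SSDs show up as /dev/xvdb and /dev/xvdc (check yours with lsblk first):

```shell
# Format each ephemeral SSD as its own independent filesystem --
# no md, no striping, no parity init pass.
mkfs.ext4 /dev/xvdb
mkfs.ext4 /dev/xvdc

# Mount them as two separate data directories.
mkdir -p /data1 /data2
mount -o noatime,discard /dev/xvdb /data1
mount -o noatime,discard /dev/xvdc /data2
```

The noatime/discard mount options are my suggestion, not anything Amazon prescribes — verify that discard actually works end-to-end on your instance before relying on it.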
What's a good way to utilize the two disks for your datastore? Run MongoDB, for example, on one disk and back up to the other? Run one sharded instance on one disk and another sharded instance on the other disk?
With MySQL (and Oracle before that) it was common to simply move different parts of the database data files to different disks. I don't use Mongo so I can't speak to that, but the concept works pretty much universally. See here for more information about spreading your database around multiple disks: http://www.mysqlperformanceblog.com/2010/12/25/spreading-ibd...
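The symlink trick that post describes boils down to something like this. A sketch only — paths and the table name are made up, it requires innodb_file_per_table, and MySQL must be stopped while the tablespace file is moved:

```shell
# Stop MySQL so the per-table .ibd file isn't open.
service mysql stop

# Move one hot table's tablespace onto the second disk...
mv /var/lib/mysql/mydb/big_table.ibd /data2/mysql/mydb/big_table.ibd

# ...and leave a symlink behind so InnoDB still finds it at the old path.
ln -s /data2/mysql/mydb/big_table.ibd /var/lib/mysql/mydb/big_table.ibd

service mysql start
```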
In addition, keep in mind that MD (software raid) does not support discards. In contrast, the logical volume manager (LVM) and the device-mapper (DM) targets that LVM uses do support discards. The only DM targets that do not support discards are dm-snapshot, dm-crypt, and dm-raid45. Discard support for the dm-mirror was added in Red Hat Enterprise Linux 6.1.
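Concretely, if you're on LVM and want discards to flow all the way down to the SSD, there are two knobs to check (lvm.conf syntax as of the RHEL 6-era LVM — double-check on your distro):

```shell
# 1) Filesystem level: mount with -o discard (or run fstrim
#    periodically) so the fs issues discards down to the LV.
mount -o discard /dev/vg0/data /data

# 2) LVM level: make lvremove/lvreduce discard the freed extents too.
#    Set this in the devices section of /etc/lvm/lvm.conf:
#        issue_discards = 1
grep issue_discards /etc/lvm/lvm.conf
```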
Red Hat also warns that software RAID levels 1, 4, 5, and 6 are not recommended for use on SSDs. During the initialization stage of these RAID levels, some RAID management utilities (such as mdadm) write to all of the blocks on the storage device to ensure that checksums operate properly. This will cause the performance of the SSD to degrade quickly.
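You can actually watch that initialization write pass happen. A sketch, with placeholder device names — and note the caveat on the flag that skips it:

```shell
# Creating an md RAID1 array kicks off a full resync that writes
# to every block on the members:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Watch the resync chew through the whole device.
cat /proc/mdstat

# mdadm does offer --assume-clean to skip the initial write pass,
# but use it with care: checksums/parity are then never verified,
# so it's only sane on genuinely blank, identical members.
```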