It worked pretty well, but the write speed was probably about 1/4 to 1/5 that of a local database. We've since moved to log-shipping as a replication model.
We do still use DRBD for some replicated NFS shares, and it's proven to be pretty trouble-free. (Knock on wood.)
Let me ask you the following re-factored version of your question and see if it illustrates the value of DRBD:
> What is the advantage of "network I/O" over I/O? It seems to me that file copying is best left below the OS.
Basically, DRBD gives you RAID-like replication of data across multiple physical hosts. This makes deployment of high-availability services stupidly easy, if sometimes less efficient than application-level replication protocols.
For mail spools, NFS shares, or other simple file-based systems, though, it works pretty well.
The description of DRBD as software RAID is a bit misleading, since in most deployment scenarios, only the "primary" node of the pair can actually write data to the replicated volume.
Its real use is in building high-availability services -- since the "mirrored" disks aren't in the same box, you can have an up-to-the-second copy of the data on a separate machine. The two systems don't even have to be in the rack next to each other, which is a tough trick to pull off with normal hardware RAID.
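To make that concrete, here's a minimal sketch of a DRBD resource definition for a two-node pair. The hostnames (`alpha`, `bravo`), IP addresses, and backing device (`/dev/sdb1`) are placeholders -- adjust for your own setup:

```
resource r0 {
  protocol C;              # synchronous: a write completes only after both nodes ack
  device    /dev/drbd0;    # the replicated block device you actually mount
  disk      /dev/sdb1;     # local backing disk (placeholder)
  meta-disk internal;

  on alpha {               # hypothetical hostname of node 1
    address 10.0.0.1:7789;
  }
  on bravo {               # hypothetical hostname of node 2
    address 10.0.0.2:7789;
  }
}
```

After bringing the resource up on both nodes (`drbdadm up r0`), you promote exactly one of them with `drbdadm primary r0` and mount `/dev/drbd0` there -- which is the single-writer behavior described above.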
As I said in my other comment, though, it is a lot slower than native disk writes.