Starting with Windows Server 2003 R2, DFS Replication uses Remote Differential Compression (RDC), which sends only the changed portions of a file across the wire instead of the entire file. This is especially handy when replicating across a wide area network, but it is also a good fit for this situation.
If you set up two or more folder targets using DFS Management, the wizard should have asked whether you wanted to set up replication at the same time. If you did things in a different order, you can configure replication manually after the fact, also from the DFS Management tool.
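If you prefer to script it, the same setup can be sketched from the command line. The following is a minimal example using the DFSR PowerShell module that ships with newer versions of Windows Server (on 2003 R2 itself the rough command-line equivalent is the dfsradmin tool); the server names, group name, folder name, and paths are placeholders, not values from this article:

    # Create a replication group and a replicated folder (names are hypothetical).
    New-DfsReplicationGroup -GroupName "WebContent"
    New-DfsReplicatedFolder -GroupName "WebContent" -FolderName "wwwroot"
    # Add both content servers (placeholder names) to the group.
    Add-DfsrMember -GroupName "WebContent" -ComputerName "WEB1","WEB2"
    # Point each member at its local copy; WEB1 seeds the initial replication.
    Set-DfsrMembership -GroupName "WebContent" -FolderName "wwwroot" -ComputerName "WEB1" -ContentPath "D:\wwwroot" -PrimaryMember $true -Force
    Set-DfsrMembership -GroupName "WebContent" -FolderName "wwwroot" -ComputerName "WEB2" -ContentPath "D:\wwwroot" -Force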
Changes do not replicate between the servers immediately, so DFS does not work well for transactional data where all servers need to be 100% in sync within a couple of seconds of each other. For a mostly read-intensive situation like a website, however, DFS works great.
You have a few topology options, but in our situation we will use full mesh, which means every server replicates to every other server. In a failure situation, content changes made on the backup server will replicate back to the primary server once it comes back online.
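Continuing the hypothetical PowerShell sketch above, a full mesh is just a two-way connection between every pair of members. With two servers that is a single call, since Add-DfsrConnection creates both directions by default:

    # Connect WEB1 and WEB2 in both directions (a full mesh for two members).
    Add-DfsrConnection -GroupName "WebContent" -SourceComputerName "WEB1" -DestinationComputerName "WEB2"

With three or more members you would repeat this for each pair of servers.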
How Good Is It?
DFS failovers are impressive. If the primary content server becomes unavailable, DFS fails over to the backup content server within a few seconds. In this web farm situation, almost every time the primary server fails, the HTTP request simply retries for a few seconds until IIS is able to serve up a successful page.
This means there is effectively zero downtime if the primary content server fails. The only issue I ran into while testing was when a page load was halfway done at the moment the primary server failed: with master pages or web controls, ASP.NET could process half of the page and then fail while processing the rest. This is pretty rare, and I would say the failover is as close to perfect as it can be.
A failure of the namespace server is even smoother,
resulting in no noticeable downtime or slowness.