Ideally, I'd like to have two OS X Servers running in different geographic locations (and thus on different public IP addressing schemes) so I would imagine that DNS may be involved.
You're opening up a whole can of worms. Are you ready for this? 🙂
Let's start with the easy one - web service.
The first issue is one of content replication.
Is your web content static, i.e. a series of .html pages and images, or is it dynamic (database-driven content, for example)?
If it's static content, then a periodic rsync might be sufficient. If it's dynamic, though, you have a whole other set of issues to deal with: how to replicate your data to the second site, and how to manage fallback (when you want to go back to the main site).
Some database engines (such as MySQL) include replication technology which might be sufficient for you, but you'll need some MySQL skills to set it up.
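As a rough sketch, classic MySQL master/slave replication comes down to a few settings (the server IDs and hostnames below are made up, and details vary by MySQL version):

```
# my.cnf on server A (the master): give it an ID and enable the
# binary log, which records every change for the slave to replay
[mysqld]
server-id = 1
log-bin   = mysql-bin

# my.cnf on server B (the slave): just needs a distinct ID
[mysqld]
server-id = 2
```

On server B you then point the slave at the master with a `CHANGE MASTER TO MASTER_HOST='server-a.example.com', ...` statement (using a replication account you create on the master) and run `START SLAVE;`. Getting the initial data snapshot consistent is the fiddly part, which is where those MySQL skills come in.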
Next comes mail. Here I recommend first evaluating how long you expect to run on the secondary site. If it's only a matter of hours, don't bother.
The reason I say this is that the cost of fallback is high here. If your main mail server is at location A and you fail over to location B, you have all kinds of issues synching mailboxes: messages that arrived in user X's mailbox on server B need to be merged into his mailbox on server A, and a simple file-based sync is not sufficient.
SMTP (the mail transport protocol) has significant fault tolerance built in. If your server is offline for several hours, remote mail servers will just queue up the messages, retrying periodically until the mail goes through or a timeout expires (typically 3 to 5 days, depending on the sending server's configuration).
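To make that concrete: on Postfix (which is what OS X Server's mail service is built on), the relevant knobs on the *sending* side look like this. The values shown are illustrative; Postfix's stock queue lifetime is 5 days:

```
# /etc/postfix/main.cf on a sending server

# How long to keep retrying a deferred message before bouncing it
maximal_queue_lifetime = 5d

# Warn the sender after four hours of failed attempts
# (this warning is disabled by default)
delay_warning_time = 4h
```

The point is that this retry behavior belongs to the servers sending you mail, not to you, so a short outage is absorbed automatically.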
Even if that isn't sufficient for you, you can set up multiple MX records in DNS, each with a different priority. Your mail server at location B can then accept mail for your domain, but only on a store-and-forward basis: it holds the mail until server A comes back online. That way you avoid the problem of mail being filtered into mailboxes that later need to be synched.
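In zone-file terms that looks something like this (the hostnames are examples; the lower preference number wins, so remote servers try mail-a first and only fall back to mail-b when it's unreachable):

```
; Primary mail server at site A, backup/store-and-forward at site B
example.com.    IN  MX  10  mail-a.example.com.
example.com.    IN  MX  20  mail-b.example.com.
```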
This does mean that your users won't get mail for a while (until either server A comes back up, or you promote server B to a full mail server instead of store-and-forward), but at least no mail will bounce.
Once you have those elements worked out, the actual failover process typically involves changing your DNS zone data to include the failover addresses (unless you're running some fancy load-balancing option that handles this in real time). How long it takes clients to pick up the new addresses and start using the failover server depends on many factors, not least the TTL on your zone data.
If your TTL is 24 hours, for example, then anyone who looked up your address in the previous 24 hours will use their cached response (server A) and not notice the failover address until the TTL expires.
Most people get around this by using a lower TTL. The tradeoff is more queries hitting your DNS servers, since resolvers can't cache replies for as long, so you need to balance TTL (and therefore DNS server load) against failover time.
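For example, ahead of a planned failover you might drop the TTL on the records you expect to change (addresses here are from the reserved documentation range, and the values are illustrative):

```
; Normal operation: clients may cache this for a full day
www.example.com.  86400  IN  A  203.0.113.10

; Before/during failover: cache for only five minutes, so
; clients pick up the new address quickly after the change
www.example.com.    300  IN  A  203.0.113.10
```

One caveat: you have to lower the TTL at least one old-TTL period *before* the failover, since clients who cached the record under the old TTL won't re-query until it expires.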