5 Replies Latest reply: Jan 18, 2006 4:34 PM by David_x
Another Justin Hill Level 1 (10 points)
I set up my customer's servers in what I guess is fairly textbook fashion: Ethernet 1 connected to the DSL via a Cisco 828 router, fixed public IP; Ethernet 2 connected to the LAN, fixed private IP; DHCP, NAT, etc. The DNS server is running as a secondary, with the primary out there somewhere on the ISP's servers. So, the one machine is running file and print, mail, web (for webmail) and DNS secondary.

From time to time the internet connection goes down - like today there's an exchange fault and the whole area is out of action. When this happens the server slows to a crawl, taking ages to log local users in, serve files etc. As usual it's happened when I'm not on site so I can't 'see' what the machine's doing. However, since so much of the machine is based on its host name, and that host name isn't resolvable without the internet up, I'm wondering if it would be a better setup to have primary DNS for the domain hosted on the machine itself.

The ISP advises against it (not for any good reason; they just think it would be unreliable and complicated to maintain). The last time I handled primary DNS was using MacDNS with AppleShare IP, so I'm not qualified to judge.

The major convenience I can see if I did this is that laptop users (whose afp and email account settings etc. all use the hostname rather than a local IP address) would still be able to access the server locally when the internet was down. Right now, if DNS goes away they can't log in. I don't really understand why the desktop machines (which use the local 192.168.x.x address for everything) are not able to carry on as normal. I'm assuming it's just that the server itself is having some sort of panic attack because it 'knows' itself by a hostname that it can no longer resolve.

The other side of all this is: how much of a headache am I taking on if I take on the primary hosting for their zone files (note plural; there are 2 domains), especially bearing in mind that their 'real' web site is hosted elsewhere?

G4 and Xserve, Mac OS X (10.4)
  • Another Justin Hill Level 1 (10 points)
    Replying to my own post? Whatever next?

    Searching through the discussions, I found a reference to

    http://www.afp548.com/article.php?story=bestpractices-dns&query=DNS

    - which excellent article seems to answer my question. Testing on my own server, I replaced the secondary zone definition with a primary one. As the article predicts, this is confusing to a DNS newbie, because I'm defining mail.classic-keyboard.com as being at 192.168.4.100 when actually it's at 212.18.226.180. However, everything seems to work (after restarting the server; restarting individual services in SA just doesn't cut it).
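For anyone following the same article, the internal primary zone ends up looking roughly like this. This is only a sketch: the zone name and the mail/public addresses come from this thread, but the 'server' hostname, serial, and TTLs are made-up placeholders.

```
; Hypothetical internal zone for classic-keyboard.com.
; Only mail -> 192.168.4.100 is from the thread; everything else is illustrative.
$TTL 3600
@       IN  SOA server.classic-keyboard.com. admin.classic-keyboard.com. (
            2006011801  ; serial (YYYYMMDDNN, placeholder)
            3600        ; refresh
            900         ; retry
            604800      ; expire
            3600 )      ; negative-cache TTL
        IN  NS  server.classic-keyboard.com.
server  IN  A   192.168.4.100
mail    IN  A   192.168.4.100   ; LAN clients get the private address,
                                ; not the public 212.18.226.180
```

The point of the "confusing" part: LAN clients asking this server for mail.classic-keyboard.com get the private address, while the rest of the world (asking the ISP's servers) still gets the public one.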

    But don't give up on this thread yet. There's still a chance to educate me. My setup has two ethernet cards in the server, one connected to the LAN and the other connected to an ADSL modem. I've done a tweak in the firewall to redirect to ppp0, which I know isn't officially supported by Apple but gives me the nearest approximation to the setups I'm supporting with my real-DSL customers.

    My nagging doubt: in system prefs>network, the card connected to the ADSL modem has its DNS set to the provider's servers; does this mean that the server as a whole will still be trying to resolve itself using these DNS servers - which means chaos when the internet is down? Should I set the DNS address for the 'public' interface to be 192.168.4.100 (the address of the private card)? I'm going to try this, and probably answer my own question again in a minute.
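If it helps anyone checking the same thing: the resolver tries nameservers in the order they're listed, so whichever server is listed first wins. A quick way to see the ordering (using a sample file with the addresses from this thread, rather than the live /etc/resolv.conf on the server):

```shell
# Build a sample resolv.conf like the one the server might end up with:
# internal DNS first, the ISP's server as fallback. Addresses are the
# ones from this thread; substitute your own.
cat > /tmp/resolv.sample <<'EOF'
nameserver 192.168.4.100
nameserver 212.18.226.180
EOF

# Print the nameservers in the order the resolver will try them:
awk '/^nameserver/ {print $2}' /tmp/resolv.sample
```

On 10.4 you can also run `scutil --dns` to see the live resolver configuration the system has actually assembled from all interfaces.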

    Never mind - there's bound to be someone stupider than me who'll learn something from all this.

  • Another Justin Hill Level 1 (10 points)
    Well, despite the apparent illogic of it, I've set both the 'internal' and 'external' LAN cards to use the internal address for their DNS resolution, and the thing seems to work. I've mailed, web'd, mounted volumes and even run the dreaded accounts software (which is notoriously tricky about this sort of change) and everything seems fine. Now I need to make the final test - pulling out the internet connection and checking that everything still works as it should. Assuming that comes up OK, I'll make the mods on the customer's system tomorrow.

    The firewall is all over the place. The only way I can get everything to work (not just in this new config) is by opening everything on all address ranges. The config files and/or SA seem to be corrupt. Making changes like closing ports or adding/removing advanced rules either revert or end up different after a reload. But that's another story.
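On the firewall point: 10.4's Server Admin GUI sits on top of ipfw, so when the GUI and reality disagree you can at least compare the rules the kernel is actually enforcing against what the config files say. These need root, and the config path is from memory, so treat it as a starting point rather than gospel:

```shell
# Show the rules actually loaded in the kernel - this is what counts
# at runtime, regardless of what the Server Admin GUI displays:
sudo ipfw list

# Server Admin's firewall files live under /etc/ipfilter/ on 10.4 (path
# from memory - verify on your own box). If these and 'ipfw list'
# disagree after a reload, something is mangling the config between
# the GUI and the kernel.
ls -l /etc/ipfilter/
```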

    Input from experts still gratefully received...

  • davidh Level 4 (1,890 points)
    One piece of advice: don't try to make your Xserve do NAT (two NICs) when a dedicated hardware box (I'd recommend a Zyxel Zywall, but if you must, even a cheapo Linksys box) can do it better.

    Go out and buy DNS & Bind by Albitz & Liu
    http://www.oreilly.com/catalog/dns4/

    Liu is something of a known expert on this topic; you'll find some helpful info here:
    http://www.menandmice.com/9000/9310DNS_CornerQuestions.html

    Good luck !
  • Another Justin Hill Level 1 (10 points)
    Yeeeesssss..... well I bow to your superior knowledge, David. Thing is, having battled with the typical 'prosumer' all-in-ones from Draytek, D-Link, Belkin etc. I rather fancied the degree of control, flexibility and elegance that OSX Server would allow. This daydream has been shattered somewhat by learning the limitations of the SA GUI (and it gets shattered a bit more every day), but my customers certainly get attracted to OSX Server by its 'one box solution' image. Why (if you've got the time to discuss it) do you say that all this supposedly world-class Unix-based DHCP, NAT, Firewall etc. is not as good as a hardware box with a nasty through-the-letter-box web configurator? Serious question - I'm keen to learn.

    Moving on (and maybe I should start another thread), here's what happened when I visited the site in question today. The DSL line had been fixed by the time I got there but the server didn't know this. It was still running in stuck-in-glue mode and I stared at every log I could find trying to see what was pulling it down. I couldn't find anything other than the obvious stuff like SMTP outgoing failures and DNS lookup errors.

    The first thing I did then was to restart the Cisco router. No change. Then I discovered that the server couldn't be pinged from another machine attached directly to the router, nor the other way around. To cut a long story short the ethernet was dead. Pulling the plug out and plugging it back into another port didn't help (although all the relevant lights lit up). Is there a command line way of truly 're-booting' a network card? I couldn't find one. I tried to fake it by fooling with the card settings (changing from auto to 10Mb etc) and saw the expected 'link down, link up' messages in system.log but still no response on incoming or outgoing pings. In the end I rebooted the server and (obviously) everything came back up fine. I hate not knowing why something happened.
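For the record, the closest thing I've found to 're-booting' a card from the command line is bouncing the interface. This assumes the LAN card is en0 (check with `ifconfig -a` first) and needs root; it resets link state, though not the driver itself:

```shell
# Take the interface down, wait a moment, then bring it back up.
# 'en0' is an assumption - confirm your interface name with: ifconfig -a
sudo ifconfig en0 down
sleep 2
sudo ifconfig en0 up

# Then confirm the card answers on the LAN before testing further out
# (192.168.4.100 is the server address from this thread):
ping -c 3 192.168.4.100
```

Whether that would have revived a card that survived a cable re-plug with lights lit but no traffic, I can't say; it may be the same class of reset you already triggered by changing the media settings.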

    The issue here is - how can I set the server so that loss of the router and/or internet connectivity doesn't hurt it and restoration of the internet happens seamlessly without requiring intervention? As all of us know, restarting servers with 20 busy architects chasing deadlines attached doesn't go down well. I've implemented the stuff discussed above on this server; the stern faces on the users prevent me from testing whether it makes any difference to the way it behaves during an outage.

    I have started taking advantage of a 'local' primary DNS by adding A records for fixed IP local devices like printers and I'm hoping that I may be able to improve printing reliability (or at least logging!) - but I haven't thought through the possibilities yet.
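In case it's useful to anyone doing the same: adding a LAN device to the local primary is just one more A record in the zone file. The name and address below are made-up placeholders, as is the commented-out reverse record:

```
; Hypothetical printer record - substitute your own name and IP.
laserjet    IN  A   192.168.4.50
; Optional matching PTR in the 4.168.192.in-addr.arpa reverse zone:
; 50         IN  PTR laserjet.classic-keyboard.com.
```

With the reverse record in place, the printer shows up by name in logs instead of as a bare IP, which is most of the logging win.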

    Oh by the way I've got the DNS & Bind book, it's one of the first books I bought when AppleShare IP went away. It scared the heck out of me. With MacDNS, I'd had perfectly nice authoritative servers for customers' domains running over their leased lines in 1997, with the world happily finding web, mail and file services and 'big' servers obediently transferring from my machines. Finding myself with OSX Server 10.1 and that book, I ran away from DNS altogether for several years.

    Long post - boring - yes I know. But there must be lots of people like me who have been tending people's networks for years but aren't dyed in the wool hackers with lines of C tattooed on their foreheads. I get tired when I see reply posts from experts that basically say 'we can see you're a total amateur so go to school and come back when you know what [insert given technology] actually does'. I've been doing stuff for customers for 15 years and all my clients are people who chose Mac because they neither know nor care what operating systems do and just want to draw houses (or make records, or publish artists or...) and I've kept them happy - by continual learning, asking, reading, and sometimes good old-fashioned poking the thing with a stick until it comes right. I really appreciate forums like this.
  • David_x Level 4 (3,010 points)
    My own experience of NAT failures on the server, with 2 NICs, was that the NAT config in the GUI dropped one NIC and had to be reset in the NAT service's GUI. However, I only have my home (test) server (OD Master) like that now and since changing to 10.4 I can pull out the WAN cable without anything untoward happening. I do run my own local DNS which resolves the server's hostname to its local static IP (and reverse). I don't know exactly when the NAT failures stopped happening though so cannot say for sure what might have helped it.

    I would now no longer suggest to anyone that they use the server for NAT (or even firewall), particularly since the alternative 'box' is so cheap. I like separating my functions as much as possible - at least no-one complains about not being able to browse the internet if the server has to go down!

    And if it prevents the server complaining if the WAN link goes down - then all the better surely.

    I do keep DHCP to the server, though, as the additional info which can be handed out at the same time as an IP is very useful.

    -david

    PS. nice stream of consciousness in your last post