I have a wiki running off of Mountain Lion Server that is open to the public. Will web crawlers index the wiki? And if so, how do I go about preventing this? I know I can use a robots.txt file, but where do I put it? Thanks in advance.
Since the wiki pages are dynamically generated by the wiki service, there is no supported way to serve a robots.txt file for them. I also don't believe the wiki system itself has any setting to block crawlers. In essence, if it's public, it can be crawled.
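For reference, this is what a robots.txt that asks all crawlers to stay away would look like if you could serve it from the web root (note: whether Mountain Lion Server's wiki service will actually serve such a file is the open question here, and well-behaved crawlers honor it voluntarily):

```
# robots.txt — must be served at the site root, e.g. http://yourserver/robots.txt
# Disallow all compliant crawlers from the entire site
User-agent: *
Disallow: /

# Or, to block only the wiki paths (hypothetical path, adjust to your setup):
# User-agent: *
# Disallow: /wiki/
```

Keep in mind robots.txt is advisory only; crawlers that ignore it can still index a publicly reachable page.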