Apple publishes this sparse document http://support.apple.com/kb/HT4187 but I am going to try to convince you not to do this.
Unless you are looking to employ a self-service model in which the end user images her own machine, there is really no need to go to all this effort to provide NetInstall across the entire organization. And if self-service is the goal, then thin imaging is probably a better approach.
That being said, in most environments the imaging process is performed once and the machine is then delivered to the end user. Frequent re-imaging, in my experience, is rare. Yes, there are test labs where you need to reset machines to a default state, but I've found Deep Freeze to be a better (and quicker) solution for that.
Next is the threat of accidental imaging. If you provide access to the imaging server at all times, it is possible that an end user will accidentally boot into the imaging solution, with a chance of complete data loss. Yes, this is rare, but I actually had a user leave a binder on a keyboard once and the machine booted straight to the NetInstall window. Luckily, we had not configured imaging for unattended mode, so no damage was done. But yes, I did panic.
So I would suggest keeping NetInstall isolated to the subnet where the IT lab and the server live. After all, the imaging process is most often handled by IT. If this is your model, then image in the lab and deliver to the end user. You avoid unnecessary modification to the switching infrastructure for a process that should not be used that frequently.
Apple Consultants Network
Apple Professional Services
Author "Mavericks Server – Foundation Services" :: Exclusively available in Apple's iBooks Store
We do annual re-imaging of three student labs in a high school. Unlike Strontium90, we find it very useful to re-image student computers on demand. We have coordinated with our network folks to enable BSDP on all LAN segments at the school. It is a hassle for them to configure the switches to do this: on Cisco gear they have to add our Mac server as an IP helper (DHCP relay) address on each segment. We are currently attempting to migrate from monolithic Apple imaging to thin imaging with the Casper product, but so far we are NOT finding thin imaging to be as robust. When not actively imaging, we just turn off the service to prevent accidents.
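For anyone facing the same switch configuration, the Cisco side described above boils down to one relay entry per client VLAN. This is only a sketch; the interface name and addresses are placeholders for your own topology:

```
! Placeholder Cisco IOS config: forward the lab VLAN's DHCP/BOOTP
! broadcasts (which carry BSDP) to the NetBoot/NetInstall server.
interface Vlan20
 description Student lab segment
 ip helper-address 10.0.1.5
! 10.0.1.5 stands in for the Mac server's address.
```

Because BSDP rides inside ordinary DHCP packets (vendor-specific options), the standard `ip helper-address` relay is all the switch needs; no BSDP-specific feature is involved.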
Some admins use an rsync approach overnight to restore clients to a known configuration; I believe Apple Stores use an overnight imaging process to restore demo machines.
With regards to thin imaging: I used to use monolithic (a.k.a. 'fat') imaging but have switched to thin imaging. I had a look at Casper, but I find DeployStudio to be the best solution for imaging; other people's opinions may differ. My current approach is as follows.
- Create a virgin, never-booted (i.e. 'thin') image using either InstaDMG or AutoDMG
- Load the resulting image onto the DeployStudio repository
- Define a workflow in DeployStudio that restores the thin image, copies preferences to the User Template folder, installs a local admin account, enrolls in Profile Manager, binds to Open Directory, installs packages, writes my choice of system-wide preferences, runs Software Update, etc.; in other words, the traditional imaging workflow tasks
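A subset of those workflow tasks could be sketched as a plain firstboot-style script. Account names, the OD server, and package paths below are placeholders, and the commands (`sysadminctl`, `dsconfigldap`, `installer`, `defaults`, `softwareupdate`) exist on recent macOS releases but should be verified against your version; this is a readable sketch, not DeployStudio's own implementation:

```shell
#!/bin/sh
# Sketch of post-restore tasks, roughly matching the workflow steps above.

create_local_admin() {
  # Placeholder credentials; sysadminctl ships with macOS 10.10 and later.
  sysadminctl -addUser localadmin -fullName "Local Admin" \
      -password "ChangeMe!" -admin
}

bind_to_od() {
  # od.example.com is a placeholder Open Directory master.
  dsconfigldap -a od.example.com
}

install_packages() {
  # Install every package staged in a placeholder firstboot directory.
  for pkg in /private/var/firstboot/pkgs/*.pkg; do
    [ -e "$pkg" ] || continue
    installer -pkg "$pkg" -target /
  done
}

apply_settings_and_update() {
  # Example system-wide preference, then install all pending updates.
  defaults write /Library/Preferences/com.apple.loginwindow \
      LoginwindowText "Property of Example Corp"
  softwareupdate -i -a
}

# Only run for real on macOS; elsewhere this file is just a sketch.
if [ "$(uname)" = "Darwin" ]; then
  create_local_admin
  bind_to_od
  install_packages
  apply_settings_and_update
fi
```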
This works fine for me and means the resulting image is 'installed' specifically for the model of Mac being imaged. No more cases of network interfaces having the wrong name, etc.
There is one 'gotcha': DeployStudio can only auto-rename 'by-host' preference files if those files are already in the image being restored. Because a 'thin' image does not contain them, I have to copy those preference files to the client after the image has been restored, and I have had to write my own script to do the renaming after the copy. In recognition of this issue, the DeployStudio team have announced a new workflow command that will rename such files after imaging but during the workflow, so a custom script will no longer be necessary.
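For anyone wanting to script the same rename: ByHost plists are named like `com.vendor.app.<hardware-UUID>.plist`, so files copied from the image carry the wrong UUID. A minimal sketch (directory and UUIDs are placeholders; a real run would read the hardware UUID from IOKit as shown in the comment):

```shell
#!/bin/sh
# Sketch: rename ByHost preference files to the local hardware UUID.

rename_byhost() {
  dir=$1; new_uuid=$2
  for f in "$dir"/*.plist; do
    [ -e "$f" ] || continue
    base=$(basename "$f")
    # Strip a trailing ".<36-char-UUID>.plist" to get the bare prefix.
    prefix=$(printf '%s' "$base" | sed -E 's/\.[0-9A-Fa-f-]{36}\.plist$//')
    [ "$prefix" = "$base" ] && continue   # name has no UUID; leave it alone
    mv "$f" "$dir/$prefix.$new_uuid.plist"
  done
}

# On a real Mac you would read the hardware UUID from IOKit:
#   ioreg -rd1 -c IOPlatformExpertDevice | awk -F'"' '/IOPlatformUUID/ {print $4}'
# Demo with a throwaway directory so the sketch runs anywhere:
demo=/tmp/byhost-demo
mkdir -p "$demo"
touch "$demo/com.apple.screensaver.01234567-89AB-CDEF-0123-456789ABCDEF.plist"
rename_byhost "$demo" "AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE"
```

In a workflow you would point the function at the restored user's `Library/Preferences/ByHost` folder after the copy step.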
With regards to preventing undesired imaging restores, this is going to be a trade-off. If you do not store the DeployStudio runtime credentials in the NetBoot image, or deliberately store incorrect ones, then only someone who knows the correct credentials and enters them manually will be able to do a restore. However, this rules out a totally automated solution; it is possible, for example, to set machines to auto-reboot at a specific time and auto-run a restore, as the Apple Stores presumably do overnight. Or, as already suggested, you could set your imaging server to be online only at specific times: hypothetically, a script could turn DeployStudio on late at night and turn it back off in the early morning.
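That "online only at specific times" idea could be as simple as two root cron entries. The launchd job label below is an assumption; check /Library/LaunchDaemons on your server for the actual file name DeployStudio Server installs:

```
# Hypothetical root crontab: expose the imaging service only overnight.
# The com.deploystudio.server label is an assumed example.
0 1 * * * /bin/launchctl load -w /Library/LaunchDaemons/com.deploystudio.server.plist
0 5 * * * /bin/launchctl unload -w /Library/LaunchDaemons/com.deploystudio.server.plist
```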
Strontium90: Thanks for your input. I'm afraid your approach simply wouldn't work for my company. We are an international company with Desktop Support engineers across the nation and offices around the world. We need to be able to deploy images to remote sites in a centrally managed fashion.
For example, in a scenario where a user in America needs their machine re-imaged for whatever reason, it would be impractical to send it all the way over here to be imaged, or for us to send them a replacement machine. We also lack the time, at this point, to deploy a server for imaging purposes to the various sites, as I am essentially the only person in the entire company capable of doing so. So I would be required to bend the laws of physics in order to travel and work at each of the sites, unless I could clone myself somehow... (goes off to the mad scientist lab to carry out work).
John Lockwood: You're definitely thinking more along our lines. Since I haven't got time to explore the various potential solutions for imaging, I've arranged for a third party to draw up a Statement of Work that may include the implementation of a DeployStudio solution. In the meantime, I have no choice but to assume that they won't be able to deliver (I'm so short on time that I must try to prepare for all eventualities as best I can).
However, while the replies in this thread are all helpful and enlightening (I truly appreciate them), they are somewhat divergent from the question I originally posed, which is: how do we reconfigure our network for BSDP?
This thread has raised another question, though: does DeployStudio require a different network configuration, or does it utilise the same BSDP options?
DeployStudio uses the same underlying NetBoot technology, so yes, it uses the same BSDP options.
With regards to a geographically dispersed requirement, DeployStudio supports a master/slave configuration. You would have a master set up at HQ where you create all the settings, etc., and it would auto-sync to the slaves, which act as local points from which clients can image themselves. You could set up DeployStudio slave Mac minis and ship them out to remote locations.