Apple publishes this sparse document http://support.apple.com/kb/HT4187 but I am going to try to convince you not to do this.
Unless you are looking to employ a self-service model in which the end user is imaging her machine, there really is no need to go through all this effort to provide NetInstall across the entire organization. And if self-service is the goal, then thin imaging is probably a better approach.
That being said, in most environments, the imaging process is performed once and then the machine is delivered to the end user. Frequent re-imaging, in my experience, is rare. Yes, there are the test labs where you need to reset to a default state, but I've found Deep Freeze to be a better (and quicker) solution for that.
Next is the threat of accidental imaging. If you provide access to the imaging server at all times, an end user may accidentally boot into the imaging environment, with a chance of complete data loss. Yes, this is rare, but I actually had a user leave a binder on a keyboard once and the machine booted straight to the NetInstall window. Luckily, we had not configured imaging for unattended mode, so no damage was done. But yes, I did panic.
So I would suggest leaving NetInstall isolated to the subnet where the IT lab and the server live. After all, the imaging process is most often handled by IT anyway. If this is your model, then image in the lab and deliver the finished machine to the end user. You avoid unnecessary modification to the switching infrastructure for a process that should not be run that frequently.
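If you keep the server on the lab subnet, a technician there doesn't even need to hold the N key at boot: a Mac can be pointed at a NetInstall image from the command line with `bless`. A minimal sketch, assuming a hypothetical server address of 192.168.10.5 (substitute your own) and that the machine is on the same subnet as the server:

```shell
# One-time netboot: set the Mac to boot from the NetInstall server
# on the next restart. bsdp:// with a broadcast address lets the
# client discover any BSDP/NetBoot server on the local subnet;
# giving the server's IP targets it directly.
sudo bless --netboot --server bsdp://192.168.10.5

# Then restart to begin imaging:
sudo shutdown -r now
```

Because BSDP discovery is broadcast-based, clients on other subnets simply won't see the server unless you add relay/helper configuration on your switches, which is exactly the modification this approach lets you skip.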
R-
Apple Consultants Network
Apple Professional Services
Author "Mavericks Server – Foundation Services" :: Exclusively available in Apple's iBooks Store