Programs like SiteSucker (I've never actually used that one, but there are plenty of similar tools, some of them freeware) are good for static content. Many sites, though, will detect a 'bot like that and shut you down, or simply aren't easy to navigate with such software, especially if you want the code behind the pages: you'll get the rendered HTML, but not the MySQL/PHP or other server-side scripting that produced it. So it really depends on what you want to download. Some tools will harvest all the PDFs on a site (or all the images, etc.), provided the site can be navigated by a 'bot; the sketch below shows the basic idea. Experiment and see what works for you. How well any product performs really depends on the site and on your needs.
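
If you're comfortable with a little scripting, here's a minimal sketch of that "grab all the PDFs" idea, assuming Python with the requests and beautifulsoup4 packages installed. The URL and folder name are just placeholders, and it only looks at links on a single page, so treat it as a starting point rather than a finished tool (and check the site's terms before running anything like this).

```python
# Minimal sketch: fetch one page and save every PDF it links to.
# Assumes `requests` and `beautifulsoup4` are installed and that the
# site permits this kind of automated access.
import os
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START_URL = "https://example.com/reports/"  # hypothetical page that lists PDFs

resp = requests.get(START_URL, timeout=30)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
os.makedirs("pdfs", exist_ok=True)

for link in soup.find_all("a", href=True):
    url = urljoin(START_URL, link["href"])  # resolve relative links
    if not urlparse(url).path.lower().endswith(".pdf"):
        continue
    filename = os.path.join("pdfs", os.path.basename(urlparse(url).path))
    pdf = requests.get(url, timeout=60)
    if pdf.ok:
        with open(filename, "wb") as f:
            f.write(pdf.content)
        print("saved", filename)
```

To crawl more than one page you'd add a queue of links to visit, but at that point a ready-made tool may serve you better.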