Here's the scoop.
Early Macs used something called WYSIWYG, an acronym for "What You See Is What You Get".
Early Macs had a screen resolution of 72 pixels per inch, and the early Apple ImageWriter (a dot-matrix printer) printed at a matching 72 DPI. This meant that whatever you could see on your black-and-white Macintosh would be reproduced faithfully on your printer. Verbatim. This set (in a manner of speaking) 72 DPI as the standard for on-screen resolution.
What did this mean? Well, now that images were showing up on screens at exactly 72 DPI, an inch on screen corresponded to a real inch on paper, so it became normal to create, edit, and save images at this resolution.
By today's standards, 72 DPI would be horribly low quality, but think of how things looked in the mid-'80s. I remember seeing MacPaint graphics in the San Jose Mercury News - they were blocky, but they looked edgy because they were computer-generated.
Fast forward to the desktop publishing revolution, and the advent of the Apple LaserWriter. This printer was capable of printing at a whopping 300 DPI! (WOW!!!) It allowed users to print text and graphics at MUCH higher quality than the old ImageWriter, making that machine instantly obsolete. Kinda funny how tech works. So hard on the pocketbook!
Well, the LaserWriter made it possible to print at 300 DPI, where the old printers were at 72 DPI. This meant that you could have an image that measured 2 inches square at a resolution of 72 DPI, and it would print out at exactly 2" by 2" on the ImageWriter, but it would come out at less than half an inch square on the LaserWriter. Makes sense, right? The dots on the LaserWriter are much smaller, so a dot-for-dot printout comes out smaller too.
Now technically, you could print the image on the LaserWriter at 2" by 2" just by telling the software (like Photoshop) to do just that. You'd go into your trusty "Image Size" dialog in Photoshop (whatever it looked like in those days) and tell the computer to force the image to be 2" square when it prints. Assuming that your computer and your printer are both in agreement on what an inch is (most are by default), you should be in hog heaven. The printer would just stack several of its tiny dots into each image pixel (a bit more than four across, at 300 DPI) and give you a printout that is the same size as the old ImageWriter's printout. Both would look similar, but the LaserWriter's would be cleaner because it is a laser printer, and not a dot matrix.
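Here's the arithmetic from that example as a minimal Python sketch (the formula is just inches = pixels ÷ DPI; the numbers are the ones above):

    # A 2" x 2" image saved at 72 DPI is 144 x 144 pixels.
    pixels = 2 * 72  # 144 pixels per side

    # Printed dot-for-dot, the physical size depends on the printer's DPI.
    print(pixels / 72)    # ImageWriter: 2.0" per side, same as on screen
    print(pixels / 300)   # LaserWriter: 0.48" per side, under half an inch

    # Forcing the LaserWriter printout back up to 2" per side means each
    # image pixel gets built from several printer dots.
    print(300 / 72)       # ~4.17 printer dots per image pixel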
So let's apply this to your current equipment. Most digital cameras tag their images at 72 DPI by default. So when you load them into iPhoto, Aperture, or Photoshop, they show up as 72 DPI images. No matter what the "megapixel" size is, you're still dealing with 72 pixels per inch. This makes an 8 megapixel image appear physically larger on screen than a 5 megapixel image, right? Because it has more pixels making up its image.
Your 5 megapixel image is probably 2592 x 1944 pixels in size. My Canon 20D shoots at 8.2 megapixels, and its images are 3504 x 2336 pixels. This means that if I print each of these images at 3" by 5", the 8.2 megapixel image will have more pixels crammed into each printed inch. Therefore, it will look crisper and clearer.
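You can check that math yourself with a couple of lines of Python (pixel counts and print size from the paragraph above):

    # Pixels available per printed inch along the long (5") edge of a 3" x 5" print.
    print(2592 / 5)   # 5 MP image:   ~518 pixels per inch
    print(3504 / 5)   # 8.2 MP image: ~701 pixels per inch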
This also helps you figure out how big you can print the images. Typically, a 5 megapixel image prints pretty well at 8" x 10". An 8.2 megapixel image prints well at up to 14 inches wide (in landscape orientation). This rule doesn't make up for image quality or focus problems, but I have found it to be a good general guide.
In professional printing, you'll see a LOT of 600 DPI, 1200 DPI (magazines), and sometimes 2400 DPI (for high-end art books). These numbers must be taken into consideration when laying out such publications. My 8.2 megapixel image can be no larger than about three inches wide if I want to print it at a full 1200 DPI.
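As a sketch, both the rule of thumb above and the 1200 DPI limit fall out of the same division, maximum print width = pixels ÷ target DPI:

    def max_print_width(pixels_wide, target_dpi):
        """Widest a print can be before dropping below the target resolution."""
        return pixels_wide / target_dpi

    # The rule of thumb works out to roughly 250 pixels per printed inch:
    print(2592 / 10)                      # 5 MP at 8" x 10":   ~259 PPI
    print(3504 / 14)                      # 8.2 MP at 14" wide: ~250 PPI

    # At a full 1200 DPI, the 8.2 MP image tops out under 3" wide:
    print(max_print_width(3504, 1200))    # ~2.92 inches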
This really does not mean much to those of us who use inkjet printers, as they tend to blend colors at such high resolutions. I often print 5 megapixel images at 5" by 7", and you cannot really see the difference; the images look great framed and sitting on a hearth or bookshelf. How often do you pick up such photos and look closely enough to see each individual dot? That being said, your results may vary.
And then there's linescreen - a term used in print to describe the coarseness of the dots in images. If you pick up a black-and-white newspaper, you'll see that the images are made up of small dots. These dots are not pixels; they're halftone dots. A halftone is the term for the way an image is converted to tiny dots, allowing us to print shades of grey using only black ink on white paper.
If you look closely at those dots, you'll see they are not square. Most are round, or have rounded corners. This is partly because ink spreads just a bit on paper (especially newsprint - it's called dot gain), but it is also because the human eye perceives images more easily when they aren't built from squares.
Well, high-DPI printers (1200 DPI or better) are pretty good at printing halftones. Lower-DPI printers are not, because every halftone dot has to be built out of printer dots, and coarse printer dots get in the way of forming those little round shapes. A very high-resolution printer (like 2400 DPI) can make very small, very precisely formed shapes, so it is capable of building very high-grade halftones.
One more thing - a halftone was traditionally made using a screen on a camera, which led to the terms "linescreen" and "LPI" (lines per inch). LPI and DPI are totally different. LPI is a measurement of the density of dots in a halftone; DPI is a measurement of a printer, like an inkjet or a laser printer. Generally speaking, the two never meet unless you are using a laser printer or inkjet to create "plates" (printed master pages) for use on a high-end offset printing press.
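There is one standard prepress rule of thumb that ties the two together (it isn't from my explanation above, so treat it as a rough guide): the number of grey shades a printer can build in one halftone cell is roughly (DPI ÷ LPI)² + 1. It shows why high-DPI devices make better halftones:

    def grey_levels(printer_dpi, linescreen_lpi):
        """Approximate grey shades per halftone cell: (DPI / LPI)^2 + 1."""
        return int((printer_dpi / linescreen_lpi) ** 2) + 1

    # A 300 DPI printer on a coarse newspaper-style 85 LPI screen:
    print(grey_levels(300, 85))     # ~13 shades: visibly banded
    # A 2400 DPI imagesetter on a fine 150 LPI magazine screen:
    print(grey_levels(2400, 150))   # 257 shades: smoother than the eye needs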
Offset printing presses use halftones to build their images because they lay down ink, and ink on paper is a less precise art than most inkjet or laser printing technology.
It's REALLY late, and I'm rambling. I'll let you all go now.
I hope I helped!
--------------------S