A common question on Yahoo! Answers revolves around the relationships between inches, pixels, points, dots and picas.
The short answer is, a pixel — the basic dot in a TV screen — is the same thing as a point, the printer’s smallest measure. That one fact dictates all the relationships in online design, from why online images are always set to 72 dpi (dots per inch) to how to convert back and forth between inches and pixels.
A pixel is a dot is a point. Remember that and everything else falls into place: Points, pixels and dots are all the same thing.
How many pixels in an inch?
In printers’ measures, there are 12 points to a pica and 6 picas in an inch. So, 12 x 6 = 72; there are 72 points in an inch. And since points and pixels are the same thing, there are 72 pixels in an inch, too.
So, if you want something on screen to be 4 inches wide: 4 x 72 = 288. An element you want to have take up 4 inches horizontally on the screen should be made 288 pixels wide.
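The conversion above is simple enough to sketch as a pair of helper functions. This is just an illustration of the article’s 72-pixels-per-inch rule (the function names are mine, not any standard API):

```python
# Convert between inches and pixels using the article's
# 72-pixels-per-inch rule (a point is a pixel is a dot).
PIXELS_PER_INCH = 72

def inches_to_pixels(inches):
    """Width in inches -> width in pixels at 72 ppi."""
    return inches * PIXELS_PER_INCH

def pixels_to_inches(pixels):
    """Width in pixels -> width in inches at 72 ppi."""
    return pixels / PIXELS_PER_INCH

print(inches_to_pixels(4))    # 288 pixels, matching the example above
print(pixels_to_inches(288))  # 4.0 inches
```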
Scale is what matters
Many other resources will tell you there are 96 pixels in an inch. And that is often true, as far as it goes.
How many pixels truly constitute one inch of screen real estate is a product of the monitor itself. A pixel is a real thing: a picture element (“pixel” is an abbreviation of that term). It is one tiny dot that the screen can make red, green or blue. When viewed from afar, each pixel blends in with all the other pixels, making a color picture.
In most CRT (cathode-ray tube) monitors, 72 pixels literally measure one inch; that is, if you line 72 CRT pixels side-by-side, they will literally measure one inch. In most LCD screens, 96 pixels measure one inch. Plasma screens also tend to have about 96 physical pixels per inch, although some plasma screens have many, many more; due to the way plasma TVs work, their physical pixels per inch count is somewhat arbitrary.
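To see why the same pixel count measures differently on different displays, divide the pixel count by the screen’s physical pixels per inch. A quick sketch using the 72 and 96 figures cited above:

```python
def physical_inches(pixels, pixels_per_inch):
    """Physical length of a run of pixels on a given display."""
    return pixels / pixels_per_inch

print(physical_inches(72, 72))  # 1.0 inch on a typical CRT
print(physical_inches(72, 96))  # 0.75 inch on a typical LCD
```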
So sure, you could use 96 pixels per inch and feel pretty comfortable about the measurement being physically true.
But what matters in graphic design is scale, not a 1:1 correspondence between what is on the screen and its real-life model.
What I mean by that is, so long as the elements in a design maintain the same scale, those elements don’t need to be a 1:1 representation on screen. A 1-inch square doesn’t need to measure exactly 1 inch on the screen, provided a 2-inch square is exactly twice as large as the 1-inch square.
For a real-world example of this, consider a kindergarten classroom.
When you were little, all the furniture in the room seemed normal size. The chairs were the right size, the tables were the right size, everything seemed the right size. The teacher, however, was big. And so was the bus, and the playground.
Today, if you go to the same kindergarten classroom, all the chairs, tables and other items seem tiny.
The reason is scale. When you were little, little things didn’t seem so small, but big things seemed huge. Now that you’re bigger, big things seem normal and small things seem tiny. Your sense of scale changed as you grew up, keeping pace with you as the reference point.
The same is true of graphic design. So long as we fix the size of a given element, and make the size of all other elements in scale with that fixed size, then the design looks right.
Why are Web images 72 dpi?
You can think of resolution as the precision of the image — how sharp and detailed it is. (That’s not 100 percent accurate, but for our purposes, it’s close enough to right to be right.) The more resolution an image has, the sharper and clearer it generally appears in print.
The operative phrase in the preceding sentence is “in print.”
Different kinds of paper can reproduce images within certain ranges of resolution. For example, newsprint is poor quality, so most newspapers print images at about 120 dpi (actually, they print halftones, not images; but I digress). Your home printer generally prints at about 300 dpi for a word processing document, which is more than sufficient to ensure that letters are sharp; your home photo printer probably produces images at 600 dpi, which ensures sharp edges in small photo reproductions. Some magazines may print at about 900 dpi, and lithographs on specialty stock may be printed at 1,200 dpi or more.
But all monitors — CRT, LCD or plasma — use the same method to produce an image: They illuminate tiny, colored dots to produce a picture. And therefore, they all have the same resolution: 72 dpi.
Why 72 dpi? Because again, to maintain the basic units of scale between print and monitor, a point is a pixel is a dot, and there are 72 points to an inch — which means there are 72 dots to an inch. If we want something that measures one inch wide to be one inch wide on the screen, we need to set its resolution to 72 dpi.
Here are some examples of the effect of different resolutions on the same image. First, a 250-pixel square at 72 dpi.
Let’s look at the same image at 150 dpi. It should appear on screen slightly more than twice the size of the square above.
And here’s the same square at 36 dpi. This time, it’s going to appear to be about half the size of the first square.
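The size changes described above follow from the dpi ratio: an image of a fixed physical size, saved at a higher dpi, contains more pixels, and the screen shows every pixel at the same size. A quick sketch, assuming the article’s 72-ppi screen:

```python
SCREEN_PPI = 72  # the article's screen resolution

def on_screen_scale(image_dpi):
    """How much larger (or smaller) than actual size an image appears
    on a 72-ppi screen when saved at the given dpi."""
    return image_dpi / SCREEN_PPI

print(on_screen_scale(72))   # 1.0   -> actual size
print(on_screen_scale(150))  # ~2.08 -> slightly more than twice the size
print(on_screen_scale(36))   # 0.5   -> half the size
```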
Why doesn’t everything measure inch for inch on the screen?
As noted before, not all screens render a 72 dpi image at exactly one inch wide. That’s because screens can be set to render at different widths and heights.
In computer terms, the width and height setting of the screen is called its resolution, but that usage is not technically accurate. Changing the size of the screen isn’t really changing its resolution; it’s just changing the size of the things the screen displays — the scale of the items — and therefore either increasing or decreasing the number of things that can be shown, sort of the same way you can show a lot more information on a broadsheet newspaper page than you can on a Post-it note. Computer geeks call different screen sizes “resolution” because it’s convenient to do so, but it’s a misnomer.
The screen only comes with X number of horizontal dots and Y number of vertical dots. If you change the width and height the screen is going to show, that doesn’t increase or decrease the number of horizontal or vertical dots in the screen; it has just as many as it had when it came out of the box, regardless of your settings.
Further, unlike paper, you can’t move the dots any closer or further apart than they are on the screen.
For example, in printing, if the paper is good enough, I can print many tiny dots very close together in order to get a very sharp picture; the better the paper, the smaller the dots I can reproduce and the closer together they can be. Because the number of dots / points in an inch is the true measure of resolution, the more dots I can fit in an inch, the sharper the picture will be in print.
But with a screen, the dots are always in the same place, the same size, and can’t be moved closer together. So, the way a screen manages to handle having its width and height changed is to, simply put, change scale.
For example, suppose a specific model of computer monitor happens to render items 1 for 1 at 800 x 600; that is, if something is 1 square inch, you can put a ruler up to the screen and it will be 1 inch square, if the monitor is set for 800 x 600 pixels.
Suppose we increase the screen “resolution” to 1024 x 768 pixels. The monitor can’t add dots in order to increase the amount of space it can show; it has as many dots as it started with, but it has to show more. To compensate for the added space it needs to show, the screen simply reduces the size of everything it is showing by a proportional amount. In our example, the width grows by a factor of 1024 / 800 = 1.28, so the screen would render our 1-inch square as a 0.78-inch square (1 / 1.28 ≈ 0.78) — that is, it simply shaves about 0.22 inch off the square.
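The arithmetic above generalizes: on-screen size shrinks by the ratio of the old width to the new width. A sketch using the example’s resolutions (the function and parameter names are mine):

```python
def rendered_size(true_inches, native_width, set_width):
    """On-screen size of an element when a display that renders 1:1 at
    native_width pixels is set to show set_width pixels across instead."""
    return true_inches * native_width / set_width

print(rendered_size(1, 800, 800))   # 1.0 inch at the native setting
print(rendered_size(1, 800, 1024))  # 0.78125 inch at 1024 x 768
```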
The screen can get away with doing that because so long as it reduces the size of everything on screen by the same percentage, everything remains in scale — even though everything on screen is actually smaller than it truly measures, in relation to everything else on screen, it’s in proportion, so it looks correct.