While the traditional photographic print or negative is a 'real' tactile object, the digital image is just a series of ones and zeros stored on computer media. This document explains how the digital data is used to display or print a digital image.

Have you ever seen one?
Most probably not. A digital image is simply a string of '0's and '1's on your hard disk, like any other data file. What we see is only a representation of that digital image, shown to us by the output device of our choice, and output devices are almost all analogue.

Prints and computer monitors are analogue devices (with the exception of a few LCD devices that are truly digital), so we are not seeing the digital image itself but a "representation" of it!

This is a key point in Digital Imaging.

Remember that your digital image is a 'virtual image' that does not exist in the real world, and many imaging issues will start to make sense.

How do I work out the size of a digital image?
If it doesn't exist in the real world, it can't have a size in the real world. However, we do know how many '0's and '1's it is made of and how many pixels those represent. From this we can work out its file size and its virtual size, but not its real-world size.
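As a minimal sketch of this point, the file size of an image follows directly from its pixel count (assuming, for illustration, an uncompressed 24-bit RGB image, i.e. 3 bytes per pixel), yet nothing in that calculation says anything about a real-world size:

```python
def file_size_bytes(width_px, height_px, bytes_per_pixel=3):
    """Uncompressed file size from pixel dimensions alone.

    Assumes 24-bit RGB (3 bytes per pixel); real files vary with
    bit depth and compression. Note that no physical size appears
    anywhere in this calculation.
    """
    return width_px * height_px * bytes_per_pixel

# A 1000 x 800 pixel image at 24-bit colour:
size = file_size_bytes(1000, 800)
print(size)                  # 2400000 bytes
print(size / (1024 * 1024))  # roughly 2.3 MB
```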

What is a pixel?
A pixel (short for picture element) is the smallest unit of a digital image. Pixels are arranged on a horizontal and vertical grid and are normally only visible when the image is magnified.

How big is a pixel then?
Again, pixels don't have any 'size' as such in a digital image. When we make a scan, the input device separates (or digitises) the 'real world' space into a number of pixels. Then, when we output the digital image file, those pixels are mapped to a certain 'real world' size. A pixel never has a size until it is represented by an output device: a printer or a screen.

How big will it be when it is output?
The real world size of your image will depend on the number of pixels in it and how many are used within each spatial unit of the output device. Generally speaking, the number of pixels divided by the resolution of the output device will give you the 'real world' size of your image.

The 'resolution' and 'real world' size are recorded in the header of your digital file to give your image optimisation program a guide to what size it might take in a 'real world' context.

For any given digital file there is a range of possible output sizes and qualities. You could output the file so that it appears very small at high quality, or very large at low quality, but it will still be the same digital image with the same file size.

But I always scan at 300ppi, isn't that good quality?
Quoting a scanning resolution as evidence of quality is about as much use as giving someone your weight in 'pounds per square inch' with no indication of the size of your feet. It is the same with digital images: unless we know the size of the original being scanned at 300ppi, we cannot work out how big the resulting image will be, either as a digital image or as a 'real world' image (the latter, of course, depends entirely on the resolution of the printer or screen).
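To illustrate why the scanning resolution alone tells us nothing, here is a hedged sketch: the same 300ppi setting yields wildly different pixel dimensions depending on the size of the original. (The 6×4 inch print and the approximate 1.4×0.9 inch 35mm frame are assumed examples, not figures from the text.)

```python
def scanned_pixels(width_in, height_in, scan_ppi):
    """Pixel dimensions produced by scanning an original.

    The result depends on BOTH the scanning resolution and the
    physical size of the original being scanned.
    """
    return round(width_in * scan_ppi), round(height_in * scan_ppi)

# The same 300ppi setting, two very different originals:
print(scanned_pixels(6, 4, 300))      # a 6x4" print -> (1800, 1200)
print(scanned_pixels(1.4, 0.9, 300))  # approx. 35mm frame -> (420, 270)
```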

So what is the best way of describing the size of a digital image?
Don't mention "dpi", "ppi" or "lpi". Just give the pixel dimensions of the image. From these we can work out both its file size and how large it will be for any given use.

Give me an example?
I have an image that is 1000 × 800 pixels.

Using this formula:
No. of pixels ÷ Output resolution = Output size

• On an old 72ppi monitor it will view at 13.9 × 11.1 inches
• On a new 96ppi monitor it will view at 10.4 × 8.3 inches
• On an average inkjet (150lpi) it will print at 6.7 × 5.3 inches
• On a high quality printer (250lpi) it will print at 4 × 3.2 inches
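The figures above can be reproduced with a short sketch of the "pixels ÷ output resolution = output size" formula:

```python
def output_size_inches(width_px, height_px, resolution):
    """Real-world output size: pixel dimensions / output resolution."""
    return round(width_px / resolution, 1), round(height_px / resolution, 1)

# The same 1000 x 800 pixel image on four different output devices:
for device, res in [("72ppi monitor", 72), ("96ppi monitor", 96),
                    ("150lpi inkjet", 150), ("250lpi printer", 250)]:
    w, h = output_size_inches(1000, 800, res)
    print(f"{device}: {w} x {h} inches")
```

One image, one file size, four different real-world sizes: the size belongs to the output device, not to the digital image.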

In practice you would need to scan at somewhat higher resolutions than these figures suggest to create prints of these sizes. It is normal to multiply the halftone screen frequency by a 'Halftone Factor' (or 'H Factor') of about 1.5× to 2.0×, giving a surplus of data and avoiding 1:1 mapping of pixels to halftone dots.
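The H Factor rule of thumb described above can be sketched as follows; the 1.5 factor and the 8×10 inch print are illustrative assumptions, not prescriptions:

```python
def pixels_needed(print_inches, screen_lpi, h_factor=1.5):
    """Pixels required along one dimension for a halftone print.

    Rule of thumb: print size (inches) x screen frequency (lpi)
    x a 'Halftone Factor' of roughly 1.5 to 2.0.
    """
    return round(print_inches * screen_lpi * h_factor)

# An 8 x 10 inch print on a 150lpi press with an H Factor of 1.5:
print(pixels_needed(8, 150))   # 1800 pixels
print(pixels_needed(10, 150))  # 2250 pixels
```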
