Consumer Digital Camera Definitions

What is a digital camera?

A camera that stores images digitally rather than recording them on film. Once a digital picture has been taken, it can be downloaded to a computer, manipulated with a graphics program and printed. Digital photos are limited by the amount of memory in the camera, the optical resolution of the digitizing mechanism and, finally, the resolution of the final output device.

The big advantage of digital cameras is that making photos is both inexpensive and fast because there is no film processing. Interestingly, one of the biggest boosters of digital photography is Kodak, the largest producer of film.

How does a digital camera work? 

What is a CCD?

Most digital cameras use CCDs to capture images. Although the basic principle is the same as for a film camera, the inner workings of a digital camera are quite different: the image is captured either by a charge-coupled device (CCD) or by a CMOS (complementary metal-oxide semiconductor) sensor. Each sensor element converts light into a voltage proportional to its brightness; the brighter the light, the higher the voltage and the brighter the resulting pixel. That voltage is passed to an analog-to-digital converter (ADC), which translates the sensor's fluctuations into discrete binary code. The digital output of the ADC is sent to a digital signal processor (DSP), which adjusts contrast and detail and compresses the image before sending it to the storage medium. The more sensor elements there are, the higher the resolution and the greater the detail that can be captured.
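To make that chain concrete, here is a minimal Python sketch of the path from photosite voltage to stored pixel value; the voltage range, 8-bit ADC and contrast factor are illustrative assumptions, not figures from any particular camera.

# Minimal sketch of the capture chain described above:
# photosite voltage -> ADC quantisation -> simple DSP step.
# The voltage range, 8-bit ADC and contrast gain are assumptions
# for illustration, not figures from any real camera.

def adc(voltage, v_max=1.0, bits=8):
    """Quantise an analog voltage (0..v_max) into a discrete code."""
    levels = 2 ** bits - 1
    code = round(min(max(voltage, 0.0), v_max) / v_max * levels)
    return code

def dsp_contrast(code, gain=1.2, bits=8):
    """Crude contrast adjustment around mid-grey, as the DSP stage might do."""
    mid = (2 ** bits - 1) / 2
    adjusted = mid + (code - mid) * gain
    return int(min(max(adjusted, 0), 2 ** bits - 1))

# Brighter light -> higher voltage -> higher pixel value.
for brightness in (0.1, 0.5, 0.9):      # fraction of full-scale light
    voltage = brightness * 1.0          # voltage proportional to brightness
    raw = adc(voltage)
    print(brightness, raw, dsp_contrast(raw))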

The CCD is the technology at the heart of most digital cameras, and replaces both the shutter and film found in conventional cameras. Its origins lie in the 1960s, when the hunt was on for inexpensive, mass-producible memory solutions. Its eventual application as an image-capture device hadn't even occurred to the scientists who initially worked with the technology.

At Bell Labs in 1969, Willard Boyle and George Smith came up with the CCD as a way to store data. The first imaging CCD, with a format of 100x100 pixels, was created in 1974 by Fairchild Electronics. By the following year the device was being used in TV cameras for commercial broadcasts and soon became commonplace in telescopes and medical imaging systems. It was some time later before the CCD became part of the high-street technology that is now the digital camera.

It works like an electronic version of the human eye. Each CCD consists of millions of cells known as photosites or photodiodes. These are essentially light-collecting wells that convert optical information into an electric charge. When light particles known as photons enter the silicon body of the photosite, they provide enough energy for negatively charged electrons to be released. The more light that enters the photosite, the more free electrons are available. Each photosite has an electrical contact attached to it, and when a voltage is applied to this contact, the silicon below the photosite becomes receptive to the freed electrons and acts as a container for them. Thus each photosite has a particular charge associated with it - the greater the charge, the brighter the intensity of the associated pixel.
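A toy model of a single photosite, under assumed quantum-efficiency and well-capacity figures (neither number comes from the article), shows the idea: more photons free more electrons, up to the point where the well saturates.

# Toy model of one photosite: photons free electrons and the collected
# charge becomes the pixel intensity. The quantum efficiency and well
# capacity values below are illustrative assumptions only.

def photosite_charge(photons, quantum_efficiency=0.4, well_capacity=20000):
    """Electrons collected by one photosite during an exposure."""
    electrons = photons * quantum_efficiency
    return min(electrons, well_capacity)   # the well can only hold so much

def to_pixel(electrons, well_capacity=20000, levels=255):
    """Map the collected charge onto a pixel brightness value."""
    return round(electrons / well_capacity * levels)

for photons in (1000, 10000, 60000):       # more light -> more free electrons
    e = photosite_charge(photons)
    print(photons, e, to_pixel(e))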

The next stage in the process passes this charge to what's known as a read-out register. As the charges in a row enter and then exit the read-out register they are deleted and, because the charge in each row is "coupled" to the next, removing one row has the effect of dragging the following row in behind it. The signals are then passed - as free of noise as possible - to an amplifier and from there to the ADC.
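The row-by-row shuffle can be sketched very roughly in code; the 3x3 grid of charge values is made up, and real read-out hardware is of course analog rather than a Python list.

# Sketch of the row-by-row read-out described above: each row of charges
# shifts into the read-out register, is streamed out, and the remaining
# rows move up behind it. The 3x3 charge grid is an invented example.

charges = [
    [10, 20, 30],
    [40, 50, 60],
    [70, 80, 90],
]

def read_out(sensor):
    """Shift each row into the read-out register and stream the values out."""
    stream = []
    while sensor:
        register = sensor.pop(0)     # next row moves into the read-out register
        stream.extend(register)      # its values leave the register one by one
        # removing the row drags every remaining row one step closer, which is
        # roughly what charge coupling achieves in the real device
    return stream

print(read_out(charges))   # -> [10, 20, 30, 40, 50, 60, 70, 80, 90]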

The photosites on a CCD respond only to light intensity, not to color. Color is added to the image by means of red, green and blue filters placed over the pixels. Because the CCD mimics the human eye, the ratio of green filters to red and blue is two to one; the human eye is most sensitive to yellow-green light. Since each pixel can only represent one color directly, the true color is made by averaging the light intensity of the pixels around it - a process known as color interpolation.
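A small sketch of the idea follows: each photosite records only the intensity behind its own filter, and the two missing colors at each position are estimated by averaging same-colored neighbors. The 4x4 mosaic values and the naive averaging are purely illustrative, not any manufacturer's actual demosaicing algorithm.

import numpy as np

# Illustrative 4x4 sensor read-out behind an RGGB filter mosaic
# (twice as many green sites as red or blue, as described above).
# The values and the naive averaging are invented for demonstration.
mosaic = np.array([
    [120,  90, 130,  85],
    [ 60, 200,  70, 210],
    [125,  95, 135,  80],
    [ 65, 205,  75, 215],
], dtype=float)

# Color filter over each photosite: R G R G / G B G B, repeating.
pattern = np.array([
    ['R', 'G', 'R', 'G'],
    ['G', 'B', 'G', 'B'],
    ['R', 'G', 'R', 'G'],
    ['G', 'B', 'G', 'B'],
])

def interpolate(channel):
    """Estimate a full-resolution channel by averaging measured neighbours."""
    h, w = mosaic.shape
    out = np.zeros_like(mosaic)
    for y in range(h):
        for x in range(w):
            if pattern[y, x] == channel:
                out[y, x] = mosaic[y, x]          # measured directly here
            else:
                # average whatever same-colour neighbours exist around (y, x)
                vals = [mosaic[j, i]
                        for j in range(max(0, y - 1), min(h, y + 2))
                        for i in range(max(0, x - 1), min(w, x + 2))
                        if pattern[j, i] == channel]
                out[y, x] = sum(vals) / len(vals)
    return out

red, green, blue = (interpolate(c) for c in 'RGB')
print(np.dstack([red, green, blue]).shape)   # (4, 4, 3) full-colour estimate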

Recognizing a glass ceiling in the conventional charge-coupled device (CCD) design, Fujifilm has developed a radically different CCD with larger, octagonal photosites set at 45-degree angles in place of the standard square arrangement. This new layout is aimed at avoiding the signal noise that has previously limited the density of photosites on a CCD, providing improved color reproduction, a wider dynamic range and increased sensitivity - all attributes that result in sharper, more colorful digital images.

What is CMOS?

Complementary metal-oxide semiconductor (CMOS) sensors emerged in 1998 as an alternative image-capture technology to CCDs. The CMOS manufacturing processes are the same as those used to produce millions of processors and memory chips worldwide. Because these are established, high-yield techniques with an existing infrastructure already in place, CMOS chips are significantly less expensive to fabricate than specialist CCDs. Another advantage is that they have significantly lower power requirements than CCDs. Furthermore, while a CCD has the single function of registering where light falls on each of its hundreds of thousands of sampling points, a CMOS chip can be loaded with a host of other tasks - such as analog-to-digital conversion, signal processing, white balance and camera control handling, and more. It's also possible to increase CMOS density and bit depth without bumping up the cost.
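Of the extra tasks listed, white balance is the easiest to illustrate. The sketch below applies a generic "grey-world" correction - scaling each channel so the scene averages out to neutral grey - purely as an example of the kind of processing a CMOS chip could take on; it is not the algorithm any particular sensor actually runs.

import numpy as np

# Grey-world white balance: scale each colour channel so that the image's
# average colour comes out neutral. A generic textbook method, used here
# only to illustrate the sort of task a CMOS sensor could handle on-chip.
def grey_world(image):
    """image: float array of shape (H, W, 3) holding RGB values."""
    channel_means = image.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means    # boost the dimmer channels
    return np.clip(image * gains, 0, 255)

# A small synthetic image with a warm (reddish) cast; the values are made up.
warm = np.full((2, 2, 3), (180.0, 128.0, 90.0))
print(grey_world(warm).reshape(-1, 3)[0])   # roughly equal R, G and B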

For these and other reasons, many industry analysts believe that eventually, almost all entry-level digital cameras will be CMOS-based and that only midrange and high-end units will use CCDs. Problems remain to be solved - such as noisy images and an inability to capture motion correctly - and at the start of the new millennium CMOS technology clearly had a way to go before reaching parity with CCD technology.

However, its prospects were given a major boost in late 2000 when Silicon Valley-based Foveon Inc. announced its revolutionary X3 technology and the manufacture of a CMOS image sensor with about three times the resolution of any previously announced photographic CMOS image sensor, and more than 50 times the resolution of the CMOS image sensors most commonly used in the low-end consumer digital cameras of the time.

Previously, CMOS image sensors had been manufactured using 0.35 or 0.50 micron process technology, and it had generally been accepted that 0.25 micron represented the next round of product offerings. Foveon's 16.8 million pixel (4096x4096) sensor is the first image sensor of any size to be manufactured with 0.18 micron process technology - a proprietary analog CMOS fabrication process developed in collaboration with National Semiconductor Corporation - and represents a two-generation leap ahead of the rest of the CMOS imager industry. The 0.18 micron process allows more pixels to be packed into a given physical area, resulting in a higher-resolution sensor. Transistors made with the 0.18 micron process are also smaller, so they take up less of the sensor area, leaving more of it available for light detection. This space efficiency enables sensor designs with smarter pixels that can provide new capabilities during the exposure without sacrificing light sensitivity.

Comprising nearly 70 million transistors, the 4096x4096 sensor measures 22mm x 22mm and has an estimated ISO speed of 100 with a dynamic range of 10 stops. In the 18 months following its release the sensor is expected to be seen in products for the high-quality professional markets - including professional cameras, film scanners, medical imaging, document scanning and museum archiving. In the longer term, it is anticipated that the sensor's underlying technology will migrate down to the larger, consumer markets.
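A little arithmetic on the figures quoted above puts them in perspective; the pixel-pitch and transistors-per-pixel numbers below are simply derived from the stated 4096x4096 resolution, 22mm sensor size and 70 million transistor count, assuming the whole area is covered by photosites.

# Rough arithmetic on the Foveon figures quoted above. The per-pixel numbers
# assume the 22mm span is covered entirely by photosites (a simplification).
width = height = 4096
pixels = width * height
print(pixels)                          # 16,777,216 -> the "16.8 million pixel" figure

pitch_um = 22_000 / width              # 22 mm expressed in micrometres
print(round(pitch_um, 2))              # ~5.37 micron pixel pitch

print(round(70_000_000 / pixels, 1))   # ~4.2 transistors per pixel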

Picture quality

The picture quality of a digital camera depends on several factors, including the optical quality of the lens and image-capture chip, compression algorithms, and other components. However, the most important determinant of image quality is the resolution of the CCD. The more elements, the higher the resolution, and thus the greater the detail that can be captured.

In 1997 the typical native resolution of consumer digital cameras was 640x480 pixels. A year later, as manufacturing techniques improved and the technology progressed, the emergence of "megapixel" cameras meant that the same money could buy a 1024x768 or even a 1280x960 model. By early 1999 resolutions were as high as 1536x1024, and before the middle of that year the two-megapixel barrier had been breached with the arrival of 2.3 million pixel CCDs supporting resolutions of 1800x1200. A year later the unrelenting march of the megapixels saw the three-megapixel barrier breached, with the advent of 3.34 megapixel CCDs capable of delivering a maximum image size of 2048x1536 pixels. The first consumer 4 megapixel camera appeared in mid-2001, boasting a maximum image size of 2240x1680 pixels, and 2003 brought 5 megapixel and then 8 megapixel consumer models.
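For reference, the raw pixel counts behind these resolutions are easily computed; note that the CCD totals quoted by manufacturers include photosites that do not contribute to the final image, so the delivered image sizes come out a little lower than the headline megapixel figures.

# Pixel counts for the image sizes mentioned above. Manufacturers' CCD totals
# include photosites that never reach the final image, so these delivered
# figures sit slightly below the marketing "megapixel" labels.
resolutions = [
    (640, 480), (1024, 768), (1280, 960),
    (1536, 1024), (1800, 1200), (2048, 1536), (2240, 1680),
]
for w, h in resolutions:
    print(f"{w}x{h}: {w * h / 1_000_000:.2f} megapixels")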

At this level, raw resolution is arguably little more than a numbers game and secondary to a digital camera's other quality factors. One of these - and almost as important to the quality of the final image as the amount of information the CCD is capable of capturing in the first place - is how cleanly the information is passed to the ADC.

The quality of a CCD's color management process is another important factor, and one of the prime reasons why cameras with the same pixel-count CCD can produce different output. This process should not be confused with the interpolation method used by some manufacturers to achieve bitmap files with a resolution greater than their true optical resolution (the resolution of the CCD array). That method - more accurately referred to as resampling - adds pixels using information already present; although it increases the effective resolution, it does so at the cost of reduced sharpness and contrast. Standard interpolation simply copies and pastes existing pixels to create larger images. Some cameras instead employ a software enlargement technique, claimed to produce better results than conventional interpolation, which quantifies pixels, qualifies them according to the common traits they possess, and then copies and pastes pixels where the enlargement software judges they are needed to form lines, shapes, patterns and contours.
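The difference between simply copying pixels and averaging them can be sketched as follows; neither method below is any particular manufacturer's algorithm, and the 2x2 test image is made up.

import numpy as np

# Simplest possible illustration of resampling: new pixels are created purely
# from information already present, so the apparent resolution rises but no
# real detail is added. Both methods here are generic, not any camera maker's.

def nearest_neighbour(image, factor):
    """Enlarge by copying each pixel factor x factor times."""
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

def bilinear(image, factor):
    """Enlarge by linearly averaging between the original pixel values."""
    h, w = image.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    out = np.empty((ys.size, xs.size))
    for j, y in enumerate(ys):
        y0, y1 = int(np.floor(y)), min(int(np.floor(y)) + 1, h - 1)
        for i, x in enumerate(xs):
            x0, x1 = int(np.floor(x)), min(int(np.floor(x)) + 1, w - 1)
            fy, fx = y - y0, x - x0
            top = image[y0, x0] * (1 - fx) + image[y0, x1] * fx
            bottom = image[y1, x0] * (1 - fx) + image[y1, x1] * fx
            out[j, i] = top * (1 - fy) + bottom * fy
    return out

tiny = np.array([[0.0, 255.0], [255.0, 0.0]])    # a 2x2 checkerboard
print(nearest_neighbour(tiny, 2))                # hard edges preserved
print(bilinear(tiny, 2).round(1))                # edges smoothed: lower contrast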

Another limiting factor is the image compression that many digital cameras use to fit more images into a given amount of memory. Some digital cameras store images in a proprietary format, requiring the manufacturer's supplied software for access, but most compress and save their images in the industry-standard JPEG or FlashPIX formats, readable by almost every graphics package. Both use lossy compression, which causes some loss of image quality. However, many cameras offer several compression settings, allowing the user to trade resolution quality against image capacity, including the option to store images with no compression at all ("CCD raw mode") for the very best quality.
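The trade-off is easy to demonstrate with the Pillow imaging library: saving the same image at different JPEG quality settings yields very different file sizes. The synthetic test image and the quality values are arbitrary examples.

from io import BytesIO

import numpy as np
from PIL import Image   # Pillow imaging library

# Save the same image at several JPEG quality settings and compare the sizes -
# the capacity-versus-quality trade-off that camera compression menus expose.
# The random test image and the quality values are arbitrary examples.
rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)
image = Image.fromarray(pixels)

for quality in (95, 75, 50, 25):
    buffer = BytesIO()
    image.save(buffer, format="JPEG", quality=quality)
    print(quality, len(buffer.getvalue()), "bytes")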


What is a Digital Memory Card?

