Sunday, March 2, 2008

Video camera tube

In older video cameras, before the 1990s, a video camera tube or pickup

tube was used instead of a charge-coupled device (CCD). Several types

were in use from the 1930s to the 1980s. These tubes are a type of

cathode ray tube.


Vidicon tube (2/3 inch in diameter).

Some clarification of terminology is

in order. Any vacuum tube which operates using a focused beam of

electrons is called a cathode ray tube or CRT. However, in the popular

lexicon CRT is usually used to refer to the type of tube used as a

television or computer monitor picture tube. The proper term for these

display tubes is actually kinescope. Kinescopes are simply one of many

types of cathode ray tubes. Others include the types of display tubes

used in oscilloscopes, radar displays, and the camera pickup tubes

described in this article. (In the interest of avoiding further

confusion it will be noted that the word "kinescope" has an alternate

meaning as it has become the popular name for a television film

recording made by focusing a motion picture film camera onto the face

of a kinescope cathode ray tube as was commonly done before the

invention of video tape recording.)

Image dissector

The image dissector was invented by Philo Farnsworth, one of the

pioneers of electronic television, in 1927. It is a type of cathode ray

tube occasionally employed as a camera in industrial television

systems. The image dissector had very poor light sensitivity, and was

useful only where scene illumination exceeded 685 cd/m², but it was

ideal for high light levels such as when engineers wanted to monitor

the bright, hot interior of an industrial furnace. Owing to its lack of

sensitivity, the image dissector was rarely used in TV broadcasting,

except to scan film and other transparencies. It was, however, the

beginning of the electronic TV age.

The image dissector sees the outside world through a glass lens, which

focuses an image through the clear glass wall of the tube onto a

special plate which is coated with a layer of caesium oxide. When light

strikes caesium oxide, the material emits electrons, somewhat like a

mirror that reflects an image made of electrons, rather than light (see

photoelectric effect). These electrons are aimed and accelerated by

electric and magnetic fields onto the dissector's single electron

detector so that only a small portion of the electron image hits the

detector at any given moment. As time passes the electron image is

deflected back and forth and up and down so that the entire image,

portion by portion, can be read by the detector. The output from the

detector is an electric current whose magnitude is a measure of

brightness at a specific point on the image. Electrons that do not hit

the single detector are wasted, rather than stored on the target as in

the image orthicon (described below), which accounts in part for its low

sensitivity (approximately 3000 lux). It has no "storage

characteristic".

The iconoscope

Five years after Kálmán Tihanyi's iconoscope (1926), Vladimir
Zworykin patented the idea, in May 1931 (his earlier applications
had been refused by the U.S. and European patent offices), of
projecting an image on a special

plate which was covered with a chemical photoemissive mosaic consisting

of granules of material, a pattern comparable to the receptors of the

human eye. Emission of photoelectrons from each granule in proportion

to the amount of light resulted in a charge image being formed on the

mosaic. Each granule, together with the conductive plate behind the

mosaic, formed a small capacitor, all of these having a common plate.

An electron beam was then swept across the face of the plate from an

electron gun, discharging the capacitors in succession; the resulting

changes in potential at the metal plate constituted the picture signal.

Unlike the image dissector the Zworykin model was much more sensitive,

to about 75 000 lux. It was also easier to manufacture and produced a

very clear image.

Image Orthicon

The image orthicon tube (often abbreviated as IO) was common until the

1960s. A combination of Farnsworth's image dissector and RCA's orthicon

technologies, it replaced the iconoscope/orthicon, which required a

great deal of light to work adequately.

The image orthicon tube was developed by Dr. Albert Rose, Paul K.

Weimer, and Harold B. Law at RCA. It represented a

considerable advance in the television field, and after further

development work, RCA created original models about 1939–1940.

Recognizing the merit of the tube, the National Defense Research

Council entered into a contract with RCA whereby NDRC bore the expense

of further development. RCA's development of the more sensitive image

orthicon tube was sufficiently advanced at the end of 1943 to allow the

execution of a production contract with the Navy, and the first tubes

under the contract were delivered in January of 1944.[1][2] RCA began

production of image orthicon cameras for civilian use in the second

quarter of 1946.[3]

While the iconoscope and the intermediate orthicon used capacitance

between a multitude of small but discrete light sensitive collectors

and an isolated signal plate for reading video information, the IO

employed direct charge readings off of a continuous electronically

charged collector. The resultant signal was immune to most extraneous

signal "crosstalk" from other parts of the target, and could yield

extremely detailed images. For instance, IO cameras were used for

capturing Apollo/Saturn rockets nearing orbit long after the networks

had phased them out, as only they could provide sufficient detail.

A properly constructed image orthicon could take television pictures by

candlelight owing to the more ordered light-sensitive area and the

presence of an electron multiplier at the base of the tube, which

operated as a high-efficiency amplifier. It also had a logarithmic

light sensitivity curve similar to the human eye, so the picture looked

more natural. Its defect was that it tended to flare if a shiny object

in the studio caught a reflection of a light, generating a dark halo

around the object on the picture. Image orthicons were used extensively

in the early color television cameras, where their increased

sensitivity was essential to overcome their very inefficient optical

system.

An engineer's nickname for the tube was the "immy", which later was

feminized to become the "Emmy".

Summary of IO Operation: An IO consists of three parts: an image store

("target"), a scanner that reads this image (an electron gun), and a

multiplicative amplifier. In the image store, light falls upon a

photosensitive plate, and is converted into an electron image (borrowed

from Farnsworth's image dissector). These electrons ("rain") are then

accelerated towards the target, causing a "splash" of electrons to be

discharged (secondary electrons). Each image electron ejects, on

average, more than one "splash" electron, and these excess electrons

are soaked up by a positively-charged mesh very near and parallel to

the target (the image electrons also pass through this mesh, whose

positive charge also helps to accelerate the image electrons). The

result is an image painted in positive charge, with the brightest

portions having the largest positive charge.

A sharply focused beam of electrons (a cathode ray) is then scanned

over the back side of the target. The electrons are slowed down just

before reaching the target so that they are absorbed without ejecting

more electrons. This adds negative charge to the positive charge until

the region being scanned reaches some threshold negative charge, at

which point the scanning electrons are reflected rather than absorbed.

These reflected electrons return down the cathode ray tube toward an

electron detector (multiplicative amplifier) surrounding the electron

gun. The number of reflected electrons is a measure of the target's

original positive charge, which, in turn, is a measure of brightness.

In analogy with the image dissector, this beam of electrons is scanned

around the target so that the image is read one small portion at a

time.
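
The following toy Python model (arbitrary units, invented numbers) restates the readout step just described: the beam neutralizes up to the stored positive charge at each spot, and whatever is left of the beam returns to the multiplier, so brighter spots send back fewer electrons.

    import numpy as np

    beam = 10.0                                  # electrons delivered per scanned spot
    target = np.array([[0.0, 2.0],
                       [5.0, 9.0]])              # stored positive charge image (brightness)

    absorbed = np.minimum(target, beam)          # beam electrons soaked up by the charge
    reflected = beam - absorbed                  # returned toward the electron multiplier
    print(reflected)                             # brighter spots reflect fewer electrons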

Multiplicative amplification is also performed via the splashing of

electrons: a stack of charged pinwheel-like disks surround the electron

gun. As the returning electron beam hits the first pinwheel, it ejects

electrons exactly like the target. These loose electrons are then drawn

toward the next pinwheel back, where the splashing continues for a

number of steps. Consider a single, highly-energized electron hitting

the first stage of the amplifier, causing 2 electrons to be emitted and

drawn towards the next pinwheel. Each of these might then cause two

each to be emitted. Thus, by the start of the third stage, there would
be four electrons for the original one.
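
The stage-by-stage doubling described above compounds exponentially, which is the whole point of the multiplier. A minimal sketch (the per-stage yield of 2 is the text's illustrative figure; real dynode yields vary):

    def multiplier_gain(yield_per_stage, stages):
        """Electrons at the output of the last stage for one input electron."""
        return yield_per_stage ** stages

    print(multiplier_gain(2, 1))  # 2 electrons entering the second stage
    print(multiplier_gain(2, 2))  # 4 electrons by the start of the third stage
    print(multiplier_gain(2, 5))  # 32 electrons after five stages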

What causes the dark halo? The mysterious "dark halo" around bright

objects in an IO-captured image is based in the very fact that the IO

relies on the splashing caused by highly energized electrons. When a

very bright point of light (and therefore very strong electron stream

emitted by the photosensitive plate) is captured, a great preponderance

of electrons is ejected from the image target. So many are ejected that

the corresponding point on the collection mesh can no longer soak them

up, and thus they fall back to nearby spots on the target much as

water splashed by a thrown rock forms a ring. Since the

resultant splashed electrons do not contain sufficient energy to eject

enough electrons where they land, they will instead neutralize any

positive charge in that region. Since darker images result in less

positive charge on the target, the excess electrons deposited by the

splash will be read as a dark region by the scanning electron beam.

This effect was actually "cultivated" by tube manufacturers to a

certain extent, as a small, carefully-controlled amount of the dark

halo has the effect of "crispening" the viewed image. (That is, giving

the illusion of being more sharply focused than it actually is.) The

later Vidicon tube and its descendants (see below) do not exhibit this

effect, and so could not be used for broadcast purposes until special

"detail correction" circuitry could be developed.

Vidicon

A vidicon tube (sometimes called a hivicon tube) is a video camera tube

in which the target material is made of antimony trisulfide (Sb2S3).

The terms vidicon tube and vidicon camera are often used

indiscriminately to refer to video cameras of any type. The principle

of operation of the vidicon camera is typical of other types of video

camera tubes.


Schematic of vidicon tube.

The vidicon is a storage-type camera tube in

which a charge-density pattern is formed by the imaged scene radiation

on a photoconductive surface which is then scanned by a beam of

low-velocity electrons. The fluctuating voltage coupled out to a video

amplifier can be used to reproduce the scene being imaged. The

electrical charge produced by an image will remain in the face plate

until it is scanned or until the charge dissipates.

Pyroelectric photocathodes can be used to produce a vidicon sensitive

over a broad portion of the infrared spectrum.

Vidicon tubes are notable for a particular type of interference they
suffered from, known as vidicon microphony. Because the sensing surface
is quite thin, loud sounds can physically flex it. The resulting
artifact appears as a series of horizontal bars in footage (mostly
pre-1990) recorded or broadcast where loud noise was present, such as a
studio where a loud rock band was performing, or nearby gunshots or
explosions.

Plumbicon

Plumbicon is a registered trademark of Philips. It was mostly used in broadcast camera applications. These tubes have low output, but a high signal-to-noise ratio. They had excellent resolution compared to Image Orthicons, but lacked the artificially sharp edges of IO tubes, which caused some of the viewing audience to perceive them as softer. CBS Labs invented the first outboard edge enhancement circuits to sharpen the edges of Plumbicon generated images.

Compared to Saticons, Plumbicons had much higher resistance to burn in, and coma and trailing artifacts from bright lights in the shot. Saticons though, usually had slightly higher resolution. After 1980, and the introduction of the diode gun plumbicon tube, the resolution of both types was so high, compared to the maximum limits of the broadcasting standard, that the Saticon's resolution advantage became moot.

While broadcast cameras migrated to solid-state charge-coupled devices (CCDs), Plumbicon tubes remain a staple imaging device in the medical field.

Narragansett Imaging is the only company now making Plumbicons, and it does so from the factories Philips built for that purpose in Rhode Island, USA. While still a part of the Philips empire, the company purchased EEV's (English Electric Valve) lead oxide camera tube business, and gained a monopoly in lead oxide tube production.

The company says, "In comparison to other image tube technologies, Plumbicon tubes offer high resolution, low lag and superior image quality."


Saticon

Saticon is a registered trademark of Hitachi. Its light-sensitive
surface consists of selenium with trace amounts of arsenic and
tellurium added (SeAsTe).

Pasecon

Pasecon is a registered trademark of Heimann. Its surface consists of
cadmium selenide (CdSe).

Trinicon

Trinicon is a registered trademark of Sony. It uses a vertically striped RGB color filter over the faceplate of the imaging tube to segment the scan into corresponding red, green and blue segments. Only one tube was used in the camera, instead of a tube for each color, as was standard for color cameras used in television broadcasting. It was used mostly in low-end consumer cameras and camcorders, though Sony also used it in some moderate-cost professional cameras in the 1980s, such as the DXC-1800 and BVP-1 models.

Diffraction topography

Diffraction topography (short: "topography") is an X-ray imaging

technique based on Bragg diffraction. Diffraction topographic images

("topographs") record the intensity profile of a beam of X-rays (or,

sometimes, neutrons) diffracted by a crystal. A topograph thus

represents a two-dimensional spatial intensity mapping of reflected

X-rays, i.e. the spatial fine structure of a Bragg spot. This

intensity mapping reflects the distribution of scattering power inside

the crystal; topographs therefore reveal the irregularities in a

non-ideal crystal lattice. X-ray diffraction topography is one variant

of X-ray imaging, making use of diffraction contrast rather than

absorption contrast which is usually used in radiography and computed

tomography (CT).

Topography is used for monitoring crystal quality and visualizing

defects in many different crystalline materials. It has proved helpful

e.g. when developing new crystal growth methods, for monitoring growth

and the crystal quality achieved, and for iteratively optimizing

growth conditions. In many cases, topography can be applied without

preparing or otherwise damaging the sample; it is therefore one

variant of non-destructive testing.

History

After the discovery of x-rays by Röntgen in 1895, and of the

principles of X-ray diffraction by Laue and the Bragg family, it still

took several decades for the benefits of diffraction imaging to be

fully recognized, and the first useful experimental techniques to be

developed. First systematic reports on laboratory topography

techniques date from the early 1940s. In the 1950s and 1960s,

topographic investigations played a role in detecting the nature of

defects and improving crystal growth methods for Ge and (later) Si as

materials for semiconductor microelectronics.

For a more detailed account of the historical development of

topography, see J.F. Kelly - "A brief history of X-ray diffraction

topography".

From about the 1970s on, topography profited from the advent of

synchrotron x-ray sources which provided considerably more intense

x-ray beams, allowing shorter exposure times, better contrast, higher
spatial resolution, and the investigation of smaller samples or rapidly
changing phenomena.

Initial applications of topography were mainly in the field of

metallurgy, controlling the growth of better crystals of various

metals. Topography was later extended to semiconductors, and generally

to materials for microelectronics. A related field are investigations

of materials and devices for X-ray optics, such as monochromator

crystals made of silicon, germanium or diamond, which need to be

checked for defects prior to being used. Extensions of topography to

organic crystals are somewhat more recent. Topography is applied today

not only to volume crystals of any kind, including semiconductor

wafers, but also to thin layers, entire electronic devices, as well as

to organic materials such as protein crystals and others.

Basic principle of topography

The basic working principle of diffraction topography is as follows:

An incident, spatially extended beam (mostly of X-rays, or neutrons)

impinges on a sample. The beam may be either monochromatic, i.e.

consist of a single wavelength of X-rays or neutrons, or polychromatic,

i.e. be composed of a mixture of wavelengths ("white beam"

topography). Furthermore, the incident beam may be either parallel,

consisting only of "rays" propagating all along nearly the same

direction, or divergent/convergent, containing a wider spread of
propagation directions.

When the beam hits the crystalline sample, Bragg diffraction occurs,

i.e. the incident wave is reflected by the atoms on certain lattice

planes of the sample, on condition that it hits those planes at the

right Bragg angle. Diffraction from the sample can take place either in

reflection geometry (Bragg case), with the beam entering and leaving

through the same surface, or in transmission geometry (Laue case).

Diffraction gives rise to a diffracted beam, which will leave the

sample and propagate along a direction differing from the incident

direction by the scattering angle 2θ_B, i.e. twice the Bragg angle.
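
As a quick numerical illustration of the Bragg condition n·λ = 2d·sin(θ_B) used here, the following minimal Python sketch computes the Bragg and scattering angles; the Si(111) spacing and Cu K-alpha wavelength are just illustrative values, not anything prescribed by the text.

    import math

    # Minimal helper for the Bragg condition n*lambda = 2*d*sin(theta).
    def bragg_angle_deg(wavelength_angstrom, d_spacing_angstrom, order=1):
        """Bragg angle theta_B in degrees; raises if no reflection is possible."""
        s = order * wavelength_angstrom / (2.0 * d_spacing_angstrom)
        if s > 1.0:
            raise ValueError("no Bragg reflection: wavelength too long for this spacing")
        return math.degrees(math.asin(s))

    # Illustrative values: Cu K-alpha (1.5406 A) on the Si(111) planes (d = 3.1356 A).
    theta_b = bragg_angle_deg(1.5406, 3.1356)
    print(theta_b)       # ~14.2 degrees
    print(2 * theta_b)   # scattering angle 2*theta_B between incident and diffracted beam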

The cross section of the diffracted beam may or may not be identical

to the one of the incident beam. In the case of strongly asymmetric

reflections, the beam size (in the diffraction plane) is considerably

expanded or compressed, with expansion occurring if the incidence

angle is much smaller than the exit angle, and vice-versa.

Independently of this beam expansion, the relation of sample size to

image size is given by the exit angle alone: The apparent lateral size

of sample features parallel to the exit surface is downscaled in the

image by the projection effect of the exit angle.

A homogeneous sample (with a regular crystal lattice) would yield a

homogeneous intensity distribution in the topograph (a "flat" image).

Intensity modulations (topographic contrast) arise from irregularities

in the crystal lattice, originating from various kinds of defects such

as

voids and inclusions in the crystal
phase boundaries (regions of different crystallographic phase,

polytype, ...)
defective areas, non-crystalline (amorphous) areas / inclusions
cracks, surface scratches
stacking faults
dislocations, dislocation bundles
grain boundaries, domain walls
growth striations
point defects or defect clusters
crystal deformation
strain fields
In many cases of defects such as dislocations, topography is not

directly sensitive to the defects themselves (atomic structure of the

dislocation core), but predominantly to the strain field surrounding

the defect region.

Theory of diffraction topography
Theoretical descriptions of contrast formation in X-ray topography are

largely based on the dynamical theory of diffraction. This framework

is helpful in the description of many aspects of topographic image

formation: entrance of an X-ray wavefield into a crystal, propagation

of the wavefield inside the crystal, interaction of wavefield with

crystal defects, altering of wavefield propagation by local lattice

strains, diffraction, multiple scattering, absorption.

The theory is therefore often helpful in the interpretation of

topographic images of crystal defects. The exact nature of a defect

often cannot be deduced directly from the observed image (i.e., a

"backwards calculation" is impossible). Instead, one has to make

assumptions about the structure of the defect, deduce a hypothetical

image from the assumed structure ("forward calculation", based on

theory), and compare with the experimental image. If the match between

both is not good enough, the assumptions have to be varied until

sufficient correspondence is reached. Theoretical calculations, and in

particular numerical simulations by computer based on this theory, are

thus a valuable tool for the interpretation of topographic images.

Contrast mechanisms
The topographic image of a uniform crystal with a perfectly regular

lattice, illuminated by a homogeneous beam, is uniform (no contrast).

Contrast arises when distortions of the lattice (defects, tilted

crystallites, strain) occur; when the crystal is composed of several

different materials or phases; or when the thickness of the crystal

changes across the image domain.

Structure factor contrast
The diffraction power of a crystalline material, and thus the

intensity of the diffracted beam, changes with the type and number of

atoms inside the crystal unit cell. This fact is quantitatively

expressed by the structure factor. Different materials have different

structure factors, and similarly for different phases of the same

material (e.g. for materials crystallizing in several different space

groups). In samples composed of a mixture of materials/phases in

spatially adjacent domains, the geometry of these domains can be

resolved by topography. This holds, for example, for twinned crystals,
ferroelectric domains, and many other cases.

Orientation contrast
When a crystal is composed of crystallites with varying lattice

orientation, topographic contrast arises: In plane-wave topography,

only selected crystallites will be in diffracting position, thus

yielding diffracted intensity only in some parts of the image. Upon

sample rotation, these will disappear, and other crystallites will

appear in the new topograph as strongly diffracting. In white-beam

topography, all misoriented crystallites will be diffracting

simultaneously (each at a different wavelength). However, the exit

angles of the respective diffracted beams will differ, leading to

overlapping regions of enhanced intensity as well as to shadows in the

image, thus again giving rise to contrast.

While in the case of tilted crystallites, domain walls, grain

boundaries etc. orientation contrast occurs on a macroscopic scale, it

can also be generated more locally around defects, e.g. due to curved

lattice planes around a dislocation core.

Extinction contrast
Another type of topographic contrast, extinction contrast, is slightly

more complex. While the two above variants are explicable in simple

terms based on geometrical theory (basically, the Bragg law) or

kinematical theory of X-ray diffraction, extinction contrast can be

understood based on dynamical theory.

Qualitatively, extinction contrast arises e.g. when the thickness of a

sample, compared to the respective extinction length (Bragg case) or

Pendellösung length (Laue case), changes across the image. In this

case, diffracted beams from areas of different thickness, having

suffered different degrees of extinction, are recorded within the same

image, giving rise to contrast. Topographists have systematically

investigated this effect by studying wedge-shaped samples, of linearly

varying thickness, which allows direct recording, in one image, of the

dependence of diffracted intensity on sample thickness as predicted by

dynamical theory.

In addition to mere thickness changes, extinction contrast also arises

when parts of a crystal are diffracting with different strengths, or

when the crystal contains deformed (strained) regions. The governing

quantity for an overall theory of extinction contrast in deformed

crystals is called the effective misorientation; in a common notation,

δθ(r) = −(1 / (|h| sin 2θ_B)) ∂(h · u(r)) / ∂s_h

where u(r) is the displacement vector field, h is the diffraction
vector, and s_0 and s_h are the directions of the incident and
diffracted beam, respectively.

In this way, different kinds of disturbances are "translated" into

equivalent misorientation values, and contrast formation can be

understood analogously to orientation contrast. For instance, a

compressively strained material requires larger Bragg angles for

diffraction at unchanged wavelength. To compensate for this and to

reach diffraction conditions, the sample needs to be rotated,

just as in the case of lattice tilts.
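
To make the preceding point concrete: differentiating Bragg's law at fixed wavelength gives the peak shift δθ = −(δd/d)·tan(θ_B), so a compressive strain (δd/d < 0) indeed shifts the reflection to a larger angle. A minimal sketch, with purely illustrative numbers:

    import math

    # Differentiating n*lambda = 2*d*sin(theta) at fixed wavelength gives
    # delta_theta = -(delta_d / d) * tan(theta_B).
    def bragg_shift_rad(strain, theta_b_rad):
        """Angular shift of the Bragg peak for a relative spacing change `strain`."""
        return -strain * math.tan(theta_b_rad)

    theta_b = math.radians(14.2)               # e.g. Si(111) with Cu K-alpha, as above
    print(bragg_shift_rad(-1e-4, theta_b))     # ~ +2.5e-5 rad for a 10^-4 compression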

Visibility of defects; types of defect images
To discuss the visibility of defects in topographic images according

to theory, consider the examplary case of a single dislocation: It

will give rise to contrast in topography only if the lattice planes

involved in diffraction are distorted in some way by the existence of

the dislocation. This is true in the case of an edge dislocation if

the scattering vector of the Bragg reflection used is parallel to the

Burgers vector of the dislocation, or at least has a component in the

plane perpendicular to the dislocation line, but not if it is parallel

to the dislocation line. In the case of a screw dislocation, the

scattering vector has to have a component along the Burgers vector,

which is now parallel to the dislocation line. As a general rule of
thumb, a dislocation will be invisible in a topograph if the scalar
product of the scattering vector g and the Burgers vector b
is zero, i.e. g · b = 0.
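
This invisibility rule is easy to apply in code. A minimal sketch (the vectors below are illustrative, given in lattice units):

    import numpy as np

    # Rule of thumb from the text: a dislocation is (to first order) invisible
    # when the scattering vector g is perpendicular to the Burgers vector b,
    # i.e. when g . b = 0.
    def dislocation_visible(g, b, tol=1e-12):
        return abs(np.dot(g, b)) > tol

    g = np.array([2.0, 2.0, 0.0])
    print(dislocation_visible(g, np.array([0.5, 0.5, 0.0])))  # True: g.b != 0
    print(dislocation_visible(g, np.array([0.0, 0.0, 1.0])))  # False: invisible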

If a defect is visible, often there occurs not only one, but several

distinct images of it on the topograph. Theory predicts three images

of single defects: The so-called direct image, the kinematical image,

and the intermediary image.

Spatial resolution; limiting effects
The spatial resolution achievable in topographic images can be limited

by one or several of three factors: the resolution (grain or pixel

size) of the detector, the experimental geometry, and intrinsic

diffraction effects.

First, the spatial resolution of an image can obviously not be better

than the grain size (in the case of film) or the pixel size (in the

case of digital detectors) with which it was recorded. This is the

reason why topography requires high-resolution X-ray films or CCD

cameras with the smallest pixel sizes available today. Secondly,

resolution can be additionally blurred by a geometric projection

effect. If one point of the sample is a "hole" in an otherwise opaque

mask, then the X-ray source, of finite lateral size S, is imaged

through the hole onto a finite image domain given by the formula

I = S · d / D

where I is the spread of the image of one sample point in the image
plane, D is the source-to-sample distance, and d is the
sample-to-image distance. The ratio S/D corresponds to the angle (in

radians) under which the source appears from the position of the

sample (the angular source size, equivalent to the incident divergence

at one sample point). The achievable resolution is thus best for small

sources, large sample distances, and small detector distances. This is

why the detector (film) needed to be placed very close to the sample

in the early days of topography; only at synchrotrons, with their small

S and (very) large D, could larger values of d finally be afforded,

introducing much more flexibility into topography experiments.
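
A quick sketch of this geometric blur formula, with illustrative numbers not taken from any particular instrument:

    # Geometric blur from above: I = S * d / D (same length units throughout).
    def geometric_blur(source_size, source_to_sample, sample_to_detector):
        """Image spread of one sample point due to the finite source size."""
        return source_size * sample_to_detector / source_to_sample

    # Laboratory tube: 0.4 mm focus, D = 0.5 m, film at d = 10 cm -> 80 um blur.
    print(geometric_blur(0.4e-3, 0.5, 0.10))
    # Synchrotron: 0.1 mm source, D = 50 m, detector at d = 0.5 m -> 1 um blur.
    print(geometric_blur(0.1e-3, 50.0, 0.5))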

Thirdly, even with perfect detectors and ideal geometric conditions,

the visibility of special contrast features, such as the images of

single dislocations, can be additionally limited by diffraction

effects. A dislocation in a perfect crystal matrix gives rise to

contrast only in those regions where the local orientation of the

crystal lattice differs from average orientation by more than about

the Darwin width of the Bragg reflection used. A quantitative

description is provided by the dynamical theory of X-ray diffraction.

As a result, and somewhat counter-intuitively, the widths of
dislocation images become narrower when the associated rocking curves
are wide. Thus, strong reflections of low diffraction order are

particularly appropriate for topographic imaging. They permit

topographists to obtain narrow, well-resolved images of dislocations,

and to separate single dislocations even when the dislocation density

in a material is rather high. In more unfavourable cases (weak,

high-order reflections, higher photon energies), dislocation images

become broad, diffuse, and overlap for high and medium dislocation

densities. Highly ordered, strongly diffracting materials - like

minerals or semiconductors - are generally unproblematic, whereas e.g.

protein crystals are particularly challenging for topographic imaging.

Apart from the Darwin width of the reflection, the width of single

dislocation images may additionally depend on the Burgers vector of

the dislocation, i.e. both its length and its orientation (relative to

the scattering vector), and, in plane wave topography, on the angular

departure from the exact Bragg angle. The latter dependence follows a

reciprocity law: dislocation images become narrower in inverse
proportion to the angular distance from the Bragg peak. So-called weak-beam

conditions are thus favourable in order to obtain narrow dislocation

images.

Experimental realization - instrumentation
To conduct a topographic experiment, three groups of instruments are

required: an x-ray source, potentially including appropriate x-ray

optics; a sample stage with sample manipulator (diffractometer); and a

two-dimensionally resolving detector (most often X-ray film or

a CCD camera).

X-ray source
The x-ray beam used for topography is generated by an x-ray source,

typically either a laboratory x-ray tube (fixed or rotating) or a

synchrotron source. The latter offers advantages due to its higher

beam intensity, lower divergence, and its continuous wavelength

spectrum. X-ray tubes are still useful, however, due to easier access

and continuous availability, and are often used for initial screening

of samples and/or training of new staff.

For white beam topography, not much more is required: most often, a

set of slits to precisely define the beam shape and a (well polished)

vacuum exit window will suffice. For those topography techniques

requiring a monochromatic x-ray beam, an additional crystal

monochromator is mandatory. A typical configuration at synchrotron

sources is a combination of two silicon crystals, both with surfaces

oriented parallel to [111]-lattice planes, in geometrically opposite

orientation. This guarantees relatively high intensity, good

wavelength selectivity (about 1 part in 10000) and the possibility to

change the target wavelength without having to change the beam

position ("fixed exit").

Sample stage
To place the sample under investigation into the x-ray beam, a sample

holder is required. While in white-beam techniques a simple fixed

holder is sometimes sufficient, experiments with monochromatic

techniques typically require one or more degrees of freedom of

rotational motion. Samples are therefore placed on a diffractometer,

allowing the sample to be oriented about one, two or three axes. If the

sample needs to be displaced, e.g. in order to scan its surface

through the beam in several steps, additional translational degrees of

freedom are required.

Detector

After being scattered by the sample, the profile of the diffracted

beam needs to be detected by a two-dimensionally resolving X-ray

detector. The classical "detector" is X-ray sensitive film, with

nuclear plates as a traditional alternative. The first step beyond

these "offline" detectors were the so-called image plates, although

limited in readout speed and spatial resolution. Since about the

mid-1990s, CCD cameras have emerged as a practical alternative,

offering many advantages such as fast online readout and the

possibility to record entire image series in place. X-ray sensitive

CCD cameras, especially those with spatial resolution in the

micrometer range, are now well established as electronic detectors for

topography. A promising further option for the future may be pixel

detectors, although their limited spatial resolution may restrict

their usefulness for topography.

General criteria for judging the practical usefulness of detectors for

topography applications include spatial resolution, sensitivity,

dynamic range ("color depth", in black-white mode), readout speed,

weight (important for mounting on diffractometer arms), and price.

Systematic overview of techniques and imaging conditions
The manifold topographic techniques can be categorized according to

several criteria. One of them is the distinction between

restricted-beam techniques on the one hand (such as section topography

or pinhole topography) and extended-beam techniques on the other hand,

which use the full width and intensity of the incoming beam. Another,

independent distinction is between integrated-wave topography, making

use of the full spectrum of incoming X-ray wavelengths and

divergences, and plane-wave (monochromatic) topography, more

selective in both wavelengths and divergence. Integrated-wave

topography can be realized as either single-crystal or double-crystal

topography. Further distinctions include the one between topography in

reflection geometry (Bragg-case) and in transmission geometry (Laue

case).

Experimental techniques I - Some classical topographic techniques
The following is an exemplary list of some of the most important

experimental techniques for topography:

White-beam
White-beam topography uses the full bandwidth of X-ray wavelengths in

the incoming beam, without any wavelength filtering (no

monochromator). The technique is particularly useful in combination

with synchrotron radiation sources, due to their wide and continuous

wavelength spectrum. In contrast to the monochromatic case, in which

accurate sample adjustment is often necessary in order to reach

diffraction conditions, the Bragg equation is always and automatically

fulfilled in the case of a white X-ray beam: Whatever the angle at

which the beam hits a specific lattice plane, there is always one

wavelength in the incident spectrum for which the Bragg condition is
fulfilled at this precise angle (provided the spectrum
is wide enough). White-beam topography is therefore a very simple and

fast technique. Disadvantages include the high X-ray dose, possibly

leading to radiation damage to the sample, and the necessity to

carefully shield the experiment.
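
In other words, the lattice plane itself selects the matching wavelength from the continuous spectrum. A minimal sketch (using the same illustrative Si(111) d-spacing as earlier):

    import math

    # In white-beam topography the incidence angle is fixed by geometry, and a
    # lattice plane "picks" the wavelength satisfying lambda = 2*d*sin(theta).
    def selected_wavelength(d_spacing_angstrom, theta_deg):
        return 2.0 * d_spacing_angstrom * math.sin(math.radians(theta_deg))

    print(selected_wavelength(3.1356, 10.0))  # plane hit at 10 deg -> ~1.09 angstrom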

White-beam topography produces a pattern of several diffraction spots,

each spot being related to one specific lattice plane in the crystal.

This pattern, typically recorded on X-ray film, corresponds to a Laue

pattern and shows the symmetry of the crystal lattice. The fine

structure of each single spot (topograph) is related to defects and

distortions in the sample. The distance between spots, and the details

of contrast within a single spot, depend on the distance between

sample and film; this distance is therefore an important degree of

freedom for white-beam topography experiments.

Deformation of the crystal will cause variation in the size of the

diffraction spot. For a cylindrically bent crystal the Bragg planes in

the crystal lattice will lie on Archimedean spirals (with the

exception of those orientated tangentially and radially to the

curvature of the bend, which are respectively cylindrical and planar),

and the degree of curvature can be determined in a predictable way

from the length of the spots and the geometry of the set-up.[1]

White-beam topographs are useful for fast and comprehensive

visualization of crystal defects and distortions. They are, however,

rather difficult to analyze in any quantitative way, and even a

qualitative interpretation often requires considerable experience and

time.

Plane-wave topography
Plane-wave topography is in some sense the opposite of white-beam

topography, making use of a monochromatic (single-wavelength) and
parallel incident beam. In order to achieve diffraction conditions,

the sample under study must be precisely aligned. The contrast

observed strongly depends on the exact position of the angular working

point on the rocking curve of the sample, i.e. on the angular distance

between the actual sample rotation position and the theoretical

position of the Bragg peak. A sample rotation stage is therefore an

essential instrumental precondition for controlling and varying the

contrast conditions.

Section topography

Enlarged synchrotron X-ray transmission section topograph of gallium

nitride (11.0 diffraction) on top of sapphire (0-1.0 diffraction).

X-ray section beam width was 15 micrometers. Diffraction vector g

projection is shown.

While the above techniques use a spatially

extended, wide incident beam, section topography is based on a narrow

beam on the order of some 10 micrometers (in one or, in the case of

pinhole topography with a pencil beam, in both lateral dimensions).

Section topographs therefore investigate only a restricted volume of

the sample. On its path through the crystal, the beam is diffracted at

different depths, each one contributing to image formation on a

different location on the detector (film). Section topography can

therefore be used for depth-resolved defect analysis.

In section topography, even perfect crystals display fringes. The

technique is very sensitive to crystalline defects and strain, as

these distort the fringe pattern in the topograph. Quantitative

analysis can be performed with the help of image simulation by

computer algorithms, usually based on the Takagi-Taupin equations.

An enlarged synchrotron X-ray transmission section topograph on the

right shows a diffraction image of the section of a sample having a

gallium nitride (GaN) layer grown by metal-organic vapour phase

epitaxy on sapphire wafer. Both the epitaxial GaN layer and the

sapphire substrate show numerous defects. The GaN layer actually

consists of small-angle grains, about 20 micrometers wide, connected to

each other. Strain in the epitaxial layer and substrate is visible as

elongated streaks parallel to the diffraction vector direction. The

defects on the underside of the sapphire wafer section image are

surface defects on the unpolished backside of the sapphire wafer.

Between the sapphire and GaN the defects are interfacial defects.

Projection topography
The setup for projection topography (also called "traverse

topography") is essentially identical to section topography, the

difference being that both sample and film are now scanned laterally

(synchronously) with respect to the narrow incident beam. A projection

topograph therefore corresponds to the superposition of many adjacent

section topographs, able to investigate not just a restricted portion,

but the entire volume of a crystal.

The technique is rather simple and has been in routine use on "Lang

cameras" in many research laboratories.

Experimental techniques II - Advanced topographic techniques

Topography at synchrotron sources
The advent of synchrotron X-ray sources has been beneficial to X-ray

topography techniques. Several of the properties of synchrotron
radiation are also advantageous for topography applications: the high
collimation (more precisely, the small angular source size) makes it
possible to reach higher geometrical resolution in topographs, even at larger

sample-to-detector distances. The continuous wavelength spectrum

facilitates white-beam topography. The high beam intensities available

at synchrotrons make it possible to investigate small sample volumes,

to work at weaker reflections or further off Bragg-conditions (weak

beam conditions), and to achieve shorter exposure times. Finally, the

discrete time structure of synchrotron radiation permits topographists

to use stroboscopic methods to efficiently visualize time-dependent,

periodically recurrent structures (such as acoustic waves on crystal

surfaces).

Neutron topography
Diffraction topography with neutron radiation has been in use for

several decades, mainly at research reactors with high neutron beam

intensities. Neutron topography can make use of contrast mechanisms

that are partially different from the X-ray case, and thus serve e.g.

to visualize magnetic structures. However, due to the comparatively

low neutron intensities, neutron topography requires long exposure

times. Its use is therefore rather limited in practice.

Topography applied to organic crystals
Topography is "classically" applied to inorganic crystals, such a metals and semiconductors. However, it is nowadays applied more and more often also to organic crystals, most notably proteins. Topographic investigations can help to understand and optimize crystal growth processes also for proteins. Numerous studies have been initiated in the last 5-10 years, using both white-beam and plane-wave topography.

Although considerable progress has been achieved, topography on protein crystals remains a difficult discipline: Due to large unit cells, small structure factors and high disorder, diffracted intensities are weak. Topographic imaging therefore requires long exposure times, which may lead to radiation damage of the crystals, generating in the first place the defects which are then imaged. In addition, the low structure factors lead to small Darwin widths and thus to broad dislocation images, i.e. rather low spatial resolution. Nevertheless, in some cases, protein crystals were reported to be perfect enough to achieve images of single dislocations.

Experimental techniques III - Special techniques and recent developments

Reticulography
A relatively new topography-related technique (first published in 1996) is so-called reticulography. Based on white-beam topography, the new aspect consists in placing a fine-scaled metallic grid ("reticule") between sample and detector. The metallic grid lines are highly absorbing, producing dark lines in the recorded image. While for a flat, homogeneous sample the image of the grid is rectilinear, just as the grid itself, strongly deformed grid images may occur in the case of a tilted or strained sample. The deformation results from Bragg angle changes (and thus different directions of propagation of the diffracted beams) due to lattice parameter differences (or tilted crystallites) in the sample. The grid serves to split the diffracted beam into an array of microbeams, and to backtrace the propagation of each individual microbeam onto the sample surface. By recording reticulographic images at several sample-to-detector distances, and appropriate data processing, local distributions of misorientation across the sample surface can be derived.

A. R. Lang and A. P. W. Makepeace: Reticulography: a simple and sensitive technique for mapping misorientations in single crystals. Journal of Synchrotron Radiation (1996) 3, 313-315.
Lang, A. R. and Makepeace, A. P. W.: Synchrotron X-ray reticulographic measurement of lattice deformations associated with energetic ion implantation in diamond. Journal of Applied Crystallography (1999) 32, 1119-1126.

Digital topography
The use of electronic detectors such as X-ray CCD cameras, replacing traditional X-ray film, facilitates topography in many ways. CCDs achieve online readout in (almost) real time, sparing experimentalists the need to develop films in a dark room. Drawbacks with respect to film are the limited dynamic range and, above all, the moderate spatial resolution of commercial CCD cameras, making the development of dedicated CCD cameras necessary for high-resolution imaging. A further, decisive advantage of digital topography is the possibility to record series of images without changing detector position, thanks to online readout. This makes it possible, without complicated image registration procedures, to observe time-dependent phenomena, to perform kinetic studies, to investigate processes of device degradation and radiation damage, and to realize time-resolved (stroboscopic) studies of the kind described below.

Time-resolved (stroboscopic) topography; Imaging of surface acoustic waves
To image time-dependent, periodically fluctuating phenomena, topography can be combined with stroboscopic exposure techniques. In this way, one selected phase of a sinusoidally varying movement is selectively imaged as a "snapshot". First applications were in the field of surface acoustic waves on semiconductor surfaces.

E. Zolotoyabko, D. Shilo, W. Sauer, E. Pernot, and J. Baruchel: Visualization of 10 μm surface acoustic waves by stroboscopic X-ray topography. Appl. Phys. Lett. (1998) 73(16), 2278-2280.
W. Sauer, M. Streibl, T. Metzger, A. Haubrich, S. Manus, W. A., J. Peisl, J. Mazuelas, J. Härtwig, and J. Baruchel: X-ray imaging and diffraction from surface phonons on GaAs. Appl. Phys. Lett. (1999) 75(12), 1709-1711.

Topo-tomography; 3D dislocation distributions
By combining topographic image formation with tomographic image reconstruction, distributions of defects can be resolved in three dimensions. Unlike "classical" computed tomography (CT), image contrast is not based on differences in absorption (absorption contrast), but on the usual contrast mechanisms of topography (diffraction contrast). In this way, three-dimensional distributions of dislocations in crystals have been imaged.

Sequential topography / Rocking Curve Imaging
Plane-wave topography can be made to extract an additional wealth of information from a sample by recording not just one image, but an entire sequence of topographs all along the sample's rocking curve. By following the diffracted intensity in one pixel across the entire sequence of images, local rocking curves from very small areas of sample surface can be reconstructed. Although the required post-processing and numerical analysis is sometimes moderately demanding, the effort is often compensated by very comprehensive information on the sample's local properties. Quantities that become quantitatively measurable in this way include local scattering power, local lattice tilts (crystallite misorientation), and local lattice quality and perfection. Spatial resolution is, in many cases, essentially given by the detector pixel size.
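
A minimal sketch of the per-pixel analysis described above, assuming the topograph sequence is available as a 3-D array (angle × row × column); the synthetic data and the centroid/spread estimators are illustrative choices, not a specific instrument's processing chain:

    import numpy as np

    # Given a stack of topographs recorded at successive rocking angles,
    # reconstruct local rocking-curve quantities for every detector pixel.
    def rocking_curve_maps(stack, angles):
        integrated = stack.sum(axis=0)                          # local scattering power
        weights = stack / np.maximum(integrated, 1e-12)         # per-pixel normalization
        centroid = np.tensordot(angles, weights, axes=(0, 0))   # local peak position (tilt)
        second = np.tensordot(angles**2, weights, axes=(0, 0))
        spread = np.sqrt(np.maximum(second - centroid**2, 0.0)) # local curve width (quality)
        return integrated, centroid, spread

    angles = np.linspace(-0.01, 0.01, 21)           # degrees around the Bragg peak
    stack = np.random.rand(21, 64, 64)              # stand-in for 21 measured topographs
    power, tilt, width = rocking_curve_maps(stack, angles)
    print(power.shape, tilt.shape, width.shape)     # each is a 64 x 64 map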

The technique of sequential topography, in combination with appropriate data analysis methods also called rocking curve imaging, constitutes a method of microdiffraction imaging, i.e. a combination of X-ray imaging with X-ray diffractometry.


D. Lübbert, T. Baumbach, J. Härtwig, E. Boller, and E. Pernot: μm-resolved high resolution X-ray diffraction imaging for semiconductor quality control. Nucl. Instr. Meth. B (2000) 160(4), 521-527.
J. Hoszowska, A. Freund, E. Boller, J. Sellschop, G. Level, J. Härtwig, R. Burns, M. Rebak, and J. Baruchel: Characterization of synthetic diamond crystals by spatially resolved rocking curve measurements. J. Phys. D: Appl. Phys. (2001) 34, 47-51.
P. Mikulík, D. Lübbert, D. Korytár, P. Pernot, and T. Baumbach: Synchrotron area diffractometry as a tool for spatial high-resolution three-dimensional lattice misorientation mapping. J. Phys. D: Appl. Phys. (2003) 36(10), A74-A78.
Jeffrey J. Lovelace, Cameron R. Murphy, Reinhard Pahl, Keith Brister, and Gloria E. O. Borgstahl: Tracking reflections through cryogenic cooling with topography. J. Appl. Cryst. (2006) 39, 425-432.

MAXIM
The "MAXIM" (MAterials X-ray IMaging) method is another method combining diffraction analysis with spatial resolution. It can be viewed as serial topography with additional angular resolution in the exit beam. In contrast to the Rocking Curve Imaging method, it is more appropriate for more highly disturbed (polycrystalline) materials with lower crystalline perfection. The difference on the instrumental side is that MAXIM uses an array of slits / small channels (a so-called "multi-channel plate" (MCP), the two-dimensional equivalent of a Soller slit system) as an additional X-ray optical element between sample and CCD detector. These channels transmit intensity only in specific, parallel directions, and thus guarantee a one-to-one-relation between detector pixels and points on the sample surface, which would otherwise not be given in the case of materials with high strain and/or a strong mosaicity. The spatial resolution of the method is limited by a combination of detector pixel size and channel plate periodicity, which in the ideal case are identical. The angular resolution is mostly given by the aspect ratio (length over width) of the MCP channels.

T. Wroblewski, S. Geier et al.: X-ray imaging of polycrystalline materials. Rev. Sci. Instr. (1995) 66, 3560–3562.
T. Wroblewski, O. Clauß et al.: A new diffractometer for materials science and imaging at HASYLAB beamline G3. Nucl. Inst. Meth. A (1999) 428, 570–582.
A. Pyzalla, L. Wang, E. Wild, and T. Wroblewski: Changes in microstructure, texture and residual stresses on the surface of a rail resulting from friction and wear. Wear (2001) 251, 901–907.

Wednesday, November 28, 2007





DIGITAL CAMERA

A digital camera is a camera that takes video or still photographs, or both, digitally by recording images on a light-sensitive sensor.

Many compact digital still cameras can record sound and moving video as well as still photographs. In the Western market, digital cameras outsell their 35 mm film counterparts.[1]

Digital cameras can include features that are not found in film cameras, such as displaying an image on the camera's screen immediately after it is recorded, the capacity to take thousands of images on a single small memory device, the ability to record video with sound, the ability to edit images, and deletion of images allowing re-use of the storage they occupied. Digital cameras are incorporated into many devices ranging from PDAs and mobile phones (called camera phones) to vehicles. The Hubble Space Telescope and other astronomical devices are essentially specialised digital cameras.

Classification

Digital cameras can be classified into several categories:

Video cameras

Video cameras are classified as devices whose main purpose is to record moving images.

Professional video cameras such as those used in television and movie production. These typically have multiple image sensors (one per color) to enhance resolution and color gamut. Professional video cameras usually do not have a built-in VCR or microphone.
Camcorders used by amateurs. They generally include a microphone to record sound, and feature a small liquid crystal display to watch the video during taping and playback.
Webcams are digital cameras attached to computers, used for video conferencing or other purposes. Webcams can capture full-motion video as well, and some models include microphones or zoom ability.
In addition, many live-preview digital cameras have a "movie" mode in which images are continuously acquired at a frame rate sufficient for video.

Live-preview digital cameras

The term digital still camera (DSC) usually implies a live-preview digital camera, which uses an electronic screen, usually a rear-mounted liquid crystal display, as the principal means of framing and previewing before taking the photograph, and for viewing stored photographs. All use either a charge-coupled device (CCD) or a CMOS image sensor to sense the light intensities across the focal plane.

Many live-preview cameras have a movie mode, and many camcorders can take still photographs. However, still cameras take better still photographs than camcorders, and camcorders take better video; there is thus still a need for distinct still and motion picture cameras.

Images may be transferred to a computer, printer or other device in a number of ways: the USB mass storage device class makes the camera appear to the computer as if it were a disk drive; the Picture Transfer Protocol (PTP) and its derivatives may be used; Firewire is sometimes supported; and the storage device may simply be removed from the camera and inserted into another device.

Live-preview cameras may be compact or subcompact, or the larger and more sophisticated bridge cameras.

Compact digital cameras

Compact cameras are designed to be small and portable; the smallest are described as subcompacts. Compact cameras are usually designed to be easy to use, sacrificing advanced features and picture quality for compactness and simplicity; images can only be stored using lossy JPEG compression. Most have a built-in flash usually of low power, sufficient for nearby subjects. They may have limited motion picture capability. Compacts often have macro capability, but if they have zoom capability the range is usually less than for bridge and DSLR cameras. They have a greater depth of field, allowing objects within a large range of distances from the camera to be in sharp focus. They are particularly suitable for casual and "snapshot" use.

Bridge cameras

Bridge or SLR-like cameras are higher-end live-preview cameras that physically resemble DSLRs and share with them some advanced features, but share with compacts the live-preview design and small sensor sizes.


Fujifilm FinePix S9000.

Bridge cameras often have superzoom lenses which provide a very wide zoom range, typically 12:1, which is attained at the cost of some distortions, including barrel and pincushion distortion, to a degree which varies with lens quality. These cameras are sometimes marketed as and confused with digital SLR cameras since the appearance is similar. Bridge cameras lack the mirror and reflex system of DSLRs, have so far been fitted with fixed (non-interchangeable) lenses (although in some cases accessory wide-angle or telephoto converters can be attached to the lens), can usually take movies with sound, and the scene is composed by viewing either the liquid crystal display or the electronic viewfinder (EVF). They are usually slower to operate than a true digital SLR, but they are capable of very good image quality while being more compact and lighter than DSLRs. The high-end models of this type have comparable resolutions to low and mid-range DSLRs. Many of these cameras can store images in lossless RAW format as an option to lossy JPEG compression. The majority have a built-in flash, often a unit which flips up over the lens. The guide number tends to be between 11 and 15.

Digital single lens reflex cameras

Digital single-lens reflex cameras (DSLRs) are digital cameras based on film single-lens reflex cameras (SLRs); both types are characterized by the existence of a mirror and reflex system. See the main article on DSLRs for a detailed treatment of this category.

Digital rangefinders


A rangefinder is a user-operated optical mechanism to measure subject distance once widely used on film cameras. Most digital cameras measure subject distance automatically using acoustic or electronic techniques, but it is not customary to say that they have a rangefinder. The term rangefinder alone is sometimes used to mean a rangefinder camera, that is, a film camera equipped with a rangefinder, as distinct from an SLR or a simple camera with no way to measure distance.

Professional modular digital camera systems

This category includes very high end professional equipment that can be assembled from modular components (winders, grips, lenses, etc.) to suit particular purposes. Common makes include Hasselblad and Mamiya. They were developed for medium or large format film sizes, as these captured greater detail and could be enlarged more than 35 mm.

Typically these cameras are used in studios for commercial production; being bulky and awkward to carry they are rarely used in action or nature photography. They can often be converted into either film or digital use by changing out the back part of the unit, hence the use of terms such as a "digital back" or "film back." These cameras are very expensive (up to $40,000) and are typically not seen in the hands of consumers.

Line-scan camera systems

A line-scan camera is a camera device containing a line-scan image sensor chip and a focusing mechanism. These cameras are almost solely used in industrial settings to capture an image of a constant stream of moving material. Unlike video cameras, line-scan cameras use a single array of pixel sensors instead of a matrix of them. Data comes from the line-scan camera at a regular line rate: the camera scans a line, waits, and repeats. The data from the line-scan camera is commonly processed by a computer to collect the one-dimensional line data and create a two-dimensional image. The collected two-dimensional image data is then processed by image-processing methods for industrial purposes.
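
A minimal sketch of this line-to-image assembly, assuming each camera trigger delivers one 1-D line of pixels (the read_line function below is a stand-in for a real camera driver, not any particular vendor's API):

    import numpy as np

    def read_line(width):
        # Stand-in for a real driver call: one 1-D line of 8-bit pixel data.
        return np.random.randint(0, 256, size=width, dtype=np.uint8)

    def acquire_image(n_lines, width):
        # Stack successive lines while the material moves past the sensor;
        # the transport direction supplies the second image dimension.
        return np.stack([read_line(width) for _ in range(n_lines)], axis=0)

    image = acquire_image(n_lines=1000, width=2048)
    print(image.shape)  # (1000, 2048): transport direction x sensor line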

Line-scan technology can capture data extremely fast and at very high image resolution; the collected image data can exceed 100 MB in a fraction of a second. Integrated systems built around line-scan cameras are therefore usually designed to streamline the camera's output to meet the system's objective, using affordable computer technology.

Line-scan cameras intended for the parcel-handling industry can integrate adaptive focusing mechanisms to scan all six sides of a rectangular parcel in focus, regardless of angle and size. The resulting 2-D images can contain, among other things, 1D and 2D barcodes, address information, and any pattern that can be processed via image-processing methods. Since the images are 2-D, they are also human-readable and can be viewed on a computer screen. Advanced integrated systems include video coding and optical character recognition (OCR).

Conversion of film cameras to digital

When digital cameras became common, many photographers asked whether their film cameras could be converted to digital. The answer was yes and no. For the majority of 35 mm film cameras the answer is no: the reworking and cost would be too great, especially as lenses have been evolving along with cameras. For the most part, a conversion to digital, to give enough space for the electronics and allow a liquid crystal display for previewing, would require removing the back of the camera and replacing it with a custom-built digital unit.

Many early professional SLR cameras, such as the NC2000 and the Kodak DCS series, were developed from 35 mm film cameras. The technology of the time, however, meant that rather than being a digital "back," the body was mounted on a large and blocky digital unit, often bigger than the camera portion itself. These were factory-built cameras, not aftermarket conversions.

A notable exception was a device called the EFS-1, developed by Silicon Film from roughly 1998 to 2001. It was intended to be inserted into a film camera in place of film, giving the camera a 1.3 MP resolution and a capacity of 24 shots. Units were demonstrated, and in 2002 the company was developing the EFS-10, a 10 MP device closer to a true digital back.

A few 35 mm cameras have had digital backs made by their manufacturer, Leica being a notable example. Medium- and large-format cameras (those using film stock larger than 35 mm) have users willing and able to pay the price a low-volume-production digital back commands, typically over $10,000. These cameras also tend to be highly modular, with handgrips, film backs, winders, and lenses available separately to fit various needs.

The very large sensors these backs use lead to enormous image sizes. The largest as of early 2006 was Phase One's 39 MP P45 back, creating a single TIFF image of up to 224.6 MB. Medium-format digital backs are geared more toward studio and portrait photography than their smaller DSLR counterparts; the maximum ISO speed in particular tends to be 400, versus 6400 for some DSLR cameras.

HISTORY

Early development

[image: Steven Sasson, an Eastman Kodak engineer, with his prototype digital camera]

The concept of digitizing images on scanners, and the concept of digitizing video signals, predate the concept of making still pictures by digitizing signals from an array of discrete sensor elements. Eugene F. Lally of the Jet Propulsion Laboratory published the first description of how to produce still photos in a digital domain using a mosaic photosensor.[2] The purpose was to provide onboard navigation information to astronauts during missions to planets. The mosaic array periodically recorded still photos of star and planet locations during transit, and when approaching a planet provided additional stadiametric information for orbiting and landing guidance. The concept included camera design elements foreshadowing the first digital camera.

Texas Instruments designed a filmless analog camera in 1972, but it is not known whether it was ever built. The first recorded attempt at building a digital camera was by Steven Sasson, an engineer at Eastman Kodak.[3] It used the then-new solid-state CCD image sensor chips developed by Fairchild Semiconductor in 1973.[4] The camera weighed 8 pounds (3.6 kg), recorded black-and-white images to a cassette tape, had a resolution of 0.01 megapixel (10,000 pixels), and took 23 seconds to capture its first image in December 1975. The prototype camera was a technical exercise, not intended for production.


Analog electronic cameras

Handheld electronic cameras, in the sense of devices meant to be carried and used like handheld film cameras, appeared in 1981 with the demonstration of the Sony Mavica (Magnetic Video Camera); it is not to be confused with the later Sony cameras that also bore the Mavica name. This was an analog camera based on television technology that recorded to a 2 × 2 inch "video floppy." In essence it was a video movie camera that recorded single frames: 50 per disk in field mode and 25 per disk in frame mode. The image quality was considered equal to that of then-current televisions.

Analog cameras do not appear to have reached the market until 1986 with the Canon RC-701. Canon had demonstrated this model at the 1984 Olympics, printing the images in newspapers. Several factors held back the widespread adoption of analog cameras: the cost (upwards of $20,000), poor image quality compared to film, and the lack of quality, affordable printers. Capturing and printing an image originally required access to equipment such as a frame grabber, which was beyond the reach of the average consumer. The "video floppy" disks later had several reader devices available for viewing on a screen, but were never standardized as a computer drive.

The early adopters tended to be in the news media, where the cost was offset by the utility and by the ability to transmit images over telephone lines. The poor image quality mattered little given the low resolution of newspaper graphics. This capability to transmit images without a satellite link was useful during the Tiananmen Square protests of 1989 and the first Gulf War in 1991.

The first analog camera marketed to consumers may have been the Canon RC-250 Xapshot in 1988. A notable analog camera produced the same year was the Nikon QV-1000C, designed as a press camera and not offered for sale to general users; only a few hundred units were sold. It recorded images in greyscale, and the quality in newspaper print was equal to film cameras. In appearance it closely resembled a modern digital single-lens reflex camera. Images were stored on video floppy disks.


The arrival of true digital cameras

The first true digital camera that recorded images as a computerized file was likely the Fuji DS-1P of 1988, which recorded to a 16 MB internal memory card that used a battery to keep the data in memory. This camera was never marketed in the United States, and has not been confirmed to have shipped even in Japan.

The first commercially available digital camera was the 1990 Dycam Model 1; it was also sold as the Logitech Fotoman. It used a CCD image sensor, stored pictures digitally, and connected directly to a PC for download.[5][6][7]

In 1991, Kodak brought to market the Kodak DCS-100, the beginning of a long line of professional SLR cameras by Kodak that were based in part on film bodies, often Nikons. It used a 1.3 megapixel sensor and was priced at $13,000.

The move to digital formats was helped by the formation of the first JPEG and MPEG standards in 1988, which allowed image and video files to be compressed for storage. The first consumer camera with a liquid crystal display on the back was the Casio QV-10 in 1995, and the first camera to use CompactFlash was the Kodak DC-25 in 1996.

The marketplace for consumer digital cameras originally consisted of low-resolution cameras (analog or digital) built for utility. In 1997 the first megapixel cameras for consumers were marketed. The first camera to offer the ability to record video clips may have been the Ricoh RDC-1 in 1995.

1999 saw the introduction of the Nikon D1, a 2.74 megapixel camera that was the first digital SLR developed entirely by a major manufacturer, and at a cost of under $6,000 at introduction was affordable for professional photographers and high-end consumers. The camera also accepted Nikon F-mount lenses, meaning film photographers could use many of the lenses they already owned.

Also in 1999, Minolta introduced the 2.7 megapixel RD-3000 D-SLR, which found many professional adherents. One limitation of the system was the need to use Vectis lenses, which were designed for APS-size film; the camera was sold with five lenses of various focal lengths, including zooms. Minolta did not produce another D-SLR until September 2004, when it introduced the 7D (sold as Alpha in Japan, Maxxum in North America, and Dynax in the rest of the world), this time using the Minolta A-mount system from its 35 mm line of cameras.

2003 saw the introduction of the Canon 300D, also known as the Digital Rebel, a 6 megapixel camera and the first DSLR priced under $1,000, and marketed to consumers.

Image resolution

The resolution of a digital camera is often limited by the camera sensor (typically a CCD or CMOS chip) that turns light into discrete signals, replacing the job of film in traditional photography. The sensor is made up of millions of "buckets" that essentially count the number of photons striking the sensor: the brighter the image at a given point, the larger the value read for that pixel. Depending on the physical structure of the sensor, a color filter array may be used, which requires a demosaicing/interpolation algorithm. The number of resulting pixels in the image determines its "pixel count." For example, a 640 × 480 image contains 307,200 pixels, roughly 0.3 megapixels, while a 3872 × 2592 image contains 10,036,224 pixels, approximately 10 megapixels.
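
The arithmetic is simple enough to check directly; a minimal Python example:

    def pixel_count(width, height):
        # Pixel count is just width times height of the sensor output.
        return width * height

    print(pixel_count(640, 480))                 # 307200   (~0.3 megapixels)
    print(pixel_count(3872, 2592))               # 10036224 (~10 megapixels)
    print(pixel_count(3872, 2592) / 1_000_000)   # 10.036224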

The pixel count alone is commonly presumed to indicate the resolution of a camera, but this is a misconception. Several other factors affect a sensor's resolution, including sensor size, lens quality, and the organization of the pixels (for example, a monochrome camera without a Bayer filter mosaic has a higher resolution than a typical color camera). Many digital compact cameras are criticized for excessive pixel counts: the sensors can be so small that their resolution exceeds what the lens can possibly deliver. Some users have also seen more fine-grained noise in images from higher-pixel-count sensors (because each pixel collects less light as pixel size decreases) than from earlier low-resolution cameras.


[image: Australian recommended retail price of Kodak digital cameras]

As the technology has improved, costs have decreased dramatically. Measured as "pixels per dollar," a basic gauge of value for a digital camera, the number of pixels each dollar buys in a new camera has increased continuously and steadily, consistent with the principles of Moore's Law. This predictability of camera prices was first presented in 1998 at the Australian PMA DIMA conference by Barry Hendy and has since been referred to as "Hendy's Law".[8]
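
The metric itself is just pixel count divided by price. A small Python sketch, using invented prices and pixel counts purely for illustration:

    # Hypothetical (invented) cameras illustrating the pixels-per-dollar metric.
    cameras = [
        ("camera A, year 1", 1_000_000, 800.0),   # (name, pixels, price in dollars)
        ("camera B, year 3", 3_000_000, 600.0),
        ("camera C, year 5", 6_000_000, 400.0),
    ]
    for name, pixels, price in cameras:
        print(f"{name}: {pixels / price:,.0f} pixels per dollar")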

Since only a very few aspect ratios are in common use (4:3 and 3:2), the number of sensor sizes is also limited. Furthermore, sensor manufacturers do not make every possible sensor size but step through sizes incrementally. For example, in 2007 the three largest sensors (in terms of pixel count) used by Canon were the 21.1, 16.6, and 12.8 megapixel CMOS sensors.

Methods of image capture

Since the first digital backs were introduced, there have been three main methods of capturing the image, each based on the hardware configuration of the sensor and color filters.

The first method is often called single-shot, in reference to the number of times the camera's sensor is exposed to the light passing through the camera lens. Single-shot capture systems use either one CCD with a Bayer filter mosaic on it, or three separate image sensors (one each for the primary additive colors red, green, and blue) exposed to the same image via a beam splitter.

The second method is referred to as multi-shot because the sensor is exposed to the image in a sequence of three or more openings of the lens aperture. There are several ways to apply the multi-shot technique. Originally the most common was to use a single image sensor with three filters (again red, green, and blue) passed in front of the sensor in sequence to obtain the additive color information; a sketch of this idea appears below. Another multi-shot method used a single CCD with a Bayer filter but physically moved the sensor chip on the focal plane of the lens to "stitch" together a higher-resolution image than the CCD would otherwise allow. A third variation combined the two methods without a Bayer filter on the chip.
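
A minimal sketch of the three-filter idea, assuming a hypothetical expose_through_filter routine standing in for the hardware:

    import numpy as np

    H, W = 480, 640  # assumed sensor dimensions

    def expose_through_filter(color):
        # Stand-in for one monochrome exposure taken through a colored filter.
        return np.random.randint(0, 256, (H, W), dtype=np.uint8)

    # One exposure per primary; the subject must hold still between shots.
    r = expose_through_filter("red")
    g = expose_through_filter("green")
    b = expose_through_filter("blue")

    # Stacking the three exposures yields full color at every pixel,
    # with no demosaicing required.
    rgb = np.dstack([r, g, b])   # shape (480, 640, 3)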

The third method is called scanning because the sensor moves across the focal plane much like the sensor of a desktop scanner. These scanning backs use linear or tri-linear sensors: a single line of photosensors, or three lines for the three colors. In some cases, scanning is accomplished by rotating the whole camera; a digital rotating line camera offers images of very high total resolution.

The choice of method for a given capture is, of course, determined largely by the subject matter. It is usually inappropriate to capture a moving subject with anything but a single-shot system. However, the higher color fidelity and larger file sizes and resolutions available with multi-shot and scanning backs make them attractive for commercial photographers working with stationary subjects and large-format photographs.

Recently, dramatic improvements in single-shot cameras and RAW image file processing have made single-shot, CCD-based cameras almost completely predominant in commercial photography, not to mention digital photography as a whole. CMOS-based single-shot cameras are also somewhat common.


Filter mosaics, interpolation, and aliasing

[image: The Bayer arrangement of color filters on the pixel array of an image sensor]

In most current consumer digital cameras, a Bayer filter mosaic is used in combination with an optical anti-aliasing filter to reduce the aliasing caused by the reduced sampling of the different primary-color images. A demosaicing algorithm is used to interpolate color information to create a full array of RGB image data.

Cameras that use a beam-splitter single-shot 3CCD approach, a three-filter multi-shot approach, or a Foveon X3 sensor use neither anti-aliasing filters nor demosaicing.

Firmware in the camera, or software in a raw converter program such as Adobe Camera Raw, interprets the raw data from the sensor to obtain a full-color image, because the RGB color model requires three intensity values for each pixel: one each for red, green, and blue (other color models, when used, also require three or more values per pixel). A single sensor element cannot record these three intensities simultaneously, so a color filter array (CFA) must be used to selectively filter a particular color for each pixel.

The Bayer filter pattern is a repeating 2×2 mosaic pattern of light filters, with green ones at opposite corners and red and blue in the other two positions. The high proportion of green takes advantage of properties of the human visual system, which determines brightness mostly from green and is far more sensitive to brightness than to hue or saturation. Sometimes a 4-color filter pattern is used, often involving two different hues of green. This provides potentially more accurate color, but requires a slightly more complicated interpolation process.

The color intensity values not captured for each pixel are interpolated (in effect, estimated) from the values of adjacent pixels that represent the color being calculated.
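
The sketch below shows the simplest form of this interpolation: bilinear demosaicing of an RGGB mosaic via normalized convolution. Real demosaicers are far more sophisticated (edge-aware, noise-adaptive), so treat this only as a statement of the idea.

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_rggb(mosaic):
        # mosaic: 2-D array sampled through an RGGB Bayer pattern:
        # R at (even, even), B at (odd, odd), G elsewhere.
        h, w = mosaic.shape
        rows, cols = np.indices((h, w))
        masks = [
            (rows % 2 == 0) & (cols % 2 == 0),   # red sample locations
            (rows % 2) != (cols % 2),            # green sample locations
            (rows % 2 == 1) & (cols % 2 == 1),   # blue sample locations
        ]
        kernel = np.array([[1., 2., 1.],
                           [2., 4., 2.],
                           [1., 2., 1.]])
        channels = []
        for m in masks:
            known = mosaic.astype(float) * m
            # Normalized convolution: weighted sum of the known same-color
            # neighbors, divided by the weights of neighbors actually present.
            num = convolve(known, kernel, mode="mirror")
            den = convolve(m.astype(float), kernel, mode="mirror")
            ch = num / den
            ch[m] = mosaic[m]    # keep the measured samples untouched
            channels.append(ch)
        return np.dstack(channels)  # (h, w, 3) interpolated RGB image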


Connectivity

Many digital cameras can connect directly to a computer to transfer data:

Early cameras used the PC serial port. USB is now the most widely used method (most cameras appear as USB mass storage devices), though some have a FireWire port. Some cameras use the USB PTP mode for connection instead of USB MSC; some offer both modes.
Other cameras use wireless connections, via Bluetooth or IEEE 802.11 Wi-Fi, such as the Kodak EasyShare One.
A common alternative is the use of a card reader which may be capable of reading several types of storage media, as well as high speed transfer of data to the computer. Use of a card reader also avoids draining the camera battery during the download process, as the device takes power from the USB port. An external card reader allows convenient direct access to the images on a collection of storage media. But if only one storage card is in use, moving it back and forth between the camera and the reader can be inconvenient.

Many modern cameras offer the PictBridge standard, which allows sending data directly to printers without the need of a computer.

Integration

Many devices include built-in digital cameras. For example, mobile phones often include digital cameras; those that do are sometimes known as camera phones. Other small electronic devices (especially those used for communication), such as PDAs, laptops, and BlackBerry devices, often contain an integral digital camera. Some digital camcorders also contain a built-in still camera.

Due to the limited storage capacity and general emphasis on convenience rather than image quality in such integrated or converged devices, the vast majority of these devices store images in the lossy but compact JPEG file format.


Storage

Most cameras use some form of removable storage to store data. While the vast majority of media types are some form of flash memory (CompactFlash, SD, etc.), some storage methods use other technologies, such as Microdrives (very small hard disk drives), Mini CDs (185 MB), and 3.5-inch floppy disks.

Although JPEG is the most common method of compressing image data, there are other methods such as TIFF and RAW (the latter being highly non-standardized across brands, and even across models within a brand). Most cameras also record Exif data that provides metadata about the picture, such as aperture, exposure time, focal length, date and time taken, and camera model.
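
Exif tags can be read with widely available libraries. For example, a minimal sketch using the Pillow library (assuming it is installed and the file actually carries Exif data; the file name is hypothetical, and camera-setting tags such as exposure may sit in a sub-IFD depending on library version):

    from PIL import Image, ExifTags

    img = Image.open("photo.jpg")          # hypothetical file name
    exif = img.getexif()                   # top-level Exif tags
    for tag_id, value in exif.items():
        name = ExifTags.TAGS.get(tag_id, tag_id)  # numeric ID -> readable name
        print(f"{name}: {value}")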

Removable storage technologies include the following:

CompactFlash (CF-I)

Memory Stick

Microdrive (CF-II)

MultiMediaCard (MMC)

MiniSD Card

microSD

Secure Digital card (SD)

SmartMedia

USB flash drive

xD-Picture Card (xD)

3.5" floppy disks

Mini CD

Other formats include:

Onboard flash memory — cheap cameras and cameras secondary to the device's main use (such as camera phones)
Video Floppy — a 2 × 2 inch (50 mm × 50 mm) floppy disk used by early analog cameras
PC Card hard drives — early professional cameras (discontinued)
Thermal printer — known from only one camera model, which printed images immediately rather than storing them
FP Memory — a 2–4 MB serial flash memory, known from the Mustek/Relisys Dimera low-end cameras

Most manufacturers of digital cameras do not provide drivers and software to allow their cameras to work with Linux or other free software. Still, many cameras use the standard USB storage protocol and are thus easily usable. Other cameras are supported by the gPhoto project.


Batteries

Digital cameras have high power requirements and have, over time, become increasingly small, resulting in an ongoing need for batteries small enough to fit in the camera yet able to power it for a reasonable length of time.

Essentially, the batteries digital cameras use fall into two broad divisions.


Off-the-shelf

The first consists of batteries in established off-the-shelf form factors, most commonly AA, CR2, or CR-V3, with AAA batteries in a handful of cameras. The CR2 and CR-V3 batteries are lithium-based and intended for single use; they are also commonly seen in camcorders. AA batteries are far more common, but non-rechargeable alkaline AAs provide enough power for only a very short time in most cameras. Most consumers instead use AA nickel-metal hydride (NiMH) batteries (see also chargers and batteries), which provide adequate power and are rechargeable. NiMH batteries do not provide as much power as lithium-ion batteries, and they also tend to discharge when not used. They are available in various ampere-hour (Ah) or milliampere-hour (mAh) ratings, which affect how long they last in use. Typically, mid-range consumer models and some low-end cameras use off-the-shelf batteries; only a very few DSLRs accept them (for example, the Sigma SD10). Rechargeable RCR-V3 lithium-ion batteries are also available as an alternative to non-rechargeable CR-V3 batteries.


Proprietary

The second division is proprietary battery formats, built to a manufacturer's custom specifications; replacements can be either OEM or aftermarket parts. Almost all proprietary batteries are lithium-ion. Although they accept only a limited number of recharge cycles before capacity begins to degrade (typically up to 500 cycles), they provide considerable performance for their size. As a result, at the two ends of the spectrum, both high-end professional cameras and low-end consumer models tend to use lithium-ion batteries.


Autonomous devices

An autonomous device, such as a PictBridge printer, operates without the need for a computer: the camera connects to the printer, which then downloads and prints its images. Some DVD recorders and television sets can also read memory cards, and several types of flash card readers have a TV output capability.

Formats

Common formats for digital camera images are the Joint Photographic Experts Group standard (JPEG) and the Tagged Image File Format (TIFF).

Many cameras, especially professional or DSLR cameras, support a raw format. A raw image is the unprocessed set of pixel data directly from the camera's sensor. Raw images are often saved in formats proprietary to each manufacturer, such as NEF for Nikon, CR2 for Canon, and MRW for Minolta. Adobe Systems has released the DNG format, a royalty-free raw image format that has been adopted by a few camera manufacturers.


Raw files initially had to be processed in specialized image-editing programs, but over time many mainstream editing programs, such as Google's Picasa, have added support for them. Editing raw-format images allows much more flexibility in settings such as white balance, exposure compensation, color temperature, and so on. In essence, the raw format allows the photographer to make major adjustments, without losing image quality, that would otherwise require retaking the picture.
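
White balance is a good example of why this works: on linear raw data it is just a per-channel gain, so it can be re-chosen after the fact with no loss. A sketch, with invented data and gain values:

    import numpy as np

    raw_rgb = np.random.rand(480, 640, 3)   # stand-in for linear raw sensor data

    def apply_white_balance(linear, gains):
        # gains = (r_gain, g_gain, b_gain); green is usually the reference.
        return np.clip(linear * np.asarray(gains), 0.0, 1.0)

    daylight = apply_white_balance(raw_rgb, (2.0, 1.0, 1.5))
    tungsten = apply_white_balance(raw_rgb, (1.4, 1.0, 2.4))
    # Both renderings come from the same raw data with no quality penalty,
    # unlike re-balancing an already gamma-encoded, compressed JPEG.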

Formats for movies are AVI, DV, MPEG, MOV (often containing motion JPEG), WMV, and ASF (basically the same as WMV). Recent formats include MP4, which is based on the QuickTime format and uses newer compression algorithms to allow longer recording times in the same space.

Other formats used in cameras, but not for the pictures themselves, are the Design rule for Camera File system (DCF), an industry specification for the camera's internal file structure and naming; the Digital Print Order Format (DPOF), which dictates in what order images are to be printed and how many copies; and the Exchangeable Image File Format (Exif), which uses metadata tags to document camera settings and the date and time for image files.
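
As a rough illustration of DCF-style naming (a DCIM root, directories of a three-digit number plus five characters, and file names of four characters plus a four-digit number), here is a small sketch; the helper function is illustrative, not part of any vendor API, and the structure shown follows the common convention rather than quoting the specification:

    import os

    def dcf_path(dir_number, dir_suffix, file_prefix, file_number, ext="JPG"):
        directory = f"{dir_number:03d}{dir_suffix:<5.5}"          # e.g. 100NIKON
        filename = f"{file_prefix:<4.4}{file_number:04d}.{ext}"   # e.g. DSC_0001.JPG
        return os.path.join("DCIM", directory, filename)

    print(dcf_path(100, "NIKON", "DSC_", 1))   # DCIM/100NIKON/DSC_0001.JPG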