An image intensifier or image intensifier tube is a vacuum tube device for increasing the intensity of available light in an optical system to allow use under low-light conditions, such as at night; to facilitate visual imaging of low-light processes, such as the fluorescence of materials under X-rays or gamma rays (X-ray image intensifier); or to convert non-visible light sources, such as near-infrared or shortwave infrared, to visible light. Image intensifiers operate by converting photons of light into electrons, amplifying those electrons (usually with a microchannel plate), and then converting the amplified electrons back into photons for viewing. They are used in devices such as night vision goggles.
Image intensifier tubes (IITs) are optoelectronic devices that allow many devices, such as night vision devices and medical imaging devices, to function. They convert low levels of light from various wavelengths into visible quantities of light at a single wavelength.
Image intensifiers convert low levels of light photons into electrons, amplify those electrons, and then convert the electrons back into photons of light. Photons from a low-light source enter an objective lens, which focuses an image onto a photocathode. The photocathode releases electrons via the photoelectric effect as the incoming photons strike it. The electrons are accelerated through a high-voltage potential into a microchannel plate (MCP). Each high-energy electron that strikes the MCP causes the release of many electrons from the MCP in a process called secondary cascaded emission. The MCP is made up of thousands of tiny conductive channels, tilted at a slight angle away from the plate normal to encourage more electron collisions and thus enhance the emission of secondary electrons in a controlled electron avalanche.
All the electrons move in a straight line due to the high-voltage difference across the plates, which preserves collimation, and where one or two electrons entered, thousands may emerge. A separate (lower) charge differential accelerates the secondary electrons from the MCP until they hit a phosphor screen at the other end of the intensifier, which releases a photon for every electron. The image on the phosphor screen is focused by an eyepiece lens. The amplification occurs at the microchannel plate stage via its secondary cascaded emission. The phosphor is usually green because the human eye is more sensitive to green than other colors and because historically the original material used to produce phosphor screens produced green light (hence the soldiers' nickname 'green TV' for image intensification devices).
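The photon-to-electron-to-photon chain described above can be sketched numerically. The function and all values below are illustrative assumptions for the purpose of the sketch (quantum efficiency, MCP gain, and phosphor yield vary widely between tube designs and generations):

```python
# Back-of-the-envelope model of an image intensifier's gain chain:
# photocathode -> MCP -> phosphor screen. Numbers are illustrative only.
def photon_gain(quantum_efficiency, mcp_gain, photons_per_electron):
    electrons = quantum_efficiency           # photoelectrons per incoming photon
    electrons *= mcp_gain                    # secondary cascaded emission in the MCP
    return electrons * photons_per_electron  # phosphor converts electrons back to light

# Assumed values: 20% QE, MCP gain of 10,000, ~20 phosphor photons per electron
print(photon_gain(0.20, 10_000, 20))
```

Even with modest assumed values at each stage, the multiplicative chain yields an overall photon gain in the tens of thousands, which is why a scene lit only by starlight can produce a visible image.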
The development of image intensifier tubes began in the early 20th century and has continued ever since.
The idea of an image tube was first proposed by G. Holst and H. De Boer in 1928, in the Netherlands, but early attempts to create one were not successful. It was not until 1934 that Holst, working for Philips, created the first successful infrared converter tube. This tube consisted of a photocathode in close proximity to a fluorescent screen. Using a simple lens, an image was focused on the photocathode, and a potential difference of several thousand volts was maintained across the tube, causing electrons dislodged from the photocathode by photons to strike the fluorescent screen. This caused the screen to light up with the image of the object focused onto it; however, the image was non-inverting. With this image-converter tube, it was possible for the first time to view infrared light in real time.
Development continued in the US as well during the 1930s, and in the mid-1930s the first inverting image intensifier was developed at RCA. This tube used an electrostatic inverter to focus an image from a spherical cathode onto a spherical screen. (Spheres were chosen to reduce off-axis aberrations.) Subsequent development of this technology led directly to the first Generation 0 image intensifiers, which were used by the military during World War II to allow vision at night with infrared lighting for both shooting and personal night vision. The first military night vision devices were introduced by the German army as early as 1939, having been in development since 1935. Early night vision devices based on these technologies were used by both sides in World War II. However, the downside of active night vision (in which infrared illumination is used) is that it is quite obvious to anyone else using the technology.
Unlike later technologies, early Generation 0 night vision devices were unable to significantly amplify the available ambient light and so, to be useful, required an infrared source. These devices used an S1, or "silver-oxygen-caesium", photocathode, discovered in 1930, which had a sensitivity of around 60 μA/lm (microamperes per lumen) and a quantum efficiency of around 1% in the ultraviolet region and around 0.5% in the infrared region. Of note, the S1 photocathode had sensitivity peaks in both the infrared and ultraviolet spectrum, and with its response extending beyond 950 nm it was the only photocathode material that could be used to view infrared light above 950 nm.
Solar blind photocathodes were not of direct military use and are not covered by the "generations". Discovered in 1953 by Taft and Apker, they were originally made from cesium telluride. The characteristic of "solar blind" photocathodes is a response only below 280 nm in the ultraviolet spectrum, below the wavelengths of sunlight that pass through the atmosphere.
With the discovery of more effective photocathode materials, which increased in both sensitivity and quantum efficiency, it became possible to achieve significant levels of gain over Generation 0 devices. In 1936, the S-11 cathode (cesium-antimony) was discovered by Gorlich, which provided a sensitivity of approximately 80 μA/lm with a quantum efficiency of around 20%; this sensitivity was limited to the visible region, with a threshold wavelength of approximately 650 nm.
It was not until the development of the bialkali antimonide photocathodes (potassium-cesium-antimony and sodium-potassium-antimony), discovered by A. H. Sommer, and his later multialkali S20 photocathode (sodium-potassium-antimony-cesium), discovered by accident in 1956, that tubes had both suitable infrared sensitivity and sufficient visible-spectrum amplification to be useful militarily. The S20 photocathode has a sensitivity of around 150 to 200 μA/lm. The additional sensitivity made these tubes usable with limited light, such as moonlight, while still being suitable for use with low-level infrared illumination.
Although originally experimented with by the Germans in World War II, it was not until the 1950s that the U.S. began conducting early experiments using multiple tubes in a "cascade", coupling the output of an inverting tube to the input of another tube, which allowed for increased amplification of the light from the object being viewed. These experiments worked far better than expected, and night vision devices based on these tubes were able to pick up faint starlight and produce a usable image. However, these tubes, at 17 in (43 cm) long and 3.5 in (8.9 cm) in diameter, were too large for military use. Known as "cascade" tubes, they provided the capability to produce the first truly passive night vision scopes. With the advent of fiber optic bundles in the 1960s, it was possible to connect smaller tubes together, which allowed the first true Starlight scopes to be developed in 1964. Many of these tubes were used in the AN/PVS-2 rifle scope, which saw use in Vietnam.
An alternative to the cascade tube explored in the mid 20th century involves optical feedback, with the output of the tube fed back into the input. This scheme has not been used in rifle scopes, but it has been used successfully in lab applications where larger image intensifier assemblies are acceptable.
Second generation image intensifiers use the same multialkali photocathode that the first generation tubes used; however, by using thicker layers of the same materials, the S25 photocathode was developed, which provides extended red response and reduced blue response, making it more suitable for military applications. It has a typical sensitivity of around 230 μA/lm and a higher quantum efficiency than the S20 photocathode material. Oxidation of the cesium to cesium oxide in later versions improved the sensitivity in a similar way to third generation photocathodes. The same technology that produced the fiber optic bundles that allowed the creation of cascade tubes also allowed, with a slight change in manufacturing, the production of microchannel plates, or MCPs. The microchannel plate is a thin glass wafer with a Nichrome electrode on either side, across which a large potential difference of up to 1,000 volts is applied.
The wafer is manufactured from many thousands of individual hollow glass fibers, aligned at a "bias" angle to the axis of the tube. The micro-channel plate fits between the photocathode and screen. Electrons that strike the side of the "micro-channel" as they pass through it elicit secondary electrons, which in turn elicit additional electrons as they too strike the walls, amplifying the signal. By using the MCP with a proximity focused tube, amplifications of up to 30,000 times with a single MCP layer were possible. By increasing the number of layers of MCP, additional amplification to well over 1,000,000 times could be achieved.
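The multiplicative nature of the cascade can be sketched as a simple power law. The secondary yield per wall collision and the number of collisions below are assumed illustrative values chosen to land near the figures quoted above, not measured MCP parameters:

```python
def mcp_gain(secondary_yield, wall_collisions):
    """Each wall collision multiplies the electron count by the secondary yield."""
    return secondary_yield ** wall_collisions

# Assumed: yield of 2 electrons per collision.
single = mcp_gain(2, 15)        # one plate, 15 collisions -> ~30,000x
stacked = mcp_gain(2, 10) ** 2  # two stacked plates at lower per-plate gain
print(single, stacked)
```

With these assumptions a single plate gives 32,768x, near the ~30,000x figure in the text, and two stacked plates exceed 1,000,000x, illustrating why adding MCP layers raises amplification so sharply.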
Inversion of Generation 2 devices was achieved in one of two ways. The inverter tube uses electrostatic inversion, in the same manner as the first generation tubes, with an MCP included. Proximity focused second generation tubes could also be inverted by using a fiber bundle with a 180-degree twist in it.
While the third generation of tubes was fundamentally the same as the second generation, it possessed two significant differences. Firstly, third generation tubes used a GaAs-CsO-AlGaAs photocathode, which is more sensitive in the 800-900 nm range than second-generation photocathodes. Secondly, the photocathode exhibits negative electron affinity (NEA): the cesium oxide layer at the edge of the photocathode causes sufficient band-bending that photoelectrons excited to the conduction band can escape freely to the vacuum level. This makes the photocathode very efficient at creating photoelectrons from photons. The Achilles' heel of third generation photocathodes, however, is that they are seriously degraded by positive ion poisoning. Due to the high electrostatic field stresses in the tube and the operation of the microchannel plate, the photocathode could fail within a short period, with as little as 100 hours of use before sensitivity dropped below Gen 2 levels. To protect the photocathode from positive ions and gases produced by the MCP, manufacturers introduced a thin film of sintered aluminium oxide attached to the MCP. The high sensitivity of this photocathode, greater than 900 μA/lm, allows more effective low-light response, though this was offset by the thin film, which typically blocked up to 50% of electrons.
Although not formally recognized under the U.S. generation categories, Super Second Generation, or SuperGen, was developed in 1989 by Jacques Dupuy and Gerald Wolzak. This technology improved the tri-alkali photocathodes to more than double their sensitivity, while also improving the microchannel plate by increasing the open-area ratio to 70% and reducing the noise level. This allowed second generation tubes, which are more economical to manufacture, to achieve results comparable to third generation image intensifier tubes. With photocathode sensitivities approaching 700 μA/lm and frequency response extended to 950 nm, this technology continued to be developed outside of the U.S., notably by Photonis, and now forms the basis for most non-US manufactured high-end night vision equipment.
In 1998, the US company Litton developed the filmless image tube. These tubes were originally made for the Omni V contract and generated significant interest from the US military. However, the tubes suffered greatly from fragility during testing and, by 2002, the NVESD revoked the fourth generation designation for filmless tubes, at which time they simply became known as Gen III Filmless. These tubes are still produced for specialist uses, such as aviation and special operations; however, they are not used for weapon-mounted purposes. To overcome the ion-poisoning problems, manufacturers improved scrubbing techniques during manufacture of the MCP (the primary source of positive ions in a wafer tube) and implemented autogating, discovering that a sufficient period of autogating would cause positive ions to be ejected from the photocathode before they could cause photocathode poisoning.
Generation III Filmless technology is still in production and use today, but officially, there is no Generation 4 of image intensifiers.
Following the issues experienced with Generation 4 technology, Thin Film technology, also known as Generation 3 Omni VII or Generation 3+, became the standard for current image intensifiers. In Thin Film image intensifiers, the thickness of the film is reduced from around 30 angstroms (standard) to around 10 angstroms, and the photocathode voltage is lowered. This causes fewer electrons to be stopped than in standard third generation tubes, while providing the benefits of a filmed tube.
Generation 3 Thin Film technology is presently the standard for most image intensifiers used by the US military.
In 2014, European image tube manufacturer PHOTONIS released the first global, open performance specification, "4G". The specification had four main requirements that an image intensifier tube would have to meet.
There are several common terms used for image intensifier tubes.
Electronic Gating (or 'gating') is a means by which an image intensifier tube may be switched ON and OFF in a controlled manner. An electronically gated image intensifier tube functions like a camera shutter, allowing images to pass through when the electronic "gate" is enabled. The gating durations can be very short (nanoseconds or even picoseconds). This makes gated image intensifier tubes ideal candidates for use in research environments where very short duration events must be photographed. As an example, in order to assist engineers in designing more efficient combustion chambers, gated imaging tubes have been used to record very fast events such as the wavefront of burning fuel in an internal combustion engine.
Often gating is used to synchronize imaging tubes to events whose start cannot be controlled or predicted. In such an instance, the gating operation may be synchronized to the start of an event using 'gating electronics', e.g. high-speed digital delay generators. The gating electronics allows a user to specify when the tube will turn on and off relative to the start of an event.
There are many examples of the uses of gated imaging tubes. Because of the combination of the very high speeds at which a gated tube may operate and their light amplification capability, gated tubes can record specific portions of a beam of light. It is possible to capture only the portion of light reflected from a target, when a pulsed beam of light is fired at the target, by controlling the gating parameters. Gated-Pulsed-Active Night Vision (GPANV) devices are another example of an application that uses this technique. GPANV devices can allow a user to see objects of interest that are obscured behind vegetation, foliage, and/or mist. These devices are also useful for locating objects in deep water, where reflections of light off of nearby particles from a continuous light source, such as a high brightness underwater floodlight, would otherwise obscure the image.
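The timing behind pulsed, range-gated imaging of this kind comes down to round-trip light travel time: the gate is opened only when light reflected from the desired range can be arriving. The sketch below is a hypothetical illustration of that calculation, not part of any particular GPANV system:

```python
C_M_PER_S = 299_792_458  # speed of light in vacuum, m/s

def gate_delay_seconds(target_range_m, refractive_index=1.0):
    """Round-trip travel time of a light pulse to a target and back.
    refractive_index > 1 models a denser medium such as water (~1.33)."""
    return 2 * target_range_m * refractive_index / C_M_PER_S

# A target 150 m away in air: open the gate roughly a microsecond after the pulse
print(round(gate_delay_seconds(150) * 1e9))  # delay in nanoseconds
```

Light reflected by mist or suspended particles closer than the target arrives before the gate opens and is simply never amplified, which is why gating suppresses backscatter so effectively.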
Auto-gating is a feature found in many image intensifier tubes manufactured for military purposes after 2006, though the technique has been around for some time. Autogated tubes gate the image intensifier on and off rapidly to control the amount of light that reaches the microchannel plate. The gating occurs at high frequency, and by varying the duty cycle to maintain a constant current draw from the microchannel plate, it is possible to operate the tube during brighter conditions, such as daylight, without damaging the tube or causing premature failure. Auto-gating of image intensifiers is militarily valuable as it allows extended operational hours, giving enhanced vision during twilight while providing better support for soldiers who encounter rapidly changing lighting conditions, such as those assaulting a building.
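The duty-cycle regulation described above can be thought of as a simple feedback loop. The controller below is a minimal hypothetical sketch (the function, its step size, and limits are assumptions, not how any real tube's electronics are implemented):

```python
def next_duty_cycle(screen_current, target_current, duty, step=0.05):
    """One feedback step: shrink the 'on' fraction of each gating cycle when the
    measured current exceeds the target (bright scene), grow it when the scene
    is dark. Clamps keep the duty cycle within a sane range."""
    if screen_current > target_current:
        return max(duty - step, 0.001)  # gate harder in daylight
    if screen_current < target_current:
        return min(duty + step, 1.0)    # open up at night
    return duty

print(next_duty_cycle(2.0, 1.0, 0.50))  # brighter than target -> shorter gate
print(next_duty_cycle(0.2, 1.0, 0.50))  # darker than target  -> longer gate
```

Because the loop runs at high frequency, the effective exposure tracks scene brightness quickly, which is the behaviour that protects the tube when a soldier moves from darkness into a lit room.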
The sensitivity of an image intensifier tube is measured in microamperes per lumen (μA/lm). It defines how many electrons are produced per quantity of light that falls on the photocathode. This measurement should be made at a specific color temperature, such as "at a colour temperature of 2854 K". The color temperature at which this test is made tends to vary slightly between manufacturers. Additional measurements at specific wavelengths are usually also specified, especially for Gen 2 devices, such as at 800 nm and 850 nm (infrared).
Typically, the higher the value, the more sensitive the tube is to light.
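Since sensitivity is simply current per unit of luminous flux, comparing photocathodes reduces to a multiplication. The values below are taken from the sensitivities quoted earlier in this article; the function itself is an illustrative sketch:

```python
def photocurrent_microamps(sensitivity_ua_per_lm, flux_lumens):
    """Photocathode current produced for a given luminous flux."""
    return sensitivity_ua_per_lm * flux_lumens

# Same one-microlumen flux on a Gen 3-class cathode (~900 uA/lm)
# versus a Generation 0 S1 cathode (~60 uA/lm):
print(photocurrent_microamps(900, 1e-6))
print(photocurrent_microamps(60, 1e-6))
```

For the same faint scene, the Gen 3 cathode delivers fifteen times the photocurrent, which is the head start its image gets before any MCP amplification occurs.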
More accurately known as limiting resolution, tube resolution is measured in line pairs per millimeter, or lp/mm. This is a measure of how many lines of varying intensity (light to dark) can be resolved within a millimeter of screen area. The limiting resolution is itself derived from the modulation transfer function: for most tubes, it is defined as the spatial frequency at which the modulation transfer function drops to three percent or less. The higher the value, the higher the resolution of the tube.
An important consideration, however, is that this figure is per millimeter of screen and does not account for the physical screen size. As such, an 18 mm tube with a resolution of around 64 lp/mm has a higher overall resolution than an 8 mm tube with 72 lp/mm resolution. Resolution is usually measured at the centre and at the edge of the screen, and tubes often come with figures for both. Military Specification or milspec tubes only come with a criterion such as "> 64 lp/mm" or "Greater than 64 line pairs/millimeter".
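The screen-size effect can be made concrete by multiplying out the total line pairs across each screen, using the two example tubes from the text:

```python
def total_line_pairs(screen_diameter_mm, lp_per_mm):
    """Overall resolution: line pairs resolvable across the full screen diameter."""
    return screen_diameter_mm * lp_per_mm

print(total_line_pairs(18, 64))  # 18 mm tube at 64 lp/mm
print(total_line_pairs(8, 72))   # 8 mm tube at 72 lp/mm
```

The 18 mm tube resolves 1,152 line pairs across its screen against 576 for the 8 mm tube, so the tube with the lower lp/mm figure still carries twice the overall detail.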
The gain of a tube is typically measured using one of two units. The most common (SI) unit is cd·m⁻²·lx⁻¹, i.e. candelas per square meter per lux. The older convention is fl/fc (foot-lamberts per foot-candle). This creates issues with comparative gain measurements since neither is a pure ratio, although both are measured as a value of output intensity over input intensity. This creates ambiguity in the marketing of night vision devices, as the difference between the two measurements is effectively pi, or approximately 3.142x. This means that a gain of 10,000 cd·m⁻²·lx⁻¹ is the same as approximately 31,416 fl/fc.
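The factor of pi between the two conventions follows from the unit definitions, and makes the conversion a one-line function. This is a sketch of the arithmetic, not a claim about any particular manufacturer's datasheet practice:

```python
import math

def si_gain_to_fl_per_fc(gain_cd_m2_lx):
    """Convert gain in cd·m^-2·lx^-1 to foot-lamberts per foot-candle.
    1 fL is about 3.426 cd/m^2 and 1 fc is about 10.764 lx, and
    10.764 / 3.426 is pi, so the two gain figures differ by exactly pi."""
    return gain_cd_m2_lx * math.pi

print(round(si_gain_to_fl_per_fc(10_000)))  # the example gain from the text
```

The practical caution is that a tube advertised with a gain in fl/fc will always look about three times better on paper than the same tube quoted in SI units.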
This value, expressed in hours, gives an idea of how long a tube typically should last. It is a reasonably common comparison point; however, it takes many factors into account. The first is that tubes are constantly degrading: over time, a tube will slowly produce less gain than it did when new. When the tube's gain reaches 50% of its original level, the tube is considered to have failed, so this figure primarily reflects that point in a tube's life.
Additional considerations for the tube lifespan are the environment that the tube is being used in and the general level of illumination present in that environment, including bright moonlight and exposure to both artificial lighting and use during dusk/dawn periods, as exposure to brighter light reduces a tube's life significantly.
Also, an MTBF only includes operational hours. Turning a tube on or off is not considered to reduce its overall lifespan, so many civilian users turn their night vision equipment on only when needed, to make the most of the tube's life. Military users tend to keep equipment on for longer periods, typically the entire time it is in use, with battery life rather than tube life being the primary concern.
Typical examples of tube life are:
First Generation: 1,000 hrs
Second Generation: 2,000 to 2,500 hrs
Third Generation: 10,000 to 15,000 hrs
Many recent high-end second-generation tubes now have MTBFs approaching 15,000 operational hours.
The modulation transfer function (MTF) of an image intensifier measures the output amplitude of dark and light lines on the display for a given level of input from lines presented to the photocathode at different resolutions. It is usually given as a percentage at a given frequency (spacing) of light and dark lines. For example, if a tube has an MTF of 99% at 2 lp/mm, then the output dark and light lines will be 99% as dark or light as when viewing a solid black or solid white image. This value decreases as resolution increases: on the same tube, if the MTF at 16 and 32 lp/mm were 50% and 3%, then at 16 lp/mm the signal would be only half as bright/dark as the lines were at 2 lp/mm, and at 32 lp/mm the image of the lines would be only three percent as bright/dark as they were at 2 lp/mm.
Additionally, since the limiting resolution is usually defined as the point at which the MTF is three percent or less, this would also be the maximum resolution of the tube. The MTF is affected by every part of an image intensifier tube's operation and on a complete system is also affected by the quality of the optics involved. Factors that affect the MTF include transition through any fiber plate or glass, at the screen and the photocathode and also through the tube and the microchannel plate itself. The higher the MTF at a given resolution, the better.
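The MTF values from the example above translate directly into displayed contrast. This sketch treats the input test pattern as fully modulated (pure black and white lines) and uses the illustrative MTF figures from the text:

```python
def output_modulation(input_modulation, mtf):
    """Contrast of the displayed line pattern, with modulation and MTF
    both expressed as fractions between 0 and 1."""
    return input_modulation * mtf

# Fully modulated test patterns through the example tube from the text:
for lp_mm, mtf in [(2, 0.99), (16, 0.50), (32, 0.03)]:
    print(lp_mm, "lp/mm ->", output_modulation(1.0, mtf))
```

At 32 lp/mm the displayed contrast is down to 3%, the conventional cut-off, so that frequency is also this example tube's limiting resolution.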