The Universe is full of mysteries, and the real task is to dig through them to gather new pieces of information. Space agencies do a great job of keeping us updated with new discoveries about the universe. The question that arises from a science enthusiast's point of view is: how do they get the data for their interpretations? The obvious answer is satellites, isn't it? True. But is a plain image enough to provide the data needed for interpretation? The answer is ambiguous, yet you will agree that a plain image is difficult to interpret on its own. So how do they interpret so accurately? This is where False-Color Imaging plays a major role.
Image caption: Natural- and false-color images of Earth from NASA’s MESSENGER mission (captured on its way to Mercury) show plant-covered land, from the Amazon rainforest to North American forests.
What is False-Color Imaging?
False color refers to a group of color-rendering methods used to display images in color that were recorded in the visible or non-visible parts of the electromagnetic spectrum. A false-color image depicts an object in colors that differ from those a photograph (a true-color image) would show.
In addition, variants of false color such as pseudocolor, density slicing, and choropleths are used for information visualization of either data gathered by a single grayscale channel or data not depicting parts of the electromagnetic spectrum (e.g. elevation in relief maps or tissue types in magnetic resonance imaging).
False-Color Rendering
In contrast to a true-color image, a false-color image sacrifices natural color rendition in order to ease the detection of features that are not readily discernible otherwise – for example, the use of near-infrared for the detection of vegetation in satellite images. While a false-color image can be created using solely the visual spectrum (e.g. to accentuate color differences), typically some or all data used is from electromagnetic radiation (EM) outside the visual spectrum (e.g. infrared, ultraviolet or X-ray). The choice of spectral bands is governed by the physical properties of the object under investigation.
As the human eye uses three spectral bands, three spectral bands are commonly combined into a false-color image. At least two spectral bands are needed for a false-color encoding, and it is possible to combine more bands into the three visual RGB bands – with the eye’s ability to discern three channels being the limiting factor. In contrast, a “color” image made from one spectral band, or an image made from data consisting of non-EM data (e.g. elevation, temperature, tissue type) is a pseudocolor image (see below).
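To illustrate the pseudocolor case, a single-channel dataset can be rendered in color simply by passing it through a colormap. The sketch below uses a synthetic "elevation" grid made up purely for the example; any one-band dataset (temperature, tissue density, etc.) would be handled the same way.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical single-channel data: a synthetic "elevation" grid.
x, y = np.meshgrid(np.linspace(-3, 3, 256), np.linspace(-3, 3, 256))
elevation = np.exp(-(x**2 + y**2))  # one grayscale channel

# Pseudocolor: map the single channel through a colormap instead of
# displaying it as shades of gray.
plt.imshow(elevation, cmap="terrain")
plt.colorbar(label="relative elevation")
plt.title("Pseudocolor rendering of a single-band dataset")
plt.show()
```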
For true color, the RGB channels (red “R”, green “G” and blue “B”) from the camera are mapped to the corresponding RGB channels of the image, yielding an “RGB→RGB” mapping. For false color, this relationship is changed. The simplest false-color encoding is to take an RGB image in the visible spectrum, but map it differently, e.g. “GBR→RGB”. For traditional false-color satellite images of Earth, an “NRG→RGB” mapping is used, with “N” being the near-infrared spectral band (and the blue spectral band being unused) – this yields the typical “vegetation in red” false-color images.
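To make the mapping concrete, here is a minimal sketch of the classic "NRG→RGB" composite. The band arrays (near_ir, red, green) are assumed to be co-registered 2-D arrays you have already loaded, for example from a Landsat scene; the names are illustrative and not tied to any particular library's API.

```python
import numpy as np

def false_color_composite(near_ir, red, green):
    """Stack near-infrared, red and green bands into the display
    R, G and B channels (the classic "vegetation in red" mapping).

    Inputs are assumed to be co-registered 2-D reflectance arrays;
    each band is rescaled to the 0-1 range for display.
    """
    def normalize(band):
        band = band.astype(np.float64)
        return (band - band.min()) / (band.max() - band.min() + 1e-12)

    # NRG -> RGB: near-infrared drives the red channel, so strongly
    # NIR-reflective vegetation appears bright red in the composite.
    return np.dstack([normalize(near_ir), normalize(red), normalize(green)])
```

A true-color composite would instead stack the red, green and blue bands in their natural order, i.e. the "RGB→RGB" mapping described above.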
False color is used (among others) for satellite and space images: Examples are remote sensing satellites, space telescopes (e.g. the Hubble Space Telescope) or space probes (e.g. Cassini-Huygens). Some spacecraft, with rovers (e.g. the Mars Science Laboratory Curiosity) being the most prominent examples, have the ability to capture approximate true-color images as well.
Image caption: The natural-color image at left shows southeast Florida in red, green, and blue light. The false-color image in the middle combines SWIR, NIR, and green light. At right, southeast Florida is shown in NIR, red, and green light.
Why do we need False-Color Imaging?
- The human eye can distinguish only about 30 shades of gray, so extracting information visually from a grayscale image is difficult and yields relatively little information.
- A true-color composite looks like what we see in real life, but extracting detailed information such as the type of vegetation, soil, or rock becomes tricky without experience in the subject matter, and is sometimes simply not possible.
- Thankfully, every object has its own characteristic response across the range of wavelengths. By recording information at wavelengths beyond what our eyes can see, we can identify features that would otherwise remain invisible, and to visualize that information we map it onto the visible channels (red, green, blue). Some combinations of bands produce an image that looks very different from what we usually see and highlights new information, as sketched below.
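One standard way of turning that idea into a single number, not discussed in the article itself but widely used with the same near-infrared and red bands, is the normalized difference vegetation index (NDVI). A minimal sketch, assuming co-registered reflectance arrays:

```python
import numpy as np

def ndvi(near_ir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Healthy vegetation reflects strongly in the near-infrared and
    absorbs red light, so NDVI approaches +1 over dense plant cover
    and stays near 0 (or negative) over bare soil, rock, and water.
    Inputs are assumed to be co-registered 2-D reflectance arrays.
    """
    near_ir = near_ir.astype(np.float64)
    red = red.astype(np.float64)
    return (near_ir - red) / (near_ir + red + 1e-12)
```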
Video Courtesy – “Seeker”
Further Reading & Reference – https://eijournal.com/print/articles/how-to-interpret-a-false-color-satellite-image
Akshat Mishra is currently pursuing his doctoral degree in Physics from Lund University in Sweden. He feels the need to explore the depths of the not-so-dark universe while at the same time watch the quanta in action. Electronic Music is what puts him in the thinking zone.