By Adam Grossman on September 20, 2013.
As you know, images are composed of pixels. Grayscale images have a single value per pixel, usually ranging from 0 to 255: 0 represents black, 255 represents white, and the values in between are intermediate shades of gray.
Color images can be thought of as three separate grayscale images smooshed together:
Each image represents the red, green, and blue components respectively, and when combined they form all the intermediate colors. These three images are called the “color channels” of the resulting image.
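To make this concrete, here's a tiny numpy sketch (the pixel values are made up for illustration) showing that an RGB image is just three grayscale images stacked along the last axis:

```python
import numpy as np

# A tiny 2x2 RGB image: shape (height, width, 3), one 0-255 value per channel.
image = np.array([
    [[255,   0,   0], [  0, 255,   0]],   # a red pixel, a green pixel
    [[  0,   0, 255], [128, 128, 128]],   # a blue pixel, a gray pixel
], dtype=np.uint8)

# Indexing along the last axis gives the three grayscale "color channels".
red, green, blue = image[..., 0], image[..., 1], image[..., 2]

# Each channel is itself a 2x2 grayscale image.
print(red.shape)  # (2, 2)
```

Note that the gray pixel has equal values in all three channels; that detail matters later.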
I recently started a project at Dark Sky that abuses this system a bit. Instead of packing color information into the three channels, we pack data representing the weather at different times. In the blue channel we store an image of the temperature, say, at 8am. In the green channel we store the temperature 4 hours later (at noon). And in the red channel goes the temperature 4 hours later still (4pm).
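The packing step might look something like this sketch (the temperature grids and the 0-255 scaling function here are invented for illustration; the real pipeline's values would come from the forecast data):

```python
import numpy as np

# Hypothetical: scale temperatures in roughly -40..+60 C into the 0-255 byte
# range so they fit in an 8-bit image channel.
def to_byte(temp_c):
    return np.clip((temp_c + 40.0) * 255.0 / 100.0, 0, 255).astype(np.uint8)

# Three made-up 2x2 temperature grids, one per time.
temps_8am  = np.array([[10.0, 12.0], [20.0, 21.0]])
temps_noon = np.array([[14.0, 15.0], [20.0, 21.0]])
temps_4pm  = np.array([[18.0, 19.0], [20.0, 21.0]])

# Pack the three snapshots into the R, G, and B channels of a single image:
# blue = earliest time, green = middle, red = latest.
packed = np.stack([to_byte(temps_4pm),    # red channel
                   to_byte(temps_noon),   # green channel
                   to_byte(temps_8am)],   # blue channel
                  axis=-1)

print(packed.shape)  # (2, 2, 3): one RGB image carrying three times at once
```

One image, three forecast times: that's the whole trick.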
Doing this allows us to transmit the weather at multiple times all at once, improving efficiency a bit. But look at the end result:
Unexpectedly pretty! Notice the rainbow effect? What’s going on here?
Take a look at North America. It appears red because the temperature is rising and therefore has a higher value in the last image, the red channel, than it does in the first, the blue channel. Likewise, Asia is blue because the temperature is falling.
In other words, by packing sequential data into the color channels of the image, you get a great picture of how the temperature is changing! And take a look at the oceans… they’re pretty much gray, which means that their temperature is about equal at all three times (equal amounts of red, green, and blue mixed together form shades of gray). So we can see from this that the ocean temperatures don’t vary much during the day.
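You can even recover the trend programmatically: since red holds the latest temperature and blue the earliest, the sign of red minus blue says whether each pixel warmed, cooled, or stayed put. A small sketch with made-up packed values:

```python
import numpy as np

# A made-up packed image: red = latest temperature, blue = earliest.
packed = np.array([
    [[200, 160, 120], [120, 160, 200]],   # a warming pixel, a cooling pixel
    [[150, 150, 150], [150, 150, 150]],   # gray pixels: steady temperature
], dtype=np.uint8)

# Cast to int before subtracting so uint8 values don't wrap around.
trend = packed[..., 0].astype(int) - packed[..., 2].astype(int)

# +1 = warming (reddish), -1 = cooling (bluish), 0 = steady (gray)
print(np.sign(trend))
```

The gray pixels come out as exactly zero, which is why the oceans read as flat gray in the map.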
Neat huh? It’s not really useful, per se. It’s just a happy little accident that’s fun to share.