Disclaimer: I do not promise that any of this will be useful, nor do I take any responsibility for any headaches induced! :-)
Recommended reading, useful links and resources
For the "sensible" uses (those that produce images without the "meta-") of both Layer and Channel modes there are some tutorials and links already available. Be sure to check them out:
You may want to get started with Photoshop actions:
And last, but not least, I had invaluable use of Doug’s Stepwedge
Meta-this, meta-that and meta-images
My professional background is close to three decades in enterprise IT. In this world, we depend on meta-data, meaning "data about data". A short example will clarify.
Let us say we have a customer in our database. Name = "Doug Nelson"; RegisteredDate = 20040324. These items are regular DATA. We may have millions of customers and consequently millions of occurrences of names and dates.
Then we have another, much smaller database where we have stored this information: Name = any number of characters; RegisteredDate = 8 digits with interpretation YYYYMMDD. This is meta-data, and the information on Name and RegisteredDate is stored only once. It tells us something about the data that we are storing, but it is not the customer data itself; it just tells us the shape and meaning of our real data.
Analogous to this we can come up with the concept of a meta-image. This will be data — in the form of an image — that tells us something about an image. An image is an image is an image? No, not in my world. At least not in this tutorial.
Where did this weird idea originate?
Since I'm on a Mac, neither Neat Image nor Noise Ninja were (yet) available, so I thought, why not try myself? I did, and ... uh ... may safely say that at the time of writing there is still some work remaining... :-) (Or as we say in my profession: It’s 90% completed, translated: No end in sight.)
A very good solution can be found here. Highly recommended.
However, along the road I have come to befriend the Difference and Exclude (fascinating!) Blend modes and the Image>Calculations Add and Subtract options. The former work on entire layers, the latter on single channels, and there are also some other significant differences.
In my Quest For Noise Reduction I have used this image as a test case. (I wish I could take photos as beautiful as this, sigh)
Fig 1. Test photo from official Minolta web site, resized
It is taken from Minolta's official page with sample photos from the (deep breath...) Konica Minolta Dimage A2. I guess the image will disappear from the web sooner or later (at the introduction of the Minolta A3?), but at the time of writing (March 2004) it is still there.
For noise reduction development this image is very suitable, having a very difficult mix of noise reduction challenges. Again, at the time of writing, the 8 Mpix prosumer cameras, using the 2/3" Sony CCD, are just entering the market. One of their main and common characteristics is ... noise (we will all have a good laugh at this in 2010). For testing I have used a small sample from the centre area, so let us present our friendly sample image for the evening. We will see more of it.
Fig 2. Our sample image
Above is our sample image, or "real" image, and below is a meta-image of the same image. As you can see, it resembles the original image a lot, and you might be tempted to think it is just a case of blur, curves and levels. It is not.
Fig 3. A mysterious meta-image
How did I do this? The fine details are unfortunately lost in JPEG compression with the 100kB limitation, so I have enclosed an enlarged crop. All of the meta-image looks like this.
Fig 4. Detail
Getting curious about meta-images? ;-) I hope so!
(As a digression, I couldn't find any official Nikon 8700 or Olympus 8080 sample images, the Sony F828 images were downsized and the Canon Pro1 images were cheating – heavy noise reduction artefacts were only too visible! :-) So kudos to Minolta for honest pictures).
One caveat, sorry. Due to the 100kB limitation, the JPEG compression has caused a considerable pixel death toll. The originals that I have worked on have much more detail.
Basic (naïve) noise reduction technique background
This is the place where it all started for my part. There are two traditional ways of attacking noise in Photoshop. Grossly exaggerated, they go like this.
The first:
- Filter>Stylize>Find Edges
- Create a mask from the edges
- Blur the rest
The second:
- Blur everything
- Create a Difference layer by comparing the blurred image to the original
- Create a mask from the differences
- Apply the blurred version through the mask
There is really vastly more to noise reduction than this; how else could there be a market for dedicated noise reduction software? The second approach, though, gives us a golden opportunity to examine meta-images!
Our first meta-image
I make two copies of the original, do a Gaussian Blur with radius 3.0 on the topmost, set its Blend mode to Difference, Opacity=100%.
Fig 5. Creating the difference
Fig 6. Visual comparison blurred version and original
The figure above is just an illustration where I have created a mask to hide parts of the blurred image. We see that parts of the original will be lighter, parts will be darker, and first and foremost, the colour will be slightly different.
I then merge the topmost (blurred, with Mode Difference) layer into the middle copy and change the name appropriately. I also add a Levels adjustment layer for later.
Fig 7. Original and Difference layer
Fig 8. Level histogram of difference
You can see the result in the Levels histogram here. All pixel values are way over to the left, meaning most of the image is black or almost black, which is, um, not very difficult to observe. Black is zero, so that means the difference between the two layers was mostly nil or almost nil.
Since "everything is black" I want to see what it "really looks like", so I reduce the Max input level to 10.
Fig 9. Level Adjustment
Lots of light and colour suddenly pop out, more than you can see in this heavily compressed JPEG. Now we have an "image of an image", or what in my vocabulary I have decided to denote a meta-image.
Fig 10. Difference meta-image (with Gaussian Blur)
(The Gaussian Blur is there to keep the file size under 100kB!)
What we are seeing here is not some variation of the image we started with, although it resembles it quite a lot. In this meta-image, black represents pixels that are identical in the original and the blurred version. Bright represents pixels that differ a lot between them, and various colours represent anything in-between. The patterns and shapes we are seeing are very similar to the original image, but they are not that image.
This meta-image shows us — noise and edges! Anybody who has tried to do Gaussian Blur near an edge will immediately understand why we see the edges clearly in this meta-image.
Why does it show noise? Because if you do a blur on for instance blue sky, the sky becomes uniformly blue; all the "digital dots" disappear. There should not be any dots at all in blue sky, so the difference between the blurred (ideal) version and the digital, imperfect original simply amounts to the noise.
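For readers who like to see the arithmetic, here is a small Python sketch of this idea. It is not Photoshop's actual pipeline: the hand-made pixel values and the simple box blur are stand-ins for a real image and Gaussian Blur, chosen only to show why the difference meta-image lights up at noise and edges.

```python
# Illustrative sketch: a 1-D "scanline" with a noisy flat area and a
# hard edge. Blurring flattens the noise, so the difference against the
# original is mostly just the noise itself -- except near the edge,
# where the blur smears across it and the difference spikes.

def box_blur(pixels, radius=1):
    """Simple box blur; the window is clipped at the image borders."""
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - radius):i + radius + 1]
        out.append(sum(window) / len(window))
    return out

# Flat "sky" around 100 with a little noise, then a hard edge up to 200.
scanline = [100, 103, 97, 101, 99, 102, 98, 200, 200, 200]
blurred = box_blur(scanline, radius=1)

# Difference blend: C = |B - A|, with A the blurred copy, B the original.
meta = [abs(b - a) for a, b in zip(blurred, scanline)]

flat_part = meta[1:6]   # noisy but flat region: small differences (noise)
edge_part = meta[6:9]   # around the edge: large differences
print(flat_part)
print(edge_part)
```

In the flat region the differences stay in single digits (the noise), while around the edge they jump by an order of magnitude, exactly the "noise and edges" the meta-image shows.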
To put it briefly, noise reduction in Photoshop is an endless hunt for the difference between noise and detail, image structure and even areas. And all of this involves meta-images.
What can we do with a meta-image? Well, mainly create masks. For that purpose, meta-images that tend to look like the one we just created are very useful. All the pixels are heaped up at one side (left or right doesn’t matter — there is always Invert) of the histogram, making for a very efficient black/white mask.
But meta-images are not solely confined to noise reduction. In every instance where you want to find out something about an image, something that is not immediately evident to the eye, there may be a use for a meta-image. I will not show an example of this (JPEG destroys it), but try the following.
- Take an image, for instance our sample image.
- Make a layer copy.
- Create a text layer. Write something there.
- Set the opacity of the text layer to 1%.
- Merge it into the layer copy. You cannot see the text.
- Set the Mode to Difference.
- Apply Levels and set Max Input to 2, 3 or 4.
You can now clearly read the "secret" text.
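The same trick can be verified numerically. In this little Python sketch the base pixel value of 100 and the white text layer are just illustrative; the point is that a 1% opacity merge nudges covered pixels by only a level or two, which Difference isolates and Levels then stretches to full white.

```python
# Why the "secret text" trick works: blending white text at 1% opacity
# shifts a covered pixel only slightly (merged = base*0.99 + 255*0.01),
# invisible to the eye. Difference against the untouched copy isolates
# exactly that shift, and a small Levels Max Input stretches it out.

def blend_normal(base, top, opacity):
    """Normal blend of `top` over `base` at the given opacity (0-1)."""
    return round(base * (1 - opacity) + top * opacity)

def levels_max_input(value, max_input):
    """Levels white-point move: map 0..max_input onto 0..255."""
    return min(255, round(value * 255 / max_input))

base = 100                                 # some image pixel
with_text = blend_normal(base, 255, 0.01)  # covered by white text at 1%
without_text = base                        # pixel outside the text

diff_text = abs(with_text - without_text)  # Difference blend mode
diff_plain = abs(base - base)

print(levels_max_input(diff_text, 2))   # text pixel stretched to white
print(levels_max_input(diff_plain, 2))  # background stays black
```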
Tools and techniques for creating meta-images
A meta-image is an "image of an image". Yes I know, I have said it before, but all teaching is about repetition, right? To create a meta-image, we must perform an operation on the image. Usually this consists of comparing the image to itself or to some known static entity. To compare an image to itself we usually must create a different version of the image. In the previous example we created a blurred version and compared it to the original.
The meta-image tools fall into two categories.
Category one contains the tools we use to create the alternate version. They can be just about anything: layer blend modes, filters, every image adjustment you can think of, even masks or solid colour layers, including black, white and 50% grey. Your imagination is the only limit.
Category two contains the comparison tools — Blend modes Difference and Exclusion and Calculations modes Add and Subtract. Although some of these (Difference I suspect) may be well known, I shall examine them all in detail. Exclusion is a different animal from the rest, but extremely interesting, and I will leave it until the end.
For all these four tools I will use the following convention:
A = Top layer or channel
B = Bottom layer or channel
C = Result layer or channel
For the channel operations A, B and C correspond to the channels in the panel, from top to bottom.
Every tool operates on one pixel at a time, comparing a pixel in one channel to the corresponding pixel in another. For the Layer modes this is done separately for each of the three or four channels (R, G, B or C, M, Y, K or L, a, b), in other words three or four times per layer. For the channel tools it is done only once.
We must also remember layer and channel Opacity. (I will disregard Fill Opacity for layers.) In the formulae for every one of the four tools you may substitute A by "A * Opacity", and you will be ok. When Opacity=100% then A = A*Opacity of course.
Blend mode Difference
We have already used this blend mode. It is a very simple one. Mathematically, we can describe it as
C = |B – A|
In computerese, it is
C = abs(B – A)
And in plain (but verbose — hmmm why do I like this language?) English it goes like this:
Take a pixel from A and compare it to B. If A is greater than B, then the result is A – B. Otherwise it is B – A. If they are equal, it doesn’t matter as the result is zero anyhow. (And zero is black, remember?)
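In code, the whole mode is one line per channel. Here is a minimal Python sketch of the formula above, using made-up (R, G, B) pixel values:

```python
# Difference blend: C = abs(B - A), applied independently per channel,
# so the order of the two layers never matters.

def difference_rgb(bottom, top):
    """Difference blend applied channel-by-channel to (R, G, B) pixels."""
    return tuple(abs(b - a) for b, a in zip(bottom, top))

print(difference_rgb((100, 150, 200), (110, 150, 190)))  # -> (10, 0, 10)
# Swapping the layers gives the same result:
print(difference_rgb((110, 150, 190), (100, 150, 200)))  # -> (10, 0, 10)
```

Note the middle channel: identical values give 0, i.e. black, which is why most of our meta-image was black.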
The Calculations panel has all the same options as Layer Blend modes, but instead of Difference and Exclusion it has Add and Subtract.
Now, going back to the histogram we had in Fig 8, we see that the distribution is from 0 and up. There is a slight problem here. Difference mode gives us only just that — "It's not equal, mate". It shows us that there is a difference in colour or brightness between two layers, but it doesn't show us which layer is brighter, greener, bluer, whatever. Often we are not really interested; it is enough to know that there is a difference. But sometimes we would really like to know if that difference is positive or negative.
Fig 11. The Image>Calculations Panel with Mode Add
We have Source 1 and 2 and Result, corresponding to A, B and C. Blending and Opacity apply to A, just as for layers. But flexibility is greater. Source 1 and 2 can be any open document as long as the pixel dimensions are identical. Layer can be any layer, so A and B do not need to be contiguous. Layer can even be "Merged", in other words, the "virtual" layer you see on screen. And Channel can be any channel, including Alpha channels.
Note the Invert check boxes. They allow you to perform the operations on an inverted copy of a channel without having to actually invert it. So A + B(inverted) is A + (255 – B), in other words A – B shifted up by a constant 255. Nice!
You can also check the Mask box to get an expanded panel. The selection of a Mask (that applies to A) is just as flexible as that for A and B.
The result can be either a (permanent) new alpha channel or a (temporary) selection. Remember that alpha channels are stored selections. You may also choose to direct the result to a new document. The new document will be in Multichannel mode with one channel.
Fig 12. The Image>Calculations Panel with Mask checked
In many ways, Add and Subtract are easier to understand than many of the other Blend modes. They operate on Channels only, not Layers. That means that instead of relating to three or even four intertwined colour channels (which may often act in surprising ways) we can concentrate on only one greyscale channel at a time. This greyscale channel may of course be one of the RGB, CMYK or Lab channels, but it is still only one channel with values going from 0 to 255 (in 8-bit mode).
So Add takes the greyscale value of each pixel in one channel and adds it to the corresponding pixel in another channel, producing a third result channel. The result of course ends up in the 0-510 range, which is double the valid range. Ultraviolet pixels?
Subtract takes the second channel and subtracts the values of the first channel, producing a third channel. Values are in the -255 to +255 range. Infrared pixels? We will get to it shortly.
Why does it subtract upside-down?
I was slightly confused by Adobe's "backward thinking" with Subtract until I saw the logic. Because logical it is of course.
For Add, we have
C = A + B
of course. But then for Subtract we have
C = B - A
and not C = A - B
Why on Earth...? But this is of course because this is just another Blend mode, and you always start with the Bottom layer and apply the Top layer to that. Here we are talking channels, but the same principle applies.
So to sum it up, for Add and Subtract
C = B + A
C = B – A
And I guess the plain English explanation is superfluous in this case?
Reminder: A = Source 1 * Opacity
What about the values outside range?
As mentioned above, we may easily end up with values below 0 or above 255. With the traditional Layer Blend modes these values get truncated to 0 and 255, respectively. But not necessarily so with Add/Subtract! Adobe has provided a little twist for this situation.
The Offset and Scale boxes are specific to Add/Subtract. Try any other mode, and they disappear.
The Offset value can be in the -255 to +255 range, and the Scale value must be between 1 and 2, inclusive. The easiest way to explain it is like this:
C = B + A + Offset
C = B - A + Offset
C = (B + A) / Scale
C = (B - A) / Scale
or finally even
C = ((B + A) / Scale) + Offset
C = ((B - A) / Scale) + Offset
Again in plain English (remember that Opacity applies to A): we first add or subtract A and B. The next thing we do is divide the result so far by Scale, which is between 1 and 2, inclusive. If Scale=1, we can skip it of course. Finally we add the Offset, which may be negative. So if we have a highlight A pixel of 210, a midrange B pixel of 97, a Scale of 2 (we want the average) and an Offset of 150, we end up with
C = ((97 – 210) / 2) + 150
C = (-113/2) + 150
C = -56.5 + 150
C = 93.5
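The whole Calculations arithmetic, including the final clamp to the 0-255 channel range, can be sketched in a few lines of Python. (The exact rounding of half-values like 93.5 is an assumption of this sketch, not something the article verifies against Photoshop.)

```python
# A sketch of the Calculations maths: C = ((B +/- A*Opacity) / Scale) + Offset,
# with the stored result clamped to the valid 0-255 channel range.

def calc_subtract(b, a, scale=1, offset=0, opacity=1.0):
    """Calculations Subtract: ((B - A*opacity) / scale) + offset, clamped."""
    c = (b - a * opacity) / scale + offset
    return max(0, min(255, round(c)))

def calc_add(b, a, scale=1, offset=0, opacity=1.0):
    """Calculations Add: ((B + A*opacity) / scale) + offset, clamped."""
    c = (b + a * opacity) / scale + offset
    return max(0, min(255, round(c)))

# The worked example above: A = 210, B = 97, Scale = 2, Offset = 150.
print(calc_subtract(97, 210, scale=2, offset=150))  # 93.5, rounded here to 94
# Without the Offset, the negative result is truncated to black:
print(calc_subtract(97, 210))  # -> 0
# Add with Scale = 2 is a plain average of the two channels:
print(calc_add(97, 210, scale=2))
```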
Still confused? An example will hopefully make this much clearer.
Our first meta-image revisited
With some experience (for instance from our Difference example) and a little common sense we will know that the differences will not be very large; very few will be over 20. So what I want is a real "map" of the differences. If there are peaks I want to see them, but I want to see the valleys and gorges too. So I create a Calculation where
A = Blurred image ("flat, levelled")
B = Original image ("mountainous, hilly")
C = Map of "altitude" differences
Since some differences will be negative, where there are valleys in the original, I will need to "lift" the map so it does not "scratch the bottom". The end calculation is really quite obvious:
C = B - A + 128
128 ensures that the map is centred on the greyscale.
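The "altitude map" idea fits in a few lines of Python. The pixel values below are hypothetical; the point is that bright spots in the original land above 128 and dark spots below it, instead of everything folding to the bright side as with Difference.

```python
# Centred difference map: C = B - A + 128, per pixel, clamped to 0-255.
# B is the original ("mountainous") channel, A the blurred ("flat") one.

def altitude_map(original, blurred):
    """Per-pixel B - A + 128, clamped to the 0-255 channel range."""
    return [max(0, min(255, b - a + 128)) for b, a in zip(original, blurred)]

original = [100, 103, 97, 101, 200]   # noisy pixels plus a bright spot
blurred  = [100, 100, 100, 100, 120]  # hypothetical blurred values
print(altitude_map(original, blurred))  # -> [128, 131, 125, 129, 208]
```

Identical pixels map to mid-grey (128), pixels darker than the blur fall below it, and brighter ones rise above it, which is exactly the two-sided edge we will see in Fig 16.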
But we cannot compare layers directly, like Difference did. What to compare? Two answers:
The Grey channel is the Luminosity channel, and the one you load when Command/Control-clicking the RGB Channel. Selecting that one, we can compare the brightness of the two layers in one operation.
The other answer is to perform the calculation three times, for Red, Green and Blue separately. This will create three new alpha channels, and we can then copy these new channels into the R/G/B channels of an existing layer, using Copy-Paste from the individual alpha channels.
Fig 13. The Calculations panel for Red
I repeat this calculation three times, for Red, Green and Blue. Each operation creates a new channel, called Alpha 1, which I promptly rename.
Fig 14. Result of Calculations in Channel palette
In order to see better I run a Levels adjustment, Max input = 20 on the channel.
Fig 15. Levels Histogram
Note that this histogram is more or less symmetrical, as opposed to the Difference histogram in Fig 8, which was only "half".
Fig 16. Detail from meta-image produced by Calculations
Compared to Fig 10 there are now two distinct sides of an edge, one darker and one lighter, instead of both sides identical. The darker side of an edge represents pixels that are darker in the original than in the blurred version and vice versa. If you look closely, you will recognize the shape of Unsharp Mask!
In theory, everything that is above 128 in the new meta-image should also appear in the old one. But the new meta-image will also have pixels that are darker than 128, representing spots that are darker in the original than in the blurred version. The first meta-image shows these darker spots too, but as if they were lighter, since Difference does not care which layer is darker or lighter.
But — what happens if we do not apply that offset in Subtract? The answer is that every negative result in Calculations is truncated to zero, and only the positive ones remain.
So the obvious choice now is to (gasp!) create a meta-meta-image. My head has started spinning long ago, so I hope nobody is disappointed if I just show the result instead of explaining every step? Here it is.
Fig 17. Calculations with zero Offset
What is the difference between the first and the second meta-images? The correct answer is: Those pixels that became lighter when blurred. Here is a small detail. I have used a mask to alternately show the Difference and the Calculations meta-image.
Fig 18. Difference compared to Calculations meta-image
Aside from the difference in edge presentation, we will also note that the areas from the Calculations meta-image have about half the spots of the Difference meta-image.
Grand Finale: Blend Mode Exclusion explained!
So far, I have never been able to find any useful explanation of this mode at all, neither on the Internet nor in any books, including Adobe’s User Guide. Well, boys and girls, here it is. With the earlier caveat that A may represent "A * Opacity", the formula is
C = 127.5 – ((A – 127.5) * (B – 127.5) / 127.5)
I do not feel tempted to explain this one in plain English. But you get this function if you simultaneously stretch and warp a square elastic sheet of rubber, look below.
I was unable to figure out this formula for many days, but then I used Doug’s Stepwedge, plotted the values in Excel and created a graph of the values plotted. And what was this graph, other than a meta-image of the function?
Fig 19. Graph of Exclusion Blend mode
Here is the graph of the function behind Exclusion. Despite post-traumatic stress syndrome from two years of math in university (I still dream about exams and wake up screaming), I can still appreciate a beautiful function when I see one!
Exclusion is a blend mode "the layer can apply to itself". Ok then, in plain English it goes roughly like this: If A and B are very different, Exclude produces white (per channel). If A and B are both very dark or very bright, Exclude produces black, and if either A or B (or both) is close to grey (128), Exclude produces grey, no matter what the other layer is.
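Exclusion is also commonly documented in the equivalent product form C = A + B – 2·A·B/255. A quick Python check confirms the behaviour just described; the specific pixel values are of course just examples.

```python
# Exclusion blend, in the commonly documented form A + B - 2*A*B/255.
# It is symmetric in A and B, so layer order does not matter.

def exclusion(b, a):
    """Exclusion blend for one channel (values 0-255)."""
    return round(a + b - 2 * a * b / 255)

print(exclusion(0, 255))    # very different  -> 255 (white)
print(exclusion(0, 0))      # both dark       -> 0 (black)
print(exclusion(255, 255))  # both bright     -> 0 (black)
print(exclusion(128, 0), exclusion(128, 255))  # one mid-grey -> ~128 (grey)
```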
In practice, if you apply Exclude to "itself", you will get a meta-image where highlights and shadows both disappear into the shadows, but midtones are more or less preserved. In other words, Exclude will produce a "raw mask" for masking midtones in or out.
But also note that by applying Curves or adjusting Gamma (the middle slider) in Levels, you can make Exclude mask other tones than midtones.
The Mystery meta-image explained
I made a copy of the original layer, did Auto Color and Auto Levels, and then Gaussian Blur with Radius 3.0 on both. The meta-image is the Difference image with levels corrected for visibility.
What is this? I don’t know. As far as I can see, we see a graphical representation of the difference between two complex mathematical formulae (Gaussian Blur) applied to two nearly identical sets of data. It contains or conveys no meaning that I am aware of, just pure mathematics.
But on the other hand, wherever there is a pattern, isn't there also information?
Beautiful, isn't it?
Now please come up with some creative uses of Exclude and Calculations! Can they perhaps be used to dig out "invisible" detail from faded images? Can they be used for better B&W from colour? Can they be used for "difficult" selections? Can they be used for something I cannot imagine at all? I really hope for the latter!