
Thursday, 11 March 2010



I'm so happy to see you call attention to this. For a while now (years) I've been playing with "pushing" my digital cameras by deliberately underexposing by two and sometimes three stops and then correcting in post-processing. I have two cameras that can give beautiful results this way; one is an Olympus C7070 and the other a Pentax *istDS. Almost every other camera I have tried this with fails in the way you describe. The banding makes for ugly results. I would love to see manufacturers pay attention to this. It has become one of my criteria for purchasing a camera, but one that is difficult to discover before making a purchase.

So the question arises: rather than floundering around in one's ignorance, how can one create the conditions in the workflow that optimize the additional pixel density, e.g., between say 6, 12, and 24 MP? (Even Mike, our "Mr. 10 or 12 is enough," has been witnessed recently lusting over the newly announced 40 MP Pentax 645D.)

I've made this comment before and, in doing so, asked the question: take a standard head-shot portrait situation; it seems intuitively obvious and all (or most things) being equal, for every square inch of facial real estate, there will be more data captured by a 24 mega-pixel camera than a 6 mega-pixel camera -- in my mind a situation analogous to the difference between 35mm and medium format film. How does one think about maximizing on the additional digital data?

It seems to me that relegating the discussion to one dimension -- print size -- is begging a few questions.

Yes, banding! My Sony a100 had horrible banding and was worthless for this kind of shot. Kind of like a camera that scratches film.

Ah, yes, the joys of trying to find something with a test. My experience suggests that people (even some of those with years at expensive schools) design tests that are more likely to give the results they were looking for. The most useful tests seem to be designed to look for the opposite of a hypothesis and prove by elimination.

Hello Ctein,

Thanks for your input on camera testing. Highly interesting.

Since you are referring to DXOMark in your article, it might be worthwhile to see what they have to say about Low-Light ISO.

Your test indicates that ISOs of 800 and 1600 were used with the E-P1 camera.

But DXOMark lists the Low-Light ISO of the E-P1 at 536. Therefore, using the camera at a higher ISO setting would put the IQ outside the quality range as determined by DXOMark. Or am I wrong in assuming this?

DXOMark states that "The Low-Light ISO metric indicates the highest ISO sensitivity to which your camera can be set while maintaining a high quality, low-noise image..."

"A good (equipment) test is really a well-designed scientific experiment that measures the variables on which you want to collect high-quality data." --Ctein

Really spot on! A good test needs a lot of expertise going into it.

It's always a pleasure to read Ctein's deep, sharp comments on photography equipment and techniques. Many thanks!

This is the often-overlooked problem of the 5DII at low ISO. Try to lift shadows at all at low ISO, and the 5DII shows significant banding.

According to this, the failed test still told me something interesting. That is that when one stops down to f/11, there is no significant advantage to using a 12 MP camera vs. 6MP.

That's a result in itself...


I'd take exception to one comment, that you can't count on the camera for quality shots above ISO 800. It's very likely you can get quality shots at ISO 1600 in different situations that make quite nice prints, just not every situation. I use the Olympus 620 and it has similar noise characteristics to the E-P1 (perhaps a touch noisier). I noticed the noise in gray textureless expanses like fog right away, even at low ISO. But I was also able to take ISO 1600 shots like this without serious PP involved (only standard Lightroom stuff, lamp light).

FWIW, you probably already tested it, but I've heard that Dfine is among the few utilities that deals decently with banding noise.

Concerning the test mentioned at the article's beginning, I am curious what aperture should have been used?
Ctein says "He'd stopped down the lens all the way to ƒ/11."
The result was, if I understand correctly, a blur circle that was too large.
However, the statement made seems to imply that he should have used a wider aperture.
From my understanding this would have resulted in yet a larger blur circle.
Aside from this slightly puzzling point I agree with the article.
All too often people trust tests without looking at what their lens or camera really produces.

Actually, it seems to me that the first test you mentioned proved quite a lot. If the tester often shoots at f/11, s/he's learned that there is no appreciable difference between 6 and 12 megs in real-life shooting.

This isn't a trivial remark; too many tests and testers assume ideal conditions that have no bearing on actual photography.

Dear Ed,

In essence, you're asking how to make the sharpest photographs.

Get a copy of Image Clarity by John Williams, read and thoroughly grok it.

Your question, though, is an example of how NOT to run an equipment test. A good experiment, of necessity, does limit the discussion to a single dimension, hopefully a single variable.

It may not be what you want to know... but it is how you learn things. One step at a time, lots and lots of steps to the next level.

pax / Ctein

"Tests. They don't always tell you what you need to know."

Need to know or want to know? I can see a lot of people now being totally unhappy with their E-P1 even though they probably will never shoot the moon.

Thanks for another interesting dose of thought food. With reference to the reader's tests, I wouldn't necessarily call choosing f/11 an "error". To use a lens, one must select an aperture, and whilst arguments for and against various choices can be made, the final selection must, by definition, be arbitrary. Photographers use f/11 all the time, so the information is useful within its niche. The error would come if the results at f/11 were interpreted as representative of the outcomes at all apertures. Indeed, as f/11 is near the diffraction limit, one would expect any difference to be subtle at best: I think the results are useful as far as they go. I'd like to see results for f/8 and f/5.6.

In the absence of any further methodological criticism (which I'm sure you would not have been coy about bringing to our attention!) I assume that the results of the reader's test stand - that, at f/11, there is no noticeable advantage of 12MP over 6MP. Which begs the question (as long as we assume there is some advantage of 12MP over 6MP at some apertures), which is the ideal aperture to use for a specified sensor size? Assuming APS-C, I'm guesstimating there might be a small discernible difference at f/8, but I would be interested to see this confirmed by experiment.

Dear John and Kevin,

In this case the problem's banding, but many cameras will manifest large scale color 'blotchiness' in darker areas when you crank up the ISO to the limit.

Ultra-low-frequency visual noise is a problem that experiments don't call out. Testing should take into account that large-scale noise, whether it's patterned or not, is a lot more distracting and harder to filter out than fine noise.

pax / Ctein

I used to attend a local camera club where we presented prints for 'critique'. The first thing that one of the members of the club would do when presented with a print would be to pull out a magnifying glass and examine the 'grain'. I mean jumping directly to the grain, without actually looking at the image being shown. (We hated that!)

Quite often the favorite images at the club were technically imperfect, but great images captured as best as possible with the available equipment. One of my all-time 'winners' was a set taken with a pinhole lens of places where my father had recently been shortly before his death... sharpness would have outright ruined the images, since they demanded the softness of 'times past'.

Another time at the club there was one of those Gigapixel images displayed, where you could zoom in and see lots of detail in a very boring photo.

Anyway... it's OK to run tests, but if that is the only criterion used when evaluating a photo... well...

(Regarding banding... my car can actually do 150 MPH... that doesn't mean that it would be wise to drive it at that speed... same thing for higher ISOs... it's nice for the setting to be there in an emergency, but it doesn't mean you should actually use it.)

This is why I am pleasantly surprised with my 7d. Although it can't compete with my 1dmkII for creamy tonality, color fidelity, or quality per pixel, the 7d has much nicer noise when tortured. The noise level is "high" (I say "high" because it's miraculous compared to 800 speed film), but when run through the correct RAW workflow it's really fine grained and very analog. No pattern at all. Makes lovely B&W conversions in LR 3 Beta. The mkII doesn't really have a pattern either, it just gets clumpy.
my .02

I'm glad somebody with some pull has weighed in on this. It's a very neglected topic, probably because there's no easy way to quantify the character of noise the way there is to quantify the raw amount of noise (correct me if I'm wrong). Pattern noise such as banding is more objectionable and noticeable than random noise. If reviewers have no way of objectively quantifying the character of noise, then please at least try to subjectively describe it. Banding kills a shot dead like no amount of random noise can.

Ctein raises another point. If you don't know what f-stop to use, you are blowing 12 megapixels' worth of usable data. Said another way: the wrong f-stop induces distortion that turns your 24 MP camera into a point-and-shoot.
Important, I'd say.

But to Ed Nixon's point, I have observed a smoothness of tone in a Sony a-900 that I don't have in my D700. Being a black and white low light shooter, I don't care. The D700 is better for me.

"According to this, the failed test still told me something interesting. That is that when one stops down to f/11, there is no significant advantage to using a 12 MP camera vs. 6MP."

Björn (and MartSharm),
I might gently point out that the test didn't tell you that, Ctein told you that--because he has the expertise to understand why the reader's test gave unexpected results. It wasn't something the reader learned from his own test.


Ah, high ISO banding, the red-headed stepchild of noise. Once you know when it shows up, you can choose to only shoot well-lit scenes above the threshold ISO and limit dark scene shots to ISOs below the threshold.

It's a nasty side-effect of high ISO shooting that people have been complaining about for years, yet cameras are still released that display it. But there are cameras that don't, so somebody out there knows how to fix it. We need to track him down and make him talk! :-D

I thought a moonbow was a lunar rainbow, i.e., a rainbow made apparent by light from the sun reflected off the moon (instead of directly from the sun), and these shots are of the lunar corona.

Dear Andre (and John),

You are reading the DxOMark information correctly, but giving it more weight than you should. DxOMark uses an insanely high standard of quality for noise; definitely better than you would get out of medium format film, possibly fully as good as you would get out of 4 x 5 film. That's not a complaint; I'm something of a perfectionist myself. It's just that one needs to keep that in perspective. In fact, I have a photograph in my portfolio of a trumpet on the dinner table (which Mike and John Camp have seen, so they could confirm) that looks essentially noise free, at ISO 640.

In any case, what you really want to be looking at is the curves that plot noise versus ISO for both the Fuji and Olympus cameras. If you do, you will see that the Fuji pretty consistently tracks 1.5-2 stops behind the Olympus. That's what I was getting at when I said that I was really gaining more like a half stop advantage. I could get portfolio-quality work out of the Fuji at ISO 400. I definitely can't get it out of the Olympus, most of the time, at ISO 1600. And 800 is on the ragged edge.

Always by my standards, of course. Your mileage is guaranteed to differ.

In that same vein, John, that is what "can't count on" means, as I understand it. It doesn't mean that the camera won't do it, it means that you can't depend upon it doing it. There will be circumstances under which I will try to get away with ISO 1600 if I need to. But I'm going to be very wary of it.


Dear Wayne,

The wider the aperture, the smaller the effects of diffraction, so the blur circle gets smaller. Down to a point. Wide-open, the lens will usually be limited by residual lens aberrations, not diffraction effects, so the blur circle will be fairly large. As you stop down, and the aberrations diminish, the blur circle gets smaller. At some optimum aperture, it will be a minimum size. For a good prime lens, that's typically in the range of f/5.6. For my Pentax-M f/1.7, it was f/4.8. Regardless, diffraction and residual errors in focus and lens correction guaranteed that the blur circle at f/11 was going to be larger than the pixel size in either of the reader's cameras.
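[Ed. note: the blur-circle argument above is easy to sanity-check numerically. The sketch below uses the common Airy-disk approximation (diameter roughly 2.44 x wavelength x f-number) and assumed APS-C sensor dimensions and megapixel counts; the specific figures are illustrative, not taken from the reader's actual cameras.]

```python
# Rough numbers only: Airy-disk diameter vs. pixel pitch.
# Sensor dimensions and MP counts below are assumed APS-C values.

def airy_disk_um(f_number, wavelength_nm=550):
    """Approximate Airy disk diameter in microns: 2.44 * lambda * N."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

def pixel_pitch_um(sensor_w_mm, sensor_h_mm, megapixels):
    """Approximate pixel pitch in microns for a given sensor and MP count."""
    area_um2 = (sensor_w_mm * 1000.0) * (sensor_h_mm * 1000.0)
    return (area_um2 / (megapixels * 1e6)) ** 0.5

blur_f11 = airy_disk_um(11)                  # ~14.8 um at f/11
pitch_6mp = pixel_pitch_um(23.5, 15.7, 6)    # ~7.8 um
pitch_12mp = pixel_pitch_um(23.5, 15.7, 12)  # ~5.5 um
```

Under those assumptions, the diffraction blur at f/11 alone is roughly double the pixel pitch of either camera, so the 6 MP and 12 MP sensors are both oversampling the same blur circle.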

~ pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 

Ctein, you make a great point. The GH1 is very much affected by banding, and of course DxOmark's rating of the GH1 doesn't convey that at all.

I will say, however, that Nik's Dfine does an amazing job of removing banding. Nothing I have seen or tried comes close. If you haven't used Dfine's debanding feature, download the trial and give it a try!

No relationship to Nik, just a happy customer.

Dear Steven,

The moonbow's essentially invisible in these tortured-to-death JPEGs. It's there in the original. I turned one of the ISO 800 frames into a gorgeous 11x14 last night.

Took a lot of work, for reasons that might make another interesting column (he said teasingly)....

pax / Ctein

Well, I guess if the original tester mainly shoots at f/11 and cannot tell any difference between the two cameras, then his results are just fine, for him.

It's like saying the results of a tire test I did on my car, driving in conditions and at speeds I normally do, are invalid because I didn't put the tires on an Indy racer and try to go 200 mph.

I have a 6 mp Nikon D70s while my wife has a Canon Rebel something or other that has twice the MPs. We both very rarely print larger than 11x14. For all practical purposes there is no difference in the output quality. We are using an Epson 3800 printer right now but in the past have sent our files to a pro lab for output. Again no difference.

In my experience with "banding" noise, my examples always look more like plaid than stripes. Obviously higher ISOs have more noise, but with some cameras, ISOs that are a power of two times the quietest ISO are quieter than the next lower ISO. N.B.: the quietest, lowest, and base ISOs are not necessarily the same thing.
I think this has something to do with how the analog amp is designed, and in the digital realm bit shifting is clean and easy. I'm also curious whether the computational load the camera is under at the time of exposure and sensor readout has anything to do with pattern noise.

As for the patterns, be they banding or plaid, it seems like an autocorrelation algorithm would be useful. http://en.wikipedia.org/wiki/Autocorrelation
(Yes, I know that autocorrelation and wavelet functions are in wide use, but just as obviously, if I can see a regular pattern there is plenty of room for improvement.)
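[Ed. note: a row-wise autocorrelation really is a workable banding detector. Here's a minimal sketch in Python with NumPy, using synthetic data; the image sizes, band amplitude, and period are all made up for illustration.]

```python
import numpy as np

rng = np.random.default_rng(0)

def row_band_score(img, lag):
    """Autocorrelation of the row means at a given lag: near 1 for
    periodic horizontal banding, near 0 for purely random noise."""
    rows = img.mean(axis=1)
    rows = rows - rows.mean()
    denom = float(np.dot(rows, rows))
    if denom == 0.0:
        return 0.0
    return float(np.dot(rows[:-lag], rows[lag:])) / denom

h, w, period = 256, 256, 16
random_noise = rng.normal(0.0, 1.0, (h, w))
bands = np.sin(2 * np.pi * np.arange(h) / period)[:, None] * np.ones((1, w))
banded_noise = random_noise + 2.0 * bands

score_random = row_band_score(random_noise, period)  # near 0
score_banded = row_band_score(banded_noise, period)  # near 1
```

A real test rig would scan over a range of lags (or take the FFT of the row means) rather than assuming the period is known, but the principle is the same.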

One thing that makes pattern noise really jump out at you is a quick-and-dirty down-scaling algorithm. Random noise will be unchanged, but pattern noise will often get much worse. One good example is how the half-size preview in Lightroom is so much noisier than the full-size preview.
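[Ed. note: this observation is consistent with plain aliasing. A quick-and-dirty downscale that just drops rows can fold fine, near-Nyquist banding down into a coarse, highly visible band, while an averaging downscale largely filters it out. A one-dimensional sketch with a synthetic sine "band" pattern; the frequency and sizes are chosen for illustration:]

```python
import numpy as np

# Fine, near-Nyquist "banding": a sine across rows with a period of
# about 2.2 pixels.
h = 400
fine_bands = np.sin(2 * np.pi * 0.45 * np.arange(h))

# Quick-and-dirty downscale: keep every other row, no filtering.
decimated = fine_bands[::2]

# Slightly more careful downscale: average row pairs first.
averaged = fine_bands.reshape(-1, 2).mean(axis=1)

def strongest_band(x):
    """Amplitude of the strongest non-DC frequency component."""
    spectrum = np.abs(np.fft.rfft(x)) / (len(x) / 2.0)
    return float(spectrum[1:].max())

amp_decimated = strongest_band(decimated)  # aliased band, nearly full amplitude
amp_averaged = strongest_band(averaged)    # mostly filtered out
```

The dropped-row version aliases the 2.2-pixel pattern into a broad 10-pixel band at almost full strength; averaging first knocks it down to a small fraction of that.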

Ctein said: "No one's noise tests distinguish between visually-random noise and noise that appears as a regular pattern."

I'm not a scientist (even of the armchair variety) but if it appears as a regular pattern, information science would say that isn't noise, no? I believe it would be quantization error or something else. I'm only bringing this up since the topic itself is trying to (ahem) illuminate a not-well-known subject. And in the case of the banding, I'd look to the A/D conversion stage of the electronics in the camera.


OK, so how do I figure out, without spending lots of money and time, which camera (4/3, APS-C, full frame, and how many megapixels) is best to give me really nice portraits? I know that it is a very complicated discussion. I must say that I love the images shown from the Rollei TLR, but can I do stuff like that with digital, assuming I learn the skills? Unfortunately my resources are limited, and I am hoping to learn to do some film work as well, but that is another concern.

The idea of "non-random noise" struck me immediately as an oxymoron; I think of noise as randomness inserted into a signal; if not truly random, then at least stochastic (e.g., pink noise in audio).

So maybe it makes sense for the noise measurements to ignore the banding; and if so, should there be another measurement process to identify artifacts that will reduce signal quality? Just putting this out to provoke a response, which will likely set me straight.

Megapixels aren't the metric of interest; what you really want is the pixel pitch on the sensor. A little poking around and messing about on the back of an envelope suggests that pixel pitches as small as 1 micron might be useful (f/1.4 at infinity with violet light seems to resolve in that general vicinity -- but as Ctein says, diffraction probably isn't the limiter at this point), but no smaller than that. The 5-10 micron pitches we're seeing these days feel about right.

For a given sensor size, smaller pixel pitch translates to more megapixels. The 645D, for instance, has a large sensor, so the 40Mpixels isn't actually that aggressive of a pixel pitch (6 microns).
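[Ed. note: the 6-micron figure checks out on the back of an envelope, assuming the 645D's roughly 44 x 33 mm sensor at 40 MP:]

```python
# Assumed sensor size for the Pentax 645D: roughly 44 x 33 mm at 40 MP.
sensor_w_mm, sensor_h_mm, megapixels = 44.0, 33.0, 40.0

pixels_per_mm = (megapixels * 1e6 / (sensor_w_mm * sensor_h_mm)) ** 0.5
pitch_um = 1000.0 / pixels_per_mm  # ~6.0 microns, matching the comment
```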

Finally, I will say that there may be scope for using a modest multiple of sensor pixels per "resolvable dot," if you will. I haven't done the math in the 2D case, but there are some interesting things that happen in the audio domain when you oversample like mad -- basically you can use a lousier analog front end (anti-aliasing filters, etc., in the case of a digital camera) and replace it with software. Replace it you must, however! You're not getting 100 megapixels of usable data -- you're getting 12 megapixels (or whatever) out at higher quality. I don't think anyone's going this route, and I'm not sure it's technologically feasible -- there may be issues with sensor design that make it a foregone conclusion that you want big pixels on the sensor.

Dear Bjorn,

"According to this, the failed test still told me something interesting. That is that when one stops down to f/11, there is no significant advantage to using a 12 MP camera vs. 6MP."

Not even that. The number of pixels has nothing to do with the optimum aperture to be running such tests at. That's a function of the pixel pitch and how the camera processes the information the sensor collects. You can find six megapixel cameras with pixel pitches anywhere from 2 µ to 8 µ and possibly larger. And every camera processes their signals in their own inimitable style.

All the tester learned was that for his particular cameras, he wasn't going to see much difference in sharpness between them if he worked at f/11. That's not even a slightly surprising conclusion and it certainly doesn't justify the trouble he went through.

~ pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 


This is like the difference between accuracy and precision; most people don't understand the difference and use the terms interchangeably.

This ignorance can cause problems.


Dear Patrick and Sporobolus,

Oh yes, there can be non-random noise. In fact truly random noise is very difficult to come by; it's a big area of practical research, as a source of number sequences for simulations and for security systems.

There's also "noise" as opposed to "signal." Noise being considered anything that isn't the signal.

In this case, though, conventionally random electronic noise can lead to banding. For example, imagine a sensor that is being read out row by row and fed to a voltage-controlled amplifier that is suffering from a low level of relatively low-frequency noise in its gain voltage. That will be mainly visible as variations in the amplitude from row to row -- IOW, banding. The lower the frequency of the noise, the wider the bands.

I'm not saying that is what's happening in this camera; I don't know what the electronic chain looks like. I'm presenting it as an example of a way in which apparently uncorrelated electronic noise can turn into a highly correlated visual pattern.
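[Ed. note: the hypothetical amplifier above can be simulated in a few lines. Feed a flat scene through a per-row gain carrying low-frequency noise and the row means drift into visible bands. This illustrates the mechanism described, not any real camera; all values are made up.]

```python
import numpy as np

rng = np.random.default_rng(1)
h, w = 200, 200

# A flat mid-gray scene with ordinary random sensor noise.
scene = 100.0 + rng.normal(0.0, 2.0, (h, w))

# Low-frequency noise in the amplifier's gain: smooth white noise so
# that neighbouring rows share nearly the same gain error.
gain_noise = np.convolve(rng.normal(0.0, 1.0, h), np.ones(25) / 25.0, mode="same")
gains = 1.0 + 0.05 * gain_noise

banded = scene * gains[:, None]  # each row amplified by its own gain

# Row means of the banded frame drift far more than random noise alone
# allows; that drift is what the eye reads as bands.
spread_clean = float(scene.mean(axis=1).std())
spread_banded = float(banded.mean(axis=1).std())
```

Widening the smoothing window lowers the frequency of the gain noise and, exactly as the comment says, widens the bands.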

~ pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 

Dear Nicolas and Amin,

Dfine is pretty, errmmmm, fine, but it can't really cope with banding on this scale, where the bands run 50-150 pixels wide. It suppresses them and smoothes them out, making them less objectionable. But it doesn't move them from the category of "objectionable" to "non-objectionable." Unfortunately, I don't know of any tools that can deal with banding on that scale.


Dear Eric,

That was not the assertion the tester was making, and I very much doubt he does all his photography at f/11 or smaller apertures, or that he understood how his small aperture was degrading sharpness. It just isn't a defensible testing protocol.


Dear Andrew,

Already been there, done that! [ smile ]

You'll want to read this column:

"Why 80 Megapixels Just Won't Be Enough..."

~ pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 

Apologies for being pedantic.




My 'Out of the Blue' by John Naylor says the same. A Moonbow is a Rainbow seen at night.

Still it's only words and everyone knows pictures speak louder!

To quote a sage (source forgotten): "If your experiment works, you must be using the wrong equipment."

"All the tester learned was that for his particular cameras, he wasn't going to see much difference in sharpness between them if he worked at f/11. That's not even a slightly surprising conclusion and it certainly doesn't justify the trouble he went through."

....Well, if the tester sold off his 12 MP camera for more lenses to use on the 6 MP, then I would call his testing a 'job well done'!

A question occurs to me regarding banding and randomness. Is banding truly systematic error? I.e., will it always appear in the same place on the image and at the same frequency?
Two reasons to ask. First, with random noise I can stack images to reduce the overall noise level (with static subjects at least), and second, if it's predictable in that way, surely it shouldn't be too difficult to process out.
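[Ed. note: on the first point, stacking works because independent random noise averages down by the square root of the number of frames, while anything systematic survives the average. A minimal sketch with synthetic frames; all values are illustrative:]

```python
import numpy as np

rng = np.random.default_rng(2)

# A static scene photographed 8 times with independent random noise.
scene = np.full((64, 64), 50.0)
frames = [scene + rng.normal(0.0, 8.0, scene.shape) for _ in range(8)]

stacked = np.mean(frames, axis=0)

noise_single = float((frames[0] - scene).std())   # about 8
noise_stacked = float((stacked - scene).std())    # about 8 / sqrt(8)
```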


Thank you for the excellent write up, and the great discussion it has spawned (I credit you as its inspiration and a regular contributor). I am consistently enthralled when you discuss a technical aspect of photography in your column here.

It's a mundane request, but perhaps you're willing anyway: would you perhaps describe a very loose outline of a good process that a hobbyist might use to find the sharpest aperture for any given lens? I see you mentioning this kind of knowledge in passing - "For a good prime lens, that's typically in the range of f/5.6. For my Pentax-M f/1.7, it was f/4.8." - and it's inspired me to find this out for my own lenses.

Anyway, I suspect I could muddle through some test shots at all available apertures for a given lens, of the same subject, in the same setting and lighting conditions and then pixel-peep. Would that be sufficient? Should I try to have the subject at the middle of the area-of-focus? Should I shoot a yardstick laid out horizontally?

Any advice would be greatly appreciated, if it's not too much of an imposition. If it is then please do disregard.

- A Fan Of Your Precision,

I believe this is the test that I did.

Since Ctein was able to explain why the test was flawed, that's great. I may go to the trouble to test again, at f/4, f/5.6, and f/8, all of which appear to be under the diffraction limit.

But if you're dismissing me as a measurement geek, I'm really not. This is the first test of anything that I've ever done. (Hey, I don't even have any test charts!)

I only wanted to find out if 6MP or 12MP matters when printing to 13x19.

Practically speaking.

Not photographing test charts in a lab. (Not that there's anything wrong with any of that, if that's your thing. It's just not mine.)

I read a lot of debate about the NEED for more megapixels; one of the main reasons given is that higher megapixel counts on a sensor enable making much larger prints, or higher quality prints, etc.

But I wondered if, even though differences might be visible when pixel-peeping test charts, is it visible in prints?

Why f/11 then?

Well, part of it probably has to do with the fact that I'm not an optical physicist or engineer. Or scientist. Or professional photographer or equipment tester or photography writer, for that matter. And I'm not making any claim to any of those titles.

The other reason is that, practically speaking, sometimes f/11 is a useful aperture. I often find myself trying to find a balance between DOF and minimizing diffraction (admittedly, with as little as I understand of it). Sometimes I use f/11.

Why didn't I try other apertures? Well, ADHD I suppose. Mainly because I wanted to be photographing instead of testing, and I was out on a nice afternoon after some really cold weather. And testing seemed like such a waste of a nice afternoon.

I'm still not clear how it didn't tell me that, under the given conditions, there's not a visible difference in the prints. I mean, looking at the prints, you can't see any difference. The explanation tells me WHY there's no difference, but it doesn't make a difference appear between the two prints.

What I'm hearing is that there may be a difference under different conditions.

Or not.

I'm not sure how disproving this result, proves the opposite, that more megapixels in the same format sensor IS better.

Practically speaking.

And it wasn't THAT much trouble. It meant carrying an extra, small camera body and tiny prime to somewhere I was planning to go anyway. And a few minutes of swapping the lens between bodies, and the bodies on the tripod. Which is partly why I'm surprised that no one has performed a test like this and written up the results. David Pogue's tests were the only ones that I had read, and they indicated that megapixels didn't really matter.

So let me get this straight: What you're saying is that it's all well and good to test cameras and lenses, but the test results won't necessarily have any bearing on results in real world photographs? Are you saying that real world photos are often more valid tests (or produce more useful data) than tests performed in labs? Or are you saying both?

For example, I'm under the impression that your "moonbow" photo didn't originate as a test. It was only after you took a close look at the noise signature that you noticed the banding, which, as you stated in your post, would not necessarily be noticeable in a photograph shot under different conditions. This makes me think that the way to determine how a given piece of equipment performs is not only to subject it to an extensive battery of scientific tests, but also to use it extensively under real-world conditions. If this requires producing the occasional photograph rather than a steady stream of test data then that is a price I am willing to pay.

Given dpreview and Imaging Resource's penchant for extensive image quality testing and lengthy reviews, you would think they would include testing and reporting of this nature, but they don't. (Imaging Resource does plot noise versus frequency from its Imatest testing, and dpreview does show 100% crops for you to examine yourself; the problem is that neither specifically tests for, nor calls out, this aspect of noise performance.)

Hope they are reading this...

"This makes me think that the way to determine how a given piece of equipment performs is not only to subject it to an extensive battery of scientific tests, but also to use it extensively under real-world conditions."

Arthur Kramer, who I know you know, once wrote on CompuServe--I paraphrase--something like "A Leica lens designer once said 'the only way to test a lens is to use it for a year. Everything else is a shortcut.'"

I contacted Arthur recently to try to get a better provenance on the quote, but he didn't recall who said it or what the exact quote was. Still, it always struck me as telling, even if it's apocryphal.

(For those who don't know the name Arthur Kramer, he's basically the father of scientific lens tests in photography magazines--he was the one who introduced resolution and contrast tables to lens reviews in the old "Modern Photography" magazine, which was later folded into "Popular Photography." Pop's lab tests for lenses and its "SQF factor" and so forth are basically descendants of Arthur's original innovations.)


If banding is predictable, you can play games with a "dark" image (take a picture of the lens cap and subtract off, roughly) which some cameras do automagically.

This eliminates/reduces that part of the banding that is 'the same' from shot to shot. This leaves the banding that, well, is NOT the same shot-to-shot.
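[Ed. note: a toy version of the dark-frame trick, with a made-up fixed row pattern, shows both the benefit and the cost the commenter mentions later (the random noise goes up by roughly the square root of 2):]

```python
import numpy as np

rng = np.random.default_rng(3)
h, w = 100, 100

# Fixed-pattern banding: the same row offsets appear in every frame.
fixed_bands = rng.normal(0.0, 4.0, h)[:, None] * np.ones((1, w))

def expose(scene):
    """One exposure: scene + fixed pattern + fresh random noise."""
    return scene + fixed_bands + rng.normal(0.0, 1.0, (h, w))

light_frame = expose(np.full((h, w), 60.0))
dark_frame = expose(np.zeros((h, w)))  # lens cap on: the scene is black

# Fixed bands cancel; random noise grows by about sqrt(2).
corrected = light_frame - dark_frame

band_before = float(light_frame.mean(axis=1).std())
band_after = float(corrected.mean(axis=1).std())
```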

For that, if the banding has.. certain statistical properties within itself you can use a process which some have called "deplaiding" to reduce it. Go here: http://davinci.asu.edu/ and search for "plaid" to learn (a little) more. As far as I am aware, davinci is the only tool that attempts to do this sort of thing, but there are surely others -- or will be. Davinci's algorithm, in a test of one (1) image, seems to alter the image in ways that were not 100 percent positive, probably because the plaiding did not have the statistical properties davinci expected.

Anyways, all this stuff is going to add more random noise in, so at best you're trading one kind of noise for another kind of noise that is, potentially, less irritating.

Finally, I'm pretty sure you're gonna get a bunch of quantization noise at sufficiently high ISOs. If the pixel values on the sensor are, for example, only 0 (black) and 1 (damn near black) and you amplify them, you're going to get an image with blacks and whites, and nothing in between. This is the extreme case of quantization noise. Another way to look at it is, your sensor produces say 12 bits of data per pixel -- but at high ISOs that drops. Color fidelity drops into a hole, and you start to get weird artifacts.
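[Ed. note: the extreme case above is easy to demonstrate. Quantize a smooth, very dark ramp to whole raw levels and then apply gain, and the gradient posterizes into a few flat bands; the gain value and signal range are assumed for illustration.]

```python
import numpy as np

# A smooth, very dark gradient spanning only a couple of raw levels.
true_signal = np.linspace(0.0, 2.0, 1000)

# The ADC rounds to whole levels; high-ISO amplification then scales
# those few levels up.
gain = 64
quantized = np.round(true_signal) * gain

levels_used = len(np.unique(quantized))             # the ramp collapses to 3 levels
max_step = float(np.abs(np.diff(quantized)).max())  # 1-level jumps become huge steps
```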

I think various manufacturers have different ideas about how hard to push it, and how much software to apply after the fact. Some cameras are willing to let you set absurdly high ISOs and get really lousy images out (if that's what you prefer). An interesting experiment performed by a friend of mine was to compare raw to JPEG at high ISOs, and the JPEGs (which used the camera manufacturer's software to reduce noise) were surprisingly good, even when compared to well-post-processed raw images. Not surprising, really, since the manufacturer can characterize exactly what the system will do and craft their software to match it more exactly than general-purpose noise reduction software can.

Brian: You need to knock this "best camera for..." concept on the head quick, before it spreads through your brain and really messes you up. It's that dangerous!

For really nice portraits, you need...any DSLR or Micro 4/3 camera, a moderate telephoto prime lens, good lighting (natural or artificial), and good skills at shooting portraits.

That last one is the biggy.

Portraits are not technically particularly challenging as a class (though individual exotic portrait ideas sometimes are). They are among the hardest things to consistently do well, but that relates to the issue of interacting with the subject and getting them to present a good view of themselves to the camera, not to the photo technology. And if your mind is on the technology, you won't do the important part well.

This might be a good start:

You will be most interested in part 6, the resolution testing.

Dear Martin and Andrew,

Regarding the random versus systematic question, I pulled up the first two image files in the folder and got very excited, because the banding was the same in both of them. Then I realized I was comparing the dng file to the JPEG I made from it. Dang!

Looking at the four DIFFERENT photographs I made at ISO 1600, unfortunately most of the banding is random. There is a low-amplitude, ultra-low-frequency component that is systematic, but that isn't the majority of the problem. Too bad. Creating a banding-canceling set of adjustment layers would actually be quite a lot of work, possibly as much as two days of my time, but if I could apply them across the board it would be worth it. That is not to be.

Stacking isn't an especially useful solution, because to make use of that I would have to have the camera mounted on a tripod and make 4-8 identical exposures to stack. If I've got the time and platform to do that, I can just change the ISO to 400!
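For what it's worth, the reason stacking works at all is that averaging N frames knocks random noise down by roughly the square root of N, while the scene (and any systematic banding) stays put. A sketch, with names of my own invention:

```python
import numpy as np

def stack_exposures(frames):
    """Average N identical exposures. Random noise drops by about
    sqrt(N); the scene and any fixed banding are unchanged -- which
    is why the technique demands a tripod and repeated shots."""
    return np.mean(np.stack(frames), axis=0)
```

Four to eight frames cut the random noise by a factor of two to nearly three, which is why it competes with simply dropping the ISO.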

Andrew, the "deplaiding" algorithm brings up one of the interesting problems trying to design filters for photographic rather than scientific purposes. It is really, really hard to create a filter that does something sophisticated that doesn't introduce aesthetically-annoying artifacts of its own. If one's goal is art rather than science, it's a tough nut to crack. It's a big reason why kernel-deconvolution sharpening algorithms have to be used very carefully when the object is "purty pitchurs."

I can see quantization noise in the palm trees, which are not actually black -- there is some tonal separation there. That was one of the things I had to go to some effort to suppress to make a good print. I wanted to retain tonality in the trunks; I didn't want them to become pure silhouettes. But they looked very weird and harsh and grainy against the relatively grainless sky. I had to do a certain amount of adroit noise reduction and paint it into the trees to get the textures to match up.

There may be some quantization noise in the banding, but when I look at the chroma data I see a sufficient continuum of values that this is not the primary source of the noise.

~ pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 

Dear Christian,

There is no easy way to do this.

It's almost impossible to do off-axis except for objects at infinity.

The inclined yardstick is a crude test of focus accuracy, not a test of lens sharpness.

The following rules of thumb will serve you well enough. Almost any decent (or better) lens you buy will be at its sharpest 2-3 stops below maximum aperture or at f/5.6-f/8, whichever is wider, excepting very long telephoto lenses, which might require going to f/11.

For example, that Pentax-M f/1.7 lens was just a hair less sharp at f/4 than at f/4.8, and it was definitely less sharp due to diffraction effects at f/5.6. And, yes, many hours were spent poring over a microscope, analyzing lots and lots of photographs to figure that out. I needed that level of precision for running film tests. No sane human being would.
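That rule of thumb reduces to a few lines of arithmetic; the 2.5-stop default and the f/5.6 floor in this sketch are my reading of it, not Ctein's exact figures:

```python
import math

def sharpest_aperture(max_f_number, stops_down=2.5):
    """Stop down 2-3 stops from wide open, or to f/5.6, whichever is
    wider (i.e., the smaller f-number). One stop multiplies the
    f-number by sqrt(2). Very long telephotos are the exception."""
    stopped_down = max_f_number * math.sqrt(2) ** stops_down
    return round(min(stopped_down, 5.6), 1)
```

For the f/1.7 lens above this lands at f/4, in line with the microscope result.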

~ pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 

Dear Chris,

Yes, it was your tests I was writing about, but I didn't want to mention you by name, because I could be more blunt that way without making it seem personal or come across as some kind of attack on you. You were meant to be used as an object lesson [ smile ].

Michael Reichmann showed conclusively over half a dozen years ago that you could get great prints up to medium size out of a six megapixel camera. For many folks, six megapixels will be quite sufficient to do good work, and that wheel does not have to be re-examined.

As for whether 12 megapixels will give you sharper pictures than six megapixels, on average, yes. But, as you discovered, if you stop down far enough, none of it matters. In the same way that it didn't really matter if you loaded your film camera with Tri-X or Tech Pan if you were photographing everything at "Sunny-16."

I wouldn't go to too much trouble doing tests (and I'm glad to hear you didn't previously), because there are too many confounding factors. This way lies madness. I'm already crazy, so I'm not at risk. You are.

But if you insist upon pursuing this... I would just try a test at f/5.6. It's going to be close enough to optimum aperture for your lens as doesn't matter. And, on average, that pixel count will make a difference, as I said, but there are two major ways that "average" could go wrong.

The first is whether or not your cameras focus accurately. Go back and reread my column "Focusing Follies" of a couple of weeks back. Odds are that neither of your cameras has perfect focus. In which case, if what you're trying to look at is the pixel count question rather than the overall performance of the cameras, you're going to have to make a whole bunch of photos carefully bracketing the focus around the indicated focus point and then pick out the very sharpest one (it's pixel-peeping time).

The second confounding factor is one you can't do anything about, which is that the resolution of a digital camera isn't entirely predictable from its pixel count (and of course it's different for JPEG versus raw). Different makes of camera with the same pixel count can easily vary by 10% from the norm in resolution. Which is not a big difference. But in this case, the variances compound:

In a theoretical world, your 12 megapixel camera will have 1.4 times the resolution of your six megapixel camera. But throw in that 10% variance and what it means is that the six megapixel camera might exhibit a resolution anywhere from 0.9 to 1.1 and the 12 megapixel camera from 1.35 to 1.55. Now suppose that the six megapixel camera is sharper than average and the 12 is less sharp than average. Now you're comparing cameras with resolutions of 1.1 versus 1.35 which is more like a 20% difference than the expected 40% difference, and that may not be very obvious in your prints. Especially when other sources of blur get rolled into it (less-than-perfect lens, less-than-perfect focus, etc.).

Or it might go the other way and there might be a whopping 65% resolution difference between the two cameras. Ya nevva knows.
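The arithmetic above is just square-root-of-pixel-count scaling plus a spread; here is a toy calculator of my own (the 10% figure and the 6 MP baseline come from the text, everything else is an assumption):

```python
import math

def resolution_range(megapixels, baseline=6.0, spread=0.10):
    """Linear resolution scales as sqrt(pixel count). Return the
    (low, nominal, high) resolution relative to a 6 MP baseline,
    assuming each camera may sit up to 10% off the norm."""
    nominal = math.sqrt(megapixels / baseline)
    return nominal * (1 - spread), nominal, nominal * (1 + spread)
```

A worst-case pairing (a sharper-than-average 6 MP body against a softer-than-average 12 MP one) compresses the expected 40-odd percent gap considerably, as described above.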

I hope I have encouraged you to make more photographs that you really want to make... and fewer tests.

~ pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 

Dear Shawn,

I did take a look at the dpreview tests before writing this column, and if I knew what I was looking for I could just barely see the banding in their small 100% crops. It wasn't very visible, though. Part of the problem with really low frequency noise like this is that you need to see more of the image.


Dear Gordon,

I wouldn't really put it either of those ways. Let me try this out:

A test gives you information that is broadly applicable about one particular image characteristic that you've chosen to investigate. A real-world photograph gives you information about a whole bunch of different image characteristics for that particular situation (that you may not understand very well).

Indeed, as you observed, this is why all good equipment tests involve both lab tests and field tests. I wouldn't trust reports from anybody who doesn't do both, unless I already know that they are very, very good at this. It's amazing how much information an expert can extract from just one type of test or the other, but it requires a hell of a lot of knowledge to do it.

{Not meaning to brag... oh wait, yes I am... but I could completely test a new film with as few as 10 frames, and definitely no more than one roll (unless the testing involved processing variations, like different developers or push-processing). Honestly, it takes ten years to get that good.}

~ pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 

"Dfine is pretty, errmmmm, fine, but it can't really cope with banding on this scale, where the bands run 50-150 pixels wide."

Ctein, I can't recall seeing banding that wide with the GH1. Maybe that's why I've been so happy with the Dfine results, which have rescued a number of my images.

@Brian: As has been said, there's no real "best camera" for portraiture per se. I would say, however, to be wary of the aspect ratio. I have serious hangups about throwing away data in any format; I just can't see a potential square crop while I'm composing on a 3:2 dSLR (because I work to fill the frame, whatever shape the frame is), but if I wave my 6x6 gear at the same scene, I may well see it.
Normally around 100mm (35mm equivalent) is a reasonable focal length to use.

If it were me, I'd use a Pentax FA 50/1.4 on a reduced-sensor DSLR.

Love that lens.


Great article.

Dear Folks,

A little late to the party, but the URL below has finished ISO 640 ("Jazz Dinner") and ISO 800 ("Moon Bow") photos from the Olympus Pen.


Jazz Dinner is only slightly massaged from the RAW file. A lot of work went into getting Moon Bow to look right, but very little of that had to do with noise reduction.

pax / Ctein

I've seen banding like that in every single one of the Matsushita 4/3 sensors that I've tried, ever since the platform came out. I've given up on it.

The comments to this entry are closed.