By Ctein
The news of the NEX-7 has reignited some smoldering old arguments about diffraction effects with small sensors. My basic advice: you should ignore any and all discussions of how diffraction relates to image quality if they in any way imply that you have to be working diffraction-limited or you won't get acceptable results.
Example: In 35mm film photography, the optimum aperture in terms of peak sharpness for most good lenses was between ƒ/4 and ƒ/5.6. At apertures smaller than that, diffraction dominated. You used larger apertures than that when you needed more light-gathering ability; you used smaller apertures than that when you needed more depth of field (at some sacrifice of peak sharpness). And you certainly did stop down if you needed the additional depth of field.
It's really no more complicated than that. A lens will have one or two apertures where it is "best." Those are by no means its only "good" apertures. Diffraction is a detectable effect, but it's seldom a debilitating effect.
Once upon a time, everybody who was halfway decent at photography knew that. Nowadays I sometimes feel like hardly anyone knows that.
Now look at it from the other side: if you have two cameras with the same format sensor, one of which has 10 megapixels and the other of which has 40, you are going to be "diffraction limited" two stops wider with the 40 megapixel camera than the 10. That's because the 10 MP camera has already thrown away, at the sensor level, any sharpness benefit gained from those two extra stops! Just because you can stop down the 10 MP camera to, oh, say, ƒ/8 before you see any diffraction and you can only stop down the 40 MP camera to ƒ/4 before diffraction shows up, that in no way means that the 10 MP camera is going to produce sharper pictures at ƒ/8 than the 40 MP camera.
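A back-of-the-envelope sketch of this point (my own numbers and a common rule of thumb, nothing from a specific camera spec sheet): the f-stop at which the Airy disk outgrows one pixel scales with pixel pitch, so quadrupling the pixel count halves the pitch and moves that onset exactly two stops wider.

```python
import math

WAVELENGTH_MM = 0.00055  # green light, 550 nm

def pixel_pitch_mm(sensor_w_mm, sensor_h_mm, megapixels):
    """Pixel pitch for a given format and pixel count (square pixels assumed)."""
    pixels_wide = math.sqrt(megapixels * 1e6 * sensor_w_mm / sensor_h_mm)
    return sensor_w_mm / pixels_wide

def diffraction_onset_fstop(pitch_mm):
    """f-number at which the Airy disk diameter (2.44 * wavelength * N) equals one pixel."""
    return pitch_mm / (2.44 * WAVELENGTH_MM)

for mp in (10, 40):
    pitch = pixel_pitch_mm(36.0, 24.0, mp)  # full-frame dimensions as an example
    print(f"{mp} MP: pitch {pitch * 1000:.1f} um, onset ~f/{diffraction_onset_fstop(pitch):.1f}")
```

Quadrupling the megapixels halves the pitch, so the onset f-number halves as well: two stops, as described above.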
And, for more subtle and complicated reasons, more pixels will always be a win in terms of final image sharpness regardless of any other optical constraints (image noise and light sensitivity are another matter).
You can find plenty of people who make much of diffraction limits these days. Most (not all, but most) of them know a lot more about technological theory than practical photography. When in doubt, look at actual pictures and let your eyes be your guide—and don't let technical discussions of diffraction effects be a source of worry to you.
Ctein
Featured Comment by Marc Rochkind: "Glad you made the point about why one would choose a particular aperture. I regularly use my Tamron 90mm macro at ƒ/32, which, of course, is wrong, wrong, wrong. But otherwise there's no depth-of-field with the subject nearly touching the lens, so it's really right, right, right.
Voigtlander Vito B lens taken at ƒ/32
"Bet you can't see the diffraction!"
Featured Comment by JohnMFlores: "Most cameras over $400 these days are talent-limited."
FWIW, Bryan Peterson included a two-page sidebar titled “Diffraction vs. Satisfaction” in the 3rd edition of his book Understanding Exposure:
“Almost weekly, I receive e-mails from students at my online school, as well as from readers of my books, who are ‘concerned about shooting pictures at apertures of f/16 or f/22.’ Seems a couple of those ‘big’ photography forum Web sites have unleashed some really old news that when a lens is set to the smaller apertures, lens diffraction is more noticeable….”
(He goes on to explain why it's dumb to avoid small apertures.)
Posted by: Gary Brown | Tuesday, 30 August 2011 at 03:51 PM
I have a friend who is fond of making those sorts of calculations. He has told me several times that I'm not using the best aperture for a given camera and that diffraction limiting must be decreasing sharpness.
But somehow, when I take 2-3 shots of the same static subject at different apertures, pixel peeping shows that a theoretically diffraction limited image is the sharpest.
Using that knowledge in regular photography regularly results in the best outcomes, in my case, at least.
Yes, I know, there are other possible factors, accuracy of focus and DOF key among them. But with enlarged live view focusing, they're far less of a factor than they used to be. As you say, a bit indirectly, system bandwidth is a complex function of many factors.
The point is - try it all out, and find out what actually works best with your equipment. Everything else is just chaff.
I believe your last sentence would be more useful with the first phrase, "When in doubt, look at actual pictures and let your eyes be your guide" changed to "Always look at actual pictures and let your eyes be your guide".
The map is never the territory.
Moose
Posted by: Moose | Tuesday, 30 August 2011 at 04:43 PM
Dear Marc,
Assuming we're talking 35mm format here, stopping down to a real (as opposed to indicated) f/32 only gains you about 10% more depth of field than working at f/22, because diffraction blur is approaching out-of-focus blur. It loses you 30% on-focus sharpness compared to f/22. Rarely worth it.
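A toy model of that trade-off (my own assumptions, not necessarily Ctein's exact math: blur terms combine in quadrature, a 0.056 mm permissible blur circle for 35mm, 550 nm light):

```python
import math

WAVELENGTH_MM = 0.00055  # 550 nm green light

def airy_diameter_mm(n):
    """Diffraction blur (Airy disk) diameter at f-number n."""
    return 2.44 * WAVELENGTH_MM * n

def dof_factor(n, max_blur_mm):
    """Relative depth of field when diffraction eats into the blur budget.

    Geometric defocus blur shrinks as 1/n, so DOF grows with n, but only
    until the Airy disk uses up the allowed total blur (quadrature sum).
    """
    airy = airy_diameter_mm(n)
    return n * math.sqrt(max_blur_mm ** 2 - airy ** 2)

COC = 0.056  # assumed permissible total blur, in mm
print("DOF gain, f/22 -> f/32:", dof_factor(32, COC) / dof_factor(22, COC))
print("On-focus blur growth:  ", airy_diameter_mm(32) / airy_diameter_mm(22))
```

With these assumptions, the DOF gain comes out near 10% while the on-focus diffraction blur grows about 45% (roughly a 30% sharpness loss), in line with the figures Ctein gives.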
pax / Ctein
Posted by: ctein | Tuesday, 30 August 2011 at 05:36 PM
Yeah, there is a thread over at rangefinder forum about that very subject. The more practical replies run toward '..don't worry about it, just go take pictures..' It's still not as bad as those people who obsess about a few bits of dust in between lens elements. I know even my Box Tengor exceeds my skill as a photographer.
Posted by: John Robison | Tuesday, 30 August 2011 at 05:40 PM
Great post! And of course there's even more to it.
First, current sensors on interchangeable lens cameras are not close to oversampling a lens that is diffraction-limited at f/2.
Second, the calculations that we're all talking about are for a monochrome sensor! Remember that the specified pixel pitches for real digital cameras are not for monochrome sensors – they are for Bayer RGB arrays.
On a Bayer array, the real pitch for red and blue light is 2x the specified pitch, and for green light it's roughly 1.4x (the square root of 2) the specified pitch. The "pixels" generated by a RAW converter are demosaiced interpolations, which allows retrieval of some, but by no means all, of the nominal resolution (transfer function).
In addition, the specified pitch is only correct along the horizontal or vertical axes. The pitch of a rectangular array along the diagonal is again a factor of 1.4x (the square root of 2) larger than the specified pitch.
Thus the specified pitch of a real, practical RGB color sensor considerably overstates the actual spatial resolution of the sensor. For practical color photography, real resolution is (depending on the color of the incident light, the axial tilt, the presence of optical or digital antialiasing filters, the demosaicing algorithm, and other factors) always much worse than the pixel pitch would suggest.
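The pitch arithmetic in that argument, coded directly (the per-channel factors are the ones stated above; real demosaicing recovers somewhat more than this naive accounting suggests):

```python
import math

def effective_pitch_um(nominal_pitch_um):
    """Per-channel sampling pitch on a Bayer array, per the figures above."""
    return {
        "red": nominal_pitch_um * 2.0,             # one red site per 2x2 block
        "blue": nominal_pitch_um * 2.0,            # one blue site per 2x2 block
        "green": nominal_pitch_um * math.sqrt(2),  # two green sites per 2x2 block
        "diagonal": nominal_pitch_um * math.sqrt(2),  # square-grid diagonal spacing
    }

print(effective_pitch_um(4.0))  # a hypothetical 4 um nominal pitch
```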
Third, there can be significant technical advantages to oversampling, not least of which is the fact that you no longer need to correct for aliasing with optical or digital antialiasing filters. There's also the fact that Nyquist sampling doesn't work when the signal is noisy. Spatial averaging of an oversampled signal can, depending on the noise costs, be a good way to compensate for this.
These considerations all argue for the technical merits of pixel arrays that are considerably denser than the ones currently available.
Moreover, with sufficient computational power, it is possible to exceed the Abbe limit if you know the lens's point spread function and can computationally deconvolve. This has been a standard technique in optical microscopy for well over a decade (and in astronomy before that), and it can provide roughly a factor of 2 increase in effective linear resolution.
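A minimal one-dimensional sketch of the deconvolution idea (pure Python with a naive DFT; real microscopy pipelines use far more sophisticated regularized algorithms): when the point spread function is known, dividing it out in the frequency domain, Wiener-style, recovers most of the blurred detail.

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (fine for tiny demos)."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * m * k / n)
               for k in range(n))
           for m in range(n)]
    return [v / n for v in out] if inverse else out

def circular_convolve(signal, kernel):
    """Circular convolution; models blurring by a known PSF."""
    n = len(signal)
    return [sum(signal[(i - k) % n] * kernel[k] for k in range(n))
            for i in range(n)]

def wiener_deconvolve(blurred, kernel, noise_power=1e-6):
    """Divide out the PSF in the frequency domain, with Wiener regularization."""
    B, K = dft(blurred), dft(kernel)
    X = [b * k.conjugate() / (abs(k) ** 2 + noise_power)
         for b, k in zip(B, K)]
    return [v.real for v in dft(X, inverse=True)]

psf = [0.25, 0.5, 0.25] + [0.0] * 13   # known 3-tap blur kernel
sharp = [0.0] * 8 + [1.0] * 8          # ideal step edge
blurred = circular_convolve(sharp, psf)
restored = wiener_deconvolve(blurred, psf)
print(max(abs(r - s) for r, s in zip(restored, sharp)))  # tiny residual error
```

The `noise_power` term is what keeps the division stable at frequencies where the PSF's response approaches zero; choosing it well is where the real engineering lives.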
I'd wager that this is already being done in the iPhone 4's (stunningly good for its size) camera – which has a pixel pitch of ~2 µm (500 px/mm)! Don't think for a moment that Apple and Sony don't know what they're doing with that sensor.
(Note: a FF DSLR with the iPhone 4's 2µm sensor pitch would be 12,000 x 18,000 = 216 Mpix. And an APS-C camera with that pitch would be ~96 Mpix!)
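The sensor arithmetic is easy to verify (taking full frame as 36 x 24 mm):

```python
def megapixels(width_mm, height_mm, pitch_um):
    """Pixel count, in megapixels, for a sensor of the given size and pitch."""
    return (width_mm * 1000 / pitch_um) * (height_mm * 1000 / pitch_um) / 1e6

print(megapixels(36, 24, 2.0))  # full frame at a 2 um pitch -> 216.0
```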
Of course, in practice, any handheld camera used at exposures longer than 1/1000 s isn't going to resolve more than about 6 Mpix in any case, regardless of the lens or sensor used. Camera shake and focus errors will absolutely wipe out any resolution above that, fancy lenses and ultra-dense sensors notwithstanding.
Posted by: Semilog | Tuesday, 30 August 2011 at 05:42 PM
This might be a good moment to refer people back to John Williams's classic book, Image Clarity: High Resolution Photography, which underscores the point that the major problems faced by most photographers are NOT lens or sensor resolution, but rather camera movement and focus error.
Posted by: Semilog | Tuesday, 30 August 2011 at 05:54 PM
Thank you, Jesus!!!! Some rationality in the aperture department.
Posted by: Dennis Allshouse | Tuesday, 30 August 2011 at 06:04 PM
But it is also true that to take full advantage of the resolution of your higher-Mp-density sensor, you need higher-quality lenses.
Posted by: Tim Ashton | Tuesday, 30 August 2011 at 06:37 PM
Counterpoint:
Subject detail lost to diffraction cannot be recovered. It's gone forever. What point is there in selecting an aperture (f/32?) that will provide sufficiently small CoCs at the near and far limits of the subject space when doing so may force diffraction's Airy disk diameters to far exceed your chosen maximum for CoC diameters?
In other words, why would you want to make the entire print soft, in the quest to give the appearance that your foreground and background are no less sharp than the plane of sharpest focus? To each his own, I suppose.
Owners of 16 MP digicams like the Canon PowerShot A3300 IS, the Casio Exilim EX-H30, the Nikon Coolpix S6100, the Sony Cybershot DSC-HX9V, or the FujiFilm FinePix JX350 will not be able to secure subject detail at even 5 lp/mm in the final print (after suffering the whopping 52.6x enlargement factor required to produce a print measuring only 9.6 by 12.6 inches) unless they avoid stopping down below f/2.8 at all times, simply because there are too many pixels on too small a sensor. If they use f/4.0, subject detail will be limited to 2.5 lp/mm after enlargement. For those cameras in this group which offer it, choosing f/8.0 would limit subject detail to only 1.25 lp/mm after enlargement. Remember that these figures only get worse if the users of these 16MP tiny-sensored digicams attempt to make even larger prints.
Diffraction is real. It's destructive. The f-Number at which it will begin to inhibit a desired print resolution in lp/mm at an anticipated enlargement factor can be calculated:
f-Number = 1 / (desired print resolution × anticipated enlargement factor × 0.00135383)
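Mike's formula, coded as given (the 0.00135383 constant is his; it reproduces the f/2.8 figure for the 52.6x enlargement example above):

```python
def max_fstop(print_lp_per_mm, enlargement_factor):
    """Largest f-number before diffraction limits the desired print resolution."""
    return 1.0 / (print_lp_per_mm * enlargement_factor * 0.00135383)

# 5 lp/mm in the print at the 52.6x enlargement cited above:
print(round(max_fstop(5, 52.6), 1))  # -> 2.8
```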
Mike Davis
Posted by: Mike Davis | Tuesday, 30 August 2011 at 06:45 PM
@Mike Davis: the compromise is often worth it, especially when you consider that the detail is often not totally lost -- it's just reduced in contrast (an MTF-50, for example, might become an MTF-25). And that low-contrast fine detail often can be recovered through judicious application of sharpening filters.
For serious optical microscopists who operate at the edge of diffraction most of the time, this is bread-and-butter stuff.
Posted by: Semilog | Tuesday, 30 August 2011 at 08:02 PM
This reminds me of numerous threads on various fora debating whether/which lenses are "good enough" for 24MP.
Aren't they all ?
First off, I've yet to see a case where any lens doesn't show more detail when tested with a higher res sensor than a lower res one, even if the lens isn't "the best". Take lens A and lens B, test both on an 8MP camera. Lens A records detail at a higher lp/mm or lp/picture-height. That suggests lens B is challenged at 8MP, yet test them both again at 18MP and both lenses record detail at a higher frequency.
Second, just because a new camera comes out with a 24MP sensor, do you need to see a sqrt(1.5) increase in recorded detail to justify buying the camera? As a Sony owner, the A77 offers me the quiet shutter that's my number one reason for upgrading from my A700. It also offers fast AF during LV, which can be helpful shooting candids of kids. And that really nice 2-axis level overlay. 24MP just comes along with the territory.
Finally, considering that manufacturers do their best to tempt us with new bodies every few years, but we're supposed to "invest" in lenses, wouldn't you want your lenses to be the limiting factor (when it comes to sharpness)? Isn't the ideal situation a sensor that maximizes the potential of any lens you put on it, rather than the unrealistic expectation of lenses that exploit the potential of some arbitrarily packed sensor, only to be made "obsolete" by the next generation of arbitrarily packed sensors?
Maybe that's just me, but I have a pretty good sense of my lenses and don't worry about whether they're "good enough" for a higher res sensor. (They are).
- Dennis
Posted by: Dennis | Tuesday, 30 August 2011 at 08:31 PM
Day One In Photo School 39 Years Ago:
"...the best any lens will ever be in terms of sharpness is about 2 stops down from wide open; of course, that doesn't mean you'll get everything you need in focus at that setting..."
Digital Photography 39 Years Later:
"...duh, I don't know..."
My Nikon allegedly even corrects for chromatic aberrations; how do I know what it's doing in there with anything?
Posted by: Crabby Umbo | Tuesday, 30 August 2011 at 08:49 PM
OH, BTW, f/8 and be there...
Posted by: Crabby Umbo | Tuesday, 30 August 2011 at 08:50 PM
It's easy to find the sharpest aperture of your favorite lenses — set up the tripod and shoot a newspaper taped to the wall at various apertures, the old newspaper test. Only takes ten minutes.
When I buy a lens that's going to get a lot of use, I want to know the sharpest aperture available, so I do the newspaper test. Most of mine test sharpest in the center at f/8, a couple at f/5.6, but it's always a pretty close call. (I'm shooting a full-frame 12 megapixel dSLR.)
So now with my favorite few lenses, when I'm flexible on DOF, I know that I can shoot at f/5.6 or f/8 and keep diffraction at a minimum. It's only handy now and then, but when I really want that last bit of sharpness, I know where I'll find it.
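One way to make the newspaper test more objective (a hypothetical helper of my own, not something Joe describes): score a matching crop from each shot with a simple variance-of-Laplacian sharpness metric and keep the aperture that scores highest.

```python
def laplacian_variance(gray):
    """Sharpness score for a 2-D list of grayscale values; higher = sharper."""
    h, w = len(gray), len(gray[0])
    responses = [
        4 * gray[y][x]
        - gray[y - 1][x] - gray[y + 1][x]
        - gray[y][x - 1] - gray[y][x + 1]
        for y in range(1, h - 1)
        for x in range(1, w - 1)
    ]
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A crisp checkerboard scores high; a featureless patch scores zero.
checker = [[(x + y) % 2 for x in range(8)] for y in range(8)]
flat = [[1] * 8 for _ in range(8)]
print(laplacian_variance(checker), laplacian_variance(flat))  # -> 16.0 0.0
```

In practice you'd load the same newspaper crop from each aperture's file and compare scores; the ranking, not the absolute number, is what matters.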
Posted by: Joe | Tuesday, 30 August 2011 at 09:18 PM
And I thought this was the Online PHOTOGRAPHER! But no, it's the online optical theoretician.
(theoretical humor, BTW)
Posted by: Jim | Tuesday, 30 August 2011 at 09:27 PM
Marc, I went one further in ignoring diffraction, shot same cam with same lens on HIE:
http://www.pbase.com/mononation/image/67620700/medium
Posted by: Dave Elden | Tuesday, 30 August 2011 at 09:48 PM
Dear Tim,
There is a persistent myth out there that most lenses can't produce resolutions sufficient to take advantage of modern sensors. It is indeed a myth. Yes, you need good lenses, but you don't need exalted ones.
Furthermore, it is not a "weak link in the chain" situation. Improving EITHER sensor or lens resolution results in an overall improvement in sharpness. Read this:
"Diffraction In Perspective"
Anyone who says they're not buying more megapixels because their lenses aren't good enough to take advantage of them is either looking for an excuse not to buy a new camera (hardly to be faulted) or owns really crappy lenses.
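One common back-of-the-envelope way to see the "not a weak link" point (a rule of thumb of my own, not necessarily the math in Ctein's linked column): combine component resolutions reciprocally, like parallel resistors, and note that improving either component improves the system.

```python
def system_resolution(lens_lp_mm, sensor_lp_mm):
    """Rough combined resolution: reciprocals of component resolutions add."""
    return 1.0 / (1.0 / lens_lp_mm + 1.0 / sensor_lp_mm)

print(system_resolution(80, 60))   # baseline
print(system_resolution(80, 90))   # better sensor, same lens: system improves
print(system_resolution(100, 60))  # better lens, same sensor: ditto
```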
~~~~~~
Dear semilog,
Well said. I also addressed some of these points a while back in this column:
-- "Why 80 Megapixels Just Won't Be Enough..."
~~~~~~
Dear Jim,
And occasionally (viz. my Column 200) it is the Online Theoretical Physicist! [Grin]
pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com
-- Digital Restorations http://photo-repair.com
======================================
Posted by: ctein | Tuesday, 30 August 2011 at 11:45 PM
A lot of this comes from spending too much time looking at pixels and not enough time looking at pictures. People worry about loss of pixel quality from going to higher pixel count sensors without considering the benefits to picture quality. It's just that now they're starting to worry about diffraction instead of just noise. A higher pixel sensor will rarely take a (technically) worse picture than a lower pixel count sensor of the same size and technology, but can quite easily take a (technically) better picture when used under favorable conditions.
Posted by: Roger Moore | Wednesday, 31 August 2011 at 12:07 AM
When film changed to digital a lot of photographers were annoyed to find the minimum aperture of their new camera was only f8. They bought cameras with slightly larger sensors that only went to f11. They were used to 50mm lenses on 35mm film cameras having a minimum of f16 or telephoto lenses with a minimum of f22 or f32. View camera operators often use f64 or f128. There is a sweet spot for aperture based on the ratio of aperture circumference to aperture area. The physical size of the aperture on a view camera is much larger than the physical size of the aperture on a consumer digicam thus the ratio is different. The effect is compounded by the amount of magnification required to make a usable print and offset by the viewing distance for said print. You should be able to deconvolve a diffraction limited image and create a uniformly sharp image using software since diffraction follows a known set of rules.
Posted by: bokeh | Wednesday, 31 August 2011 at 05:16 AM
Most cameras over $400 these days are talent-limited.
Posted by: JohnMFlores | Wednesday, 31 August 2011 at 05:13 PM
Well, I don't have a double degree, and that is probably related to my learning difficulty, but that doesn't stop me seeing.
Whereas on my 12Mp body you had to look to the extremities to see the advantage that my 17-55 had over the better consumer lenses, now that I am shooting with 16.2Mp the difference that the "pro" 17-55 makes in comparison is obvious to all.
Despite my lack of formal learning I stand by my original comment:
" that to take advantage of the resolution of your higher Mp density sensor you need higher quality lenses."
Posted by: Tim Ashton | Wednesday, 31 August 2011 at 06:58 PM
Dear Tim,
It is not an either/or. Better lenses make sharper pictures. More pixels make sharper pictures. Just because you see an improvement by improving one does not make improving the other of lesser importance.
Again, please read the columns I recommended.
pax / Ctein
Posted by: ctein | Wednesday, 31 August 2011 at 10:45 PM
Dear JohnMFlores,
Oh, you rock!
pax / Ctein
Posted by: ctein | Wednesday, 31 August 2011 at 10:45 PM
@Tim Ashton: When you go from 12 to 16.2 Mpix, the sensor's linear resolution increases by.... wait for it.... a whopping 16%.
Consequently, it is highly unlikely that the increase from 12 to 16.2 Mpix would suddenly unveil previously unseen resolution differences between your 17-55 and other lenses.
If you are seeing such differences, they are more likely attributable to other differences between the sensors: weaker antialiasing filter, better charge separation, superior RAW conversion algorithm for the new sensor, etc. – not increased resolution due to the number of megapixels.
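The arithmetic behind that percentage is quick to check: linear resolution scales with the square root of the pixel count.

```python
import math

# Going from 12 to 16.2 megapixels:
gain = math.sqrt(16.2 / 12) - 1
print(f"linear resolution gain: {gain:.1%}")  # roughly 16%
```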
Posted by: Semilog | Thursday, 01 September 2011 at 10:49 AM
It occurs to me that nobody has addressed the actual question posed in the title of this article.
The answer is "yes" (because every lens and pinhole does).
Now, carry on!
Posted by: David Dyer-Bennet | Thursday, 01 September 2011 at 05:44 PM