
Wednesday, 11 February 2009

Comments

Excellent analysis Ctein, thank you.

Have to ask though, how would a Foveon sensor fare with your examples and in general?

Interesting post, as I have been musing about another application of hundreds-of-megapixel sensors. These sensors could become a reality, as manufacturers are already making photosites smaller than 2µm, and with very good quality (e.g., the Canon G10).
Suppose someone makes a full-frame sensor with, say, 270 Mpxl. Instead of giving us the full resolution filtered through a Bayer array, I would divide the sensor into large pixels, each made up of 3x3 of the smaller ones. That would still give us a 30 Mpxl sensor that would be top notch for most of us, and the quality should be equivalent to (or better than) current sensors. With 9 sub-pixels in each photographic pixel we could have any combination of RGB array, and we could even include some unfiltered pixels to satisfy those strange people who like to shoot (and print) in black & white. I guess that 3 unfiltered pixels plus 2 each of RGB would make a great all-around sensor.
I think that FF sensors like this could be made; the only problem would be the processing power required to process all that data.
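The 3x3 binning idea above is easy to check numerically. A minimal sketch (the 18000 x 15000 photosite layout is a hypothetical split of 270 Mpxl chosen for illustration; real binning would also have to account for the filter pattern on the sub-pixels):

```python
import numpy as np

# Hypothetical full-frame array: 270 MP laid out as 18000 x 15000 photosites.
# Binning each 3x3 block into one output pixel yields 6000 x 5000 = 30 MP,
# as the comment suggests.
sub_h, sub_w = 15000, 18000
bin_factor = 3

sensor_mp = sub_h * sub_w / 1e6                              # 270.0
binned_mp = (sub_h // bin_factor) * (sub_w // bin_factor) / 1e6  # 30.0

# Binning demonstrated on a small synthetic frame:
# average each 3x3 block of sub-pixels into one output pixel.
frame = np.arange(36, dtype=float).reshape(6, 6)
binned = frame.reshape(2, 3, 2, 3).mean(axis=(1, 3))
print(sensor_mp, binned_mp, binned.shape)
```

Averaging is only one possible combination rule; with a mixed filtered/unfiltered 3x3 block, each output pixel would instead be a weighted merge of its differently filtered sub-pixels.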

Interesting analysis. Similar to one I posted three months ago:

http://www.largeformatphotography.info/forum/showpost.php?p=414012&postcount=132

Wow.
It's almost depressing to contemplate this. After spending (admittedly very enjoyable) years learning how to optimally expose, process and output files from digital cameras, I've gotten to a point where I can produce what I thought was a really good print. I've been very happy with the quality of my best single frame 21 megapixel captures printed up to 20x30" or even bigger. Certainly much better results than I ever got from film & darkroom. But now you're telling me the goalposts may actually be much, much farther downfield?

Sigh. Maybe I'll take up knitting.

"used at optimum aperture"

The fine print...

The camera is 400 megapixel at f2 but (in terms of edge sharpness) something roughly equivalent to 25 megapixel at f11.

Landscapers will have to use lenses with "tilt" to get meaningful depth of field with larger apertures. Will these cameras have some sort of electronic detection/display of sharpness throughout all parts of the frame so that people could adjust the tilt appropriately for maximal depth of field? Or will we have auto-tilt to go with auto-focus?

It will take significantly more than a megapixel number to replace large format as the ultimate in image quality for landscapes.

And what kind of lens will you need once you have 400 megapixels to play with, to actually be able to see with that clarity? I'd think you would have to improve L glass by about 400%. But that's a guess. It might be more. We might have to move back to exceedingly expensive primes... say, $20 or $30K per, or train our eyes to forget a few megapixels. Keeping in mind, of course, that it also takes longer to fix a massive image in Photoshop, if your computer can even handle the display job.

Dear Miguel,

It should behave closer to a monochrome array. But that sensor design has turned out to be such a niche product that I would be very surprised if it stays around long enough to get scaled up to those pixel counts.

As I indicated in my Christmas column, I don't really expect ultrahigh resolution arrays to stick with the Bayer filter.

-- -- -- --

Dear Paulo,

Yes, once you have Mondo pixels to play with, there are lots of sensor designs that look very attractive. I wouldn't worry about processing power, though. That is continuing to increase faster than pixel count. By the time you can actually buy a 100+ megapixel camera, the CPUs should be able to cope just fine.

-- -- -- --

Dear Geoff,

But, are you really after perfection? I'm not! I never even got as far as routinely using 4 x 5 film, let alone 8 x 10. Medium format (6 x 7 cm) was enough to make me very happy.

If you're very happy with what you're getting now, a 100 megapixel camera may not produce prints that make you noticeably happier. Or at least not enough happier to be worth putting up with the down side (much the same way as I would love 8 x 10 film quality, but am not willing to deal with the equipment).


~ pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 
======================================

Dear Jeff,

Absolutely.

Some years back, a magazine writer who shall remain nameless demonstrated that if you went to truly heroic lengths and worked under absolutely optimal conditions, you could make prints from 35mm film that most people could not tell from prints from 4 x 5 film, in direct comparison.

It was a demonstration of how far you could take 35mm if you really knew what you were doing (and everything was perfect), but it was not a demonstration of why you didn't need 4 x 5 any longer.

In the same spirit, my post is about how far you can take (in terms of pixel count) full frame digital before there's no point in taking it any further. But that doesn't mean it's preferable to any other format.

At least not right now. But that's a topic for another column I've been thinking about [ teasing smile ].


~ pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 
======================================


Too bad they invented digital. It's a never ending quest to obtain a quality that can never be perfected. This is why I print small. 35mm B&W looks great at 5x7 :)

One must not consider just the sensor in terms of what appears to be resolution in line pairs. Interpolation in post-processing makes all the difference; after all, we are not merely doing a science experiment that requires only real data. Photography might be a scientific data mine for some, but for an artist, "sharp" might be synthetic: real data mixed with data created through interpolation or other computer methodologies. Whether this is cheating, or no different from any other artistic license, is a question that must be answered individually.

Having acquired a Canon 5D Mark II and been boggled by the incredible detail available with this machine, I can also easily enough go into interpolation mode to see where it takes me. What I can say is that one can get a very large, huge, giant monster image which seems very detailed and sharp after using software to manipulate the image: interpolate, sharpen, change the shoulder of boundaries to modify the Gaussian nature of optics and the Bayer additive and subtractive maths. I do not care how I get my image. I naturally would love to have a zillion megapixels to play with, but in the end that alone will not make a good image, and it might get in the way.

With a lot of pixels to play with, one can imagine oneself in the film Blade Runner, wherein one image holds so much data that one can crop to one's heart's content and find within one file ten separate final images with little or no compromise. Try taking an image from such a 21-megapixel file, crop it down to one eighth of the area, sharpen and print, and you will see that, sure, it is lacking a certain snap or micro-detail, but it is possible that such a necessary abstracting of the image makes it better. Sharpness, in other words, is not necessarily natural, as we simply do not see that way. Detail can very easily get in the way.

When I read that one needs thus-and-such to make an 8x10, I laugh out loud, as I make 24x36-inch prints daily, and more than a few are made from cropped original files, or from collaged images in which each element comes from a cropped file. In other words, one need not think that the bottom line is super-detailed high resolution or nothing. What do people say about the quality of the images, never mind their goodness as art? They remark on how very sharp and detailed they are! It is not about theory, nor about a double-blind comparison proving that one can see this or that line pair relative to another sample. I do not doubt one can identify the sharper, more detailed print, but that is not how one looks at the world, nor at prints. The human brain analyzes and synthesizes to create the image that we SEE, and too much data makes no difference in the end.

I have, somewhat belatedly, settled on the fact that what I really like in photography are images that retain a certain impressionistic quality: not pin-sharp, but with good tonal values. Although I still use digital, primarily for its convenience, I find that my 35mm shots are the ones I'm printing and displaying at A3+ size. So for me, at least, I am happy to drop out of the megapixel and upgrading race, sit with my 10MP M8 (and M6 for film), and stop anguishing over technology advances. While I still retain an interest in what's happening, these cameras are not for me. Maybe others are beginning to feel the same way?

Aren't you assuming perfect sampling here, ignoring Nyquist (who usually spoils the party)? If so, taking into account Nyquist limits, what would be the required pixel count?

Forgive my ignorance; I haven't done signal processing at this level of detail since university, and your answer will probably confirm why I've always preferred thermodynamics.

I agree with Geoff Wittig, except I'm not ready to take up knitting. The prints I'm able to make now from a 10MP camera far exceed the results I was ever able to get with film. My current prints are not just "good enough", but are amazing (to me, anyway), especially when I compare them to prints I made in the 1980's from film. I've finally been able to get off of the "waiting for the next big thing" treadmill, and really enjoy photography again without worrying if there's an even better camera out there somewhere. There probably is, but now it's irrelevant.

If I understood the post at all, basically you are saying that output resolution can be increased by oversampling input resolution (having multiple capture pixels per unit of detail). But... when this happens, output resolution can be further increased by using a higher resolution lens, thus taking you back to the not-enough-pixels-to-oversample state.

I believe we are at this stage right now with cameras such as the D3x and similar high-res DSLRs. Basically, the D3x oversamples the information a lens such as the 17-35mm can provide it. Not surprisingly, what would seem like a theoretical image-quality improvement is really perceived as "the sensor is outresolving my old lens", hence the popularity of the brilliant and massive 14-24mm, which has such high resolving power that no current sensor can oversample it.

Of course, maybe I didn't understand your post at all, or I could be totally wrong, but I am having fun.

I appreciate the explanation, but like some others, for my purposes the resolution of my 5D (original) is just fine (of course, in comparison to some future camera I may become dissatisfied). Personally, I'd like to see extra pixel real estate used to improve dynamic and tonal range.

Don't know if this is possible, but could photo-sites of varying sensitivity be incorporated on the sensor, and processed so as to provide extended highlight and shadow capability?

Cheers,

Colin

Emmajay: "This is why I print small"

Yep. I print 8"x10"ish, or less.

This may be a dumb question, and probably off-topic, but I don't know where else to ask and I am curious. I assume, maybe incorrectly, that the reason we have settled on rectangular arrays of pixels is ease of sensor construction. Bayer arrays are rectangular, as are Foveons, and some of the Fuji sensors are rectangular but tilted 45 degrees.

Has anyone tried to build a hexagonally packed sensor array? Bees do it. I may be talking through my hat, because I don't know the details of sensor construction, but it seems odd that I have never even read about anything like that.

Haven't you folks been watching CSI? It's all done with software. Take a low-res surveillance camera with a cheap lens, run the picture through some software, and resolve eyelashes in the dark at 100 yards.
Kidding of course. Thanks for the thought provoking post.

Dear Ctein:

Thank you also for the interesting article. There is one large issue I have with it, however, and that is your size for the circle of confusion. You are using a CoC of 3 micrometres, while traditional 35mm film photography typically defined this about 10x larger at 25 or 30 micrometres. Please see http://en.wikipedia.org/wiki/Zeiss_formula for example.

If you revisit your analysis, you will find that a much lower pixel count is needed for a given print size. On that basis, we can already see that current digital sensors, with a pixel pitch of < 10 micrometres, can resolve the CoC reasonably well. Interestingly, it will probably also show why current digital sensors are considered better than film: this will be true if pixel size is less than the size of film grain.
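The effect of the chosen CoC on required pixel count can be ballparked directly. A sketch, assuming a 36 x 24 mm frame and one pixel per CoC diameter (proper Nyquist sampling would need at least 2x per axis, which quadruples these numbers and is part of why the column's estimate lands far higher):

```python
def pixels_needed(coc_mm, frame_w_mm=36.0, frame_h_mm=24.0):
    """Megapixels required to place one pixel per circle-of-confusion
    diameter across a full-frame sensor. Ignores Bayer demosaicing
    losses and 2x-per-axis Nyquist oversampling."""
    return (frame_w_mm / coc_mm) * (frame_h_mm / coc_mm) / 1e6

print(pixels_needed(0.003))   # ~96 MP for a 3 micrometre CoC
print(pixels_needed(0.030))   # ~0.96 MP for a 30 micrometre CoC
```

The two-orders-of-magnitude gap between the two CoC conventions is exactly the disagreement being argued here: 30 µm describes "acceptably sharp" depth-of-field blur, while 3 µm targets the limit of what a print can usefully carry.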

regards


Gijs

Another week goes by, and I still don't have to haul the view cameras to the dump yet.

I think one of the reasons we're happy with less than perfect resolution is that today's cameras easily capture more detail than we can see with our eyes. (I believe Mike commented on this a few weeks ago.)

If a large print of a landscape appears to show the viewer more detail than they would see standing at the camera's point of view, then they conclude the picture is very sharp. And if the viewer doesn't have an even more detailed print for comparison, they will be entirely satisfied that they just viewed a very sharp, highly detailed print.

Interesting article, but how is it going to make me a better photographer, or more inclined to take pictures with my existing equipment?

The problem that I see is that regardless of how many megapixels the camera is, and how fast the computer software will be to process and fix the image, the old adage will still remain true: "crap in, crap out".

I think in the end the industry will supply not the number of pixels it thinks we need, but the number of pixels it can produce given the state of the technology.

Since we're basically talking about solid-state technology, it's not unreasonable to assume that sensor development will follow Moore's law.

If we take a 5DII as an example:
2009 21MP
2011 42MP
2013 84MP
2015 168MP
2017 336MP

So in about ten years we'll all be using >300MP cameras.
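That doubling schedule can be sketched as a naive projection (purely illustrative, not a real roadmap; note that doubling every two years from 2009 reaches 336MP in 2017):

```python
def project_mp(start_year, start_mp, doubling_years=2, end_year=2019):
    """Naive Moore's-law projection: sensor pixel count doubles every
    `doubling_years` years up to `end_year`."""
    year, mp = start_year, start_mp
    out = [(year, mp)]
    while year + doubling_years <= end_year:
        year += doubling_years
        mp *= 2
        out.append((year, mp))
    return out

print(project_mp(2009, 21))
# [(2009, 21), (2011, 42), (2013, 84), (2015, 168), (2017, 336), (2019, 672)]
```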

But I wonder, will we be using the same lenses? As time progresses, sensor density will increase, meaning we'll have 100MP P&S cameras. But I wonder what a typical P&S lens will do @ 100MP??

To me it seems that such a camera will be impossible to sell: the sensor way outperforms the lens.

A bit off-topic here... but if sensors can also record distance information on every point, focus and depth of field can be further adjusted via software. Perhaps even create 3-D images.

"The problem that I see is that regardless of how many megapixels the camera is, and how fast the computer software will be to process and fix the image, the old adage will still remain true: 'crap in, crap out.'"

From the Onion.
WARNING, DO NOT CLICK ON THIS LINK IF YOU ARE OFFENDED BY PROFANITY:

http://www.theonion.com/content/video/sony_releases_new_stupid_piece_of

I was thinking about this very topic on my commute this morning - must have been channeling Ctein, a scary thought :-)

I'm a software engineer by trade and often get involved in optimizing computer systems. The art of optimization involves identifying the worst bottlenecks in a complex system and methodically removing one bottleneck at a time until the system achieves the desired performance.

In the case of a camera system, there are several potential bottlenecks: lens, sensor, camera software, computer software, printer and human eye.

I'd be very interested in Ctein's analysis of a typical enthusiast set-up (i.e., a Canon 5D Mk II with the 24-105L lens, Adobe raw conversion, an Epson printer such as the R1900 or 3800, and a typical human eye). Where is the current resolution bottleneck at f8? At f16?

Dear Robert Roaldi,
No reason at all. The Fuji Super CCD uses octagonal pixels.
I would say they might even use some non-periodic array of pixels (for instance, shaped like Penrose tiles: http://en.wikipedia.org/wiki/Penrose_tiles) with some mapping onto a square array. This would make moiré become a non-issue and, most importantly, goodbye anti-alias filter!

The good thing about this sensor is that it will be *much* flatter (and consistently the *same* flatness!) than your 8x10 sheet film! I am pretty sure that variation in film distance from the lens will also be a limiting factor in the sharpness of *that* system. (By variation I mean ripple here, not the linear distance.)

Dear Martin,

Too many digits of precision [ smile ]. I'm just constructing a ballpark estimate here. I hope the column made it clear that there's a very wide range of possible answers to this question. The import of it is that the range falls about an order of magnitude higher than what people usually assume.

For a discussion of real-world sampling problems, please see my earlier column "Sampling isn't Simple" (http://tinyurl.com/2mn7ua).

---------------------

Dear Beuler,

You're partly correct. Lens and sensor resolution can keep chasing each other's tails until you hit the theoretical physical limit for a lens (f/0.5, ~3000 lp/mm). But for a full-frame camera, you're then well into the realm of "who cares?" The improvements in detail are invisible.

My column was directed at the more practical question of what number of pixels would equate with the limits that people could see in the print (and whether there were reasonably-priced lenses and printers capable of delivering that kind of detail).

The question of oversampling in this case has less to do with increasing the resolution than with improving the quality of what is resolved. At lower resolutions, the quality of points and edges matters as much to the viewer as their fineness. The unanswered question is whether that is still true at the limits of perceptible detail. I don't know.

---------------------

Dear Gijs,

Circle of confusion (a.k.a. blur circle, a.k.a. Airy disc) is a variable, not a fixed value. The number you read is the generally recommended blur circle for "good enough" sharpness (which is why a circle in that size range is typically used in depth of field equations).

The number you report is consistent with the "good enough" resolution I gave at the beginning of my article. It is indeed an order of magnitude smaller than the limits of what the human eye can perceive. That is the point of the column.

Pixel size is not smaller than film grain size, except in cases of extremely small pixels and extremely grainy film. You're confusing grain with noise. In film, it takes extremely fine grain to produce low noise, which is what people admire. In digital, pixel size and pixel noise are different qualities. A camera with a very low pixel count can have extremely good noise characteristics; in fact, that's easier to achieve with fewer pixels.


~ pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 
======================================

Dear Scott,

It's a common misconception that 35mm format lenses aren't capable of extremely high resolutions unless they are exotics. Quite a few single focal length lenses, used near optimum aperture, get up into the range I'm talking about. I have a 50 mm lens that sells for $20-$45 used (there are a lot of them out there -- it was a standard kit lens) that goes diffraction-limited at f/4.7.

~ pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 
======================================


"There is one large issue I have with it, however, and that is your size for the circle of confusion. You are using a CoC of 3 micrometres, while traditional 35mm film photography typically defined this about 10x larger at 25 or 30 micrometres."

Gijs, the CoC you quote is for maximum acceptable blur, Ctein talks about maximum possible sharpness ...

Andreas

In 1992 my father looked at my new computer and confidently exclaimed, "You'll never fill a 170MB hard drive in your lifetime." Seventeen years later I have 1.25 terabytes of hard drive space sitting on my desk, and it's mostly filled. 400 megapixels? In 2020 we'll laugh at such small numbers.

Maybe this will meet your needs?

Special Forces' 1.86 Gigapixel Flying Spy Camera Sees All.

And at 15 frames per second!

http://blog.wired.com/defense/2009/02/gigapixel-flyin.html

"Don't know if this is possible, but could photo-sites of varying sensitivity be incorporated on the sensor, and processed so as to provide extended highlight and shadow capability?"

There's this...
http://www.dpreview.com/news/0809/08092210fujifilmexr.asp

but right now it's still just marketing material. The only camera using an EXR sensor is a high end point and shoot that has yet to enter the market for review.

I wonder how you came up with the numbers in the third paragraph. According to Wikipedia (or rather the Image Processing Handbook by Russ, which it quotes), for a human eye with excellent acuity the maximum theoretical resolution would be 50 cycles per degree (a 0.35 mm line pair when viewed from 1 m). If a print is viewed at a normal viewing distance (~30 cm), this translates into approximately 10 lp/mm. This seems to contradict your claim that if you put a 15 lp/mm print next to a 30 lp/mm print, a high percentage of viewers will select the 30 lp/mm print as sharper. Am I making a mistake somewhere?
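The conversion from angular acuity to print resolution quoted above can be checked with a little trigonometry. A sketch, using the 50 cycles/degree figure from the comment:

```python
import math

def lp_per_mm(cycles_per_degree, viewing_distance_mm):
    """Convert angular resolution (cycles per degree) into print
    resolution (line pairs per mm) at a given viewing distance."""
    # One degree subtends this many millimetres at the viewing distance:
    mm_per_degree = 2 * viewing_distance_mm * math.tan(math.radians(0.5))
    return cycles_per_degree / mm_per_degree

print(lp_per_mm(50, 1000))  # ~2.9 lp/mm at 1 m (a ~0.35 mm line pair)
print(lp_per_mm(50, 300))   # ~9.5 lp/mm at 30 cm reading distance
```

Both figures match the numbers in the comment, so the apparent contradiction isn't an arithmetic slip; Ctein's reply below locates it elsewhere (acutance rather than resolution).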

It's my experience that the larger the format, the less time you spend thinking about sharpness/resolution/lenses.

35mm & DSLR photographers can't talk enough about technical details. There's always something new to worry / argue about.

MF shooters sweat it less, but a few still stress about "film flatness" and scanner performance.

4x5 shooters sweat it even less, but a few still worry about resolution at the edges of coverage, and how to define "coverage."

8x10 shooters rarely think about sharpness, routinely accepting a bunch of diffraction in order to get the DOF/f-stop they need. Sometimes they worry about rigidity and wind.

The only thing I've heard 11x14/ULF shooters worry about is film holders and film.

20x24 Polaroid users don't worry about anything. They would worry about scheduling billionaires and superstars to photograph, but they have assistants to worry about that for them.

This theory of Ctein's is easy to prove or disprove with a 3R-sized print (3.5 x 5 inches) and a sampling of cameras with different pixel counts.

Me thinks that it's a false claim not supported by empirical evidence. The human eye's ability to see an edge is far greater than the human eye's ability to discern further detail.

Create an image file with black and white 1-pixel wide lines. Now print this with no interpolation on the best paper. Can the eye see the lines or does it just look like a picture of gray? It's easy to see ONE line, but you aren't going to make out each individual line when presented in a block of lines.

So the more detail in my print, the better it is?
Interesting discussion from the scientific perspective, but I like Jim Galli's stuff also. His site is linked on TOP's main page, and here's a link to some nice stuff (in my opinion):

http://tonopahpictures.0catch.com/TailgatePortraits/TheTailgatePortraits.html

There used to be a photographer named Fred Picker, from Vermont. He promoted Zone VI view cameras, and taught photography. He made very sharp, detailed, boring pictures. (Again, personal opinion).
The "more megapixels" talk is beginning to get old. More dynamic range, however...;>)
Joe

I just wanted to add that I just purchased a new pinhole for my Leica MP! Instead of the old home-made kind, this one is laser-drilled.
It's so sharp (for a pinhole) that I'm looking for a "Red Dot" to stick next to the hole. I think that may increase the resolution even more!

Joe

Dear Emile,

Good question about lenses vs sensors. As a general rule, smaller format lenses can be more easily designed for higher resolutions. And lens designs, especially small lenses, will improve markedly over the next decade. So it's entirely plausible your hypothetical P&S could have a lens good enough to support the sensor.

Or it could just end up being another meaningless megapixel-horsepower race for the marketeers [cynical grin].

-----------

Dear Huw,

Channeling Ctein?! You need an exorcist!

I just don't know enough about the performance of that camera and lens. I can tell you the human eye won't be the bottleneck. I haven't tested those exact models for resolution, but they ought to be good up to around 450-500 ppi (maybe better). So, an 8x10 from that printer could make use of up to circa 5K x 6K lines of real resolved information. I'm sure that's well beyond the Canon (I'd guess you'd need a Bayer array of circa 60 megapixels).

pax / Ctein
==========================================
-- Ctein's Online Gallery http://ctein.com
-- Digital Restorations http://photo-repair.com
==========================================

Dear Joe,

"So the more detail in my print, the better it is?"

Nope, I didn't say that. Nor did anyone else in this thread. Don't know where you read it, but it wasn't here.

pax / Ctein

Dear Radka,

I've written about this many times elsewhere, so I'll try to keep this brief. A good eye can indeed resolve only 8-10 lp/mm at normal close viewing distance. But, even at those limiting resolutions, the viewer can see the difference between a sine wave and a square wave. 10 lp/mm detail with sharp edges will look sharper to a viewer than 10 lp/mm detail with blurry edges. They likely won't be able to tell you why it looks sharper, because it's not about them seeing more detail. They'll just know.

In photographic terms, they're picking up on acutance, not resolution. Traditional darkroom printers actually ran into this with a number of older print materials (early color reversal papers, Ektaflex, dye transfer) that easily exceeded 10 lp/mm resolution but could not reproduce high enough spatial frequencies to make those just-visible edges SHARP. People would complain the prints didn't look as crisp as other media, even though there was no discernible difference in the amount of visually resolved detail.

Skipping over a whole bunch of intermediate experiments and theory that confirm and quantify this, one gets to the end result that up to about 30 lp/mm, prints will look sharper and sharper; above that, it won't matter.

That's a crude number, I haven't measured it exactly. Could be 25 lp/mm. Could be 40. We're just ballparking this.

Not surprising Wikipedia doesn't mention this. It's not well-known outside of specialized circles.

For a possibly-connected topic, see if researching "vernier acuity" gets you anywhere. My vision guru associates think it's a related phenomenon.

pax / Ctein
==========================================
-- Ctein's Online Gallery http://ctein.com
-- Digital Restorations http://photo-repair.com
==========================================

Dear Ken,

I think my comment to Radka addresses all your points, but if it doesn't feel free to post again (or email me).

Short version: it's not about resolution, it's about acutance. And it ain't theory. It's empirical/experimental observation, decades worth, in fact.

Your 3R-sized experiment won't work, among other reasons, because the printers can't deliver the detail. Running these experiments is NOT a trivial matter. If it were, I'd answer my $64K question (whether supersampling matters at all at the limits of resolution). Not an easy test to do at all.

pax / Ctein

I dunno if I'm buying it. Yes, 30 lp/mm will show more detail than 15 lp/mm, but 15 lp/mm is equivalent to a 760 dpi image. That is pretty durn detailed, and while I think the eye can tell the difference between 15 and 30 lp/mm with a high-contrast subject (black and white lines) if you are standing VERY close, back up a foot from the print and it's gone.
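The lp/mm-to-dpi conversion used above is worth making explicit, since it trips people up. A sketch of the arithmetic:

```python
def lpmm_to_dpi(lp_per_mm):
    """A line pair is two printed lines (one dark, one light), so
    lp/mm converts to dots per inch via 2 lines/lp * 25.4 mm/in."""
    return lp_per_mm * 2 * 25.4

print(round(lpmm_to_dpi(15)))  # 762 -- the "760 dpi" figure above
print(round(lpmm_to_dpi(30)))  # 1524
```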

As another commenter pointed out, if an image is rich tonally, then subtle gradations from one pixel to another are especially hard to see.

I've tried this for myself, printing an image at 360, 540, and 720 dpi. I can see the quality improvement from 360 to 540 up close, but the 540-to-720 jump isn't noticeable. Step back a foot and the difference isn't there; a 360 dpi print looks fantastic, especially if it's a large one (a multi-row panorama, for me and my DSLRs: Pentax K20D and Sigma SD10/SD14).

Give me a Foveon sensor, with a 1.5x crop factor, and 10 MP photosites, in a Nikon D300 body, and you'll get a camera that will outresolve the Sony A900 as well as most lenses in the real world... and files that are manageable.

Dear Folks,

Realized that there's one point I haven't talked about clearly, which is that I don't know WHY the ultra-high resolution makes a difference to print viewers. It's observable that it does, but what is going on in the human vision system is another matter.

As I mentioned, my vision-guru friend thinks it ties into vernier acuity, the ability of human vision to super-resolve edges. Sounds plausible. But it might be nothing fancier than MTFs. A camera/lens system that resolves, say, 50 lp/mm doesn't hold 100% contrast all the way out to 49.99 lp/mm and then suddenly drop to zero; contrast falls well below unity long before the limiting resolution. So it *might* be that the reason prints need around 30 lp/mm worth of detail is nothing more complicated than that such a system delivers good contrast in the detail at 10 lp/mm and below (whereas a system merely resolving 10 lp/mm wouldn't). Also plausible.
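The MTF explanation can be illustrated with the standard formula for a diffraction-limited lens with a circular aperture (a sketch; λ = 550 nm green light and an ideal, aberration-free lens are assumed, so real lenses fall off faster):

```python
import math

def diffraction_mtf(freq_lpmm, f_number, wavelength_mm=550e-6):
    """MTF of an ideal (aberration-free) lens with a circular aperture.
    Contrast falls off long before the cutoff frequency is reached."""
    cutoff = 1.0 / (wavelength_mm * f_number)   # lp/mm
    if freq_lpmm >= cutoff:
        return 0.0
    s = freq_lpmm / cutoff
    return (2 / math.pi) * (math.acos(s) - s * math.sqrt(1 - s * s))

# An f/8 lens cuts off near 227 lp/mm, yet contrast is already
# down to roughly 72% at 50 lp/mm and ~46% at 100 lp/mm.
for f in (10, 50, 100, 200):
    print(f, round(diffraction_mtf(f, 8), 2))
```

This is the shape behind the argument: a system whose limiting resolution is far above 10 lp/mm keeps high contrast at 10 lp/mm, whereas one that barely resolves 10 lp/mm renders that same detail at very low contrast.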

And likely there's a Plausible Door Number Three I don't know about.

The WHYness of it doesn't affect the need for ultra-high resolutions, but I think it's an interesting question in its own right. If some reader has insight [sic] into this, I'd love to hear from them.

pax / Ctein
==========================================
-- Ctein's Online Gallery http://ctein.com
-- Digital Restorations http://photo-repair.com
==========================================

Ctein,

I set out to somewhat disagree with you Wednesday, but yeah, 400MP seems to make sense. It is only silicon, after all; it can be done. But the sensor is only part of the system, and it also has to make economic sense: we've seen the megapixel wars in compact cameras wither and mostly die. I'd expect other things to happen in DSLRs before a massive resolution increase does. I don't think just increasing the resolution will be very useful for the majority of users. High ISO seems to be happening now; 2010 or 2011 may bring a Canon or Nikon that can literally see in the dark. What's next?

What do you think a 400 MP camera needs to be workable, in addition to the things we have today? To resolve that much detail, the whole system needs to be in sync.

Interesting subject. I hope we don't get too many pixels; otherwise every image will look like an over-processed HDR. They look terrible and plastic.
Give me an authentic grainy image any day.

cheers Eric

Dear Obi,

The comparison tests that produce these numbers are done with normal photographs, viewed at normal close distance for an 8x10 print, not special test subjects or pathological cases.

Your test has more likely simply found the limit of your printer. The best printer I've tested could render 800 ppi worth of detail. Most I've tested poop out at circa 500-550 ppi, maybe a bit more. The Epson 2200 poops out at circa 450 ppi.

Accurately testing resolution stuff requires understanding and controlling EVERY step in the imaging chain. It's why few people do it.

"Stepping back a foot" violates the boundary conditions of the question. This is not about what's acceptable, no matter how high your standards for acceptable.

This is not directed at you, specifically, but in general:

I'm not making this stuff up. I'm not trying to invent some weird demand based on an unrealistic scenario. I'm just trying to explain to you guys what kind of data can be involved for REAL photography in the REAL world.

Please stop speculating that this is some kind of stacked deck or put-up job. It's not. It's real-world data, collected from real-world experiments.

When I theorize, I tell you I'm doing so, OK? When I present something as a fact, it's real-world-based. That doesn't mean I get my facts right all the time (I'm human), but it's not conjecture.

pax / Ctein

Dear Eric,

I agree-- I'm hypersensitive to the 'plastic' look. I really hate it.

I've been mulling over the positive role of noise in an image (I mean from the psychophysical point of view, not merely as a subjective aesthetic). There are real-world situations in which a poorer signal/noise ratio produces MORE visual sensory information, not less. (Human auditory processing makes BIG use of noise.)

Mebbe it'll gel into a column, mebbe not.

pax / Ctein

Dear Pascal,

That's a very good question. Hmmm.

Well, let me start by saying that I don't know that such cameras will be built. Just because that level of quality could be built in doesn't mean there's the market to do so. Hardly anyone needs 8x10 view camera quality. OTOH, it could end up being another horsepower race... or simple gradual increase over time, as Emile suggests.

Overall camera manufacturing tolerances have to be EXTREMELY tight or it's just wasted pixels. That's why I'm not assuming this camera will be cheap.

Image quality... well, I know that we'll get about a two-stop improvement in performance over the next ten years. So, in a decade, a good 100 MP camera will have the same noise, light sensitivity, etc., as a good 20-25 MP camera today.
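The arithmetic implied here can be sketched as follows. This is a simplification I'm supplying, not Ctein's own calculation: per-pixel light gathering scales with pixel area, so packing more pixels into the same sensor costs stops per pixel, and the projected sensor improvement buys them back.

```python
import math

def stops_lost(future_mp, today_mp):
    """Stops of per-pixel light lost by shrinking pixels to raise the
    pixel count on a fixed sensor size (area ratio, in stops)."""
    return math.log2(future_mp / today_mp)

# Going from today's 20-25 MP to 100 MP costs roughly 2 to 2.3 stops
# per pixel -- about what a two-stop sensor improvement would recover.
print(f"100 MP vs 25 MP: {stops_lost(100, 25):.1f} stops")
print(f"100 MP vs 20 MP: {stops_lost(100, 20):.1f} stops")
```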

But what happens in the superresolving regime above that?

I don't know. Noise may actually be a GOOD thing. And even if it's not, will a higher level of noise be at all perceptible if the pixel size is sub-resolution?

All good questions deserving of some experiment. Unfortunately, not easy experiments to run. I know how to set them up, but I don't have the time, and I'm not sure I have the resources.

pax / Ctein

Maybe the better resolution pleases our *other* visual system? You know, like those blind folks who can still avoid obstacles without "seeing" them.

I don't know the details, but there appears to be a parallel brain function that can "see" - even though we are not conscious of it.

Three Cteinian asides from the last day, hidden from those who don't read the comments. I'm hoping that the first two will make apparent the sheer wrongness of the last one. :)

1. "There are real-world situations in which a poorer signal/noise ratio produces MORE visual sensory information, not less."

2. "Noise may actually be a GOOD thing. And even if it's not, will a higher level of noise be at all perceptible if the pixel size is sub-resolution?"

3. "Mebbe it'll gel into a column, mebbe not."

Please let it. :)

Ctein, I'm not attacking you with my previous comment (just saying that 720 dpi is insufficient for super detail, IME), and I apologize if that's how it came across.

You may well be right about the printer's resolution being the limit, instead of my eyesight. I've done this using an Epson R300 and a 2200, and seen the same results (obviously with a very apparent cliff since I'm not testing at all sorts of intermediate resolutions). What type of printer accessible to everyday photographers can resolve up to 720 dpi or better?

To me, with my equipment and in the real world, all of this is a balancing act. My camera (lens/sensor) will resolve up to X amount of detail. I can print up to some multiple of X, in dpi, without up-rezzing. I can up-rez to some multiple of X, in dpi, without image degradation being noticeable to the observer. That final size is what I consider to be the limit of an acceptable print, and it varies depending primarily on the lens, sensor, printer, and up-rezzing algorithm (in that order, in my experience). And, as with many other photographers, I am looking for ways to extend the limits of any one factor. Why? Because I agree that, given esthetic merit, a super-detailed print really has an impact on the viewer, and I want to see if I can produce such prints with a dSLR.

So, if there is validity in the assertion of 30 lp/mm prints over 15 lp/mm prints, I'd like to be able to see it with my own eyes.

Dear Obi,

Not to worry, I didn't take it personally.

The sharpest printer I've tested was the Canon i9900. It could render circa 800 ppi worth of fine detail. The Epson 2200 I had would do only a bit better than 400.

Unfortunately, I don't know of a printer you can buy that would let you test 15 vs 30 lp/mm (the Canon would let you test 8 vs 16). Such tests were run using 'continuous tone' darkroom printing materials, which were available in resolving capability ranging from under 20 lp/mm to nearly 150 lp/mm. Tests were made printing real-world negatives at low magnification (to ensure that at least 30 lp/mm worth of detail was actually making it to the print paper) and by contact printing photographic negatives and glass high-resolution targets.

It would be possible to emulate such experiments with existing digital equipment, but it would take some serious work. You'd have to scale everything down to a much lower resolution and then force the viewer to observe the print from a proportionately greater distance. In theory, it's easy. Actually doing it so that there are no experimental artifacts, both mechanical and psychological, is, as you're discovering, not so easy. I'm not sure I could set up such digital experimental conditions well enough to convince myself.

Drop by my house some time and I can probably dig out some test prints which you'd find pretty convincing. But doing the tests yourself would take real work.

And, again reminding people, this is not a discussion about what is 'acceptable', it's about what is so good that making it better makes no detectable (not significant, DETECTABLE) difference.

pax / Ctein
