Written and illustrated by Ctein
I am, for the most part, not a fan of "sharpening for output" nor of upsampling programs. Sharpening for output has, for me, translated into harsh, unrealistic edges and a degradation of what I would (imprecisely) call photographic quality. It also seems to be fundamentally misapplied—the times when I do need output sharpening are when I'm printing very small, not very large, as the printer rendering algorithms tend to suppress very fine, low-contrast detail.
As for upsampling, I gave that a thorough inspection about a dozen years back in these very pages:
It's Bigger, But Is It Better?
It's Bigger, But Is It Better? Part II
It's Bigger, But Is It Better? Part III
The takeaway, if all that is seriously longissimus, non legi [too long, didn't read —Ed.], is that the improvements were modest, nothing was superior under all circumstances, and Photoshop's built-in resizing algorithms worked, on average, as well as anything else. For the heck of it, I checked out the notion that upsampling to a printer's native resolution (360 or 720 PPI in the case of Epson; other manufacturers may be different) produced a higher-quality print than letting the printer's driver handle this (which it does behind the scenes when you hand it something of a different PPI). It didn't.
In the years since then third-party programs have improved, but so has Photoshop. I haven't been induced to change my practices.
Except—a few years back, I watched a conversation between Jeff Schewe and Michael Reichmann. Jeff, printing from Lightroom to an Epson 3880 printer, found that files upsampled from their native resolution to 360 or 720 PPI in Lightroom printed out distinctly more sharply than those upsampled by the Epson print driver.
Jeff noted that that didn't mean that another program (or printer) would yield the same results. I use Photoshop, not Lightroom. They are two different products with two different development teams, so they may or may not share the same upsampling code.
But…Jeff is another one of those Great Printing Experts, and 95% of the time we're in agreement (especially around the recommended settings for the Epson 3880 printer to produce the very best quality). I'm not going to ignore his four-year-old results in favor of my twelve, nuh uh. I needed to test out his assertion on my P800 printer, with Photoshop.
Big surprise (not), Jeff turned out to be right. A dozen years ago, I was probably right, but a dozen years is forever in the digital technology world. It pays to occasionally recheck one's beliefs.
Photographs upsampled in Photoshop to my printer's native resolution look distinctly better than those left to the printer's ministrations. Better and cleaner fine detail, more nicely delineated edges.
Not all photographs benefit from this. In particular, for those where some degree of grain/noise is an integral part of the image (sometimes true in my color work, more often true in my black and white infrared), improving the acuteness of a print didn't make it better, as it also enhanced that grain structure, which had been tuned to just where I wanted it. Most of the time, though, I followed Jeff's advice and was much pleased with the improved quality. And I still avoided third-party upsampling programs.
Then Topaz Labs came out with GigaPixel AI, the second of their machine-learning-trained tools. The first had been AI Clear, which had mightily impressed me with its ability to sort out detail from garbage, so I wanted to see what this pony's tricks were like. The samples on their website looked unbelievably good, even taking into account that they were cherry-picked, because advertising.
I downloaded the program and handed it a modest task: taking what was already a very detailed and sharp photograph and upsampling it with GigaPixel AI to my printer's native resolution for a 17x22-inch print (about a 1.5X enlargement).
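If you want to sanity-check that arithmetic, here is a quick back-of-the-envelope sketch in Python. The 17x22-inch print size and 360 PPI figure come from the article; the source dimensions are just a hypothetical 20-megapixel frame, not Ctein's actual file.

```python
# Target pixel dimensions for printing at an Epson printer's native resolution.
def target_pixels(print_long_in, print_short_in, native_ppi):
    return round(print_long_in * native_ppi), round(print_short_in * native_ppi)

long_px, short_px = target_pixels(22, 17, 360)
print(long_px, short_px)              # 7920 x 6120 pixels for a 17x22" print at 360 PPI

src_long, src_short = 5184, 3888      # hypothetical ~20 MP source frame
print(round(long_px / src_long, 2))   # ~1.53x linear enlargement, i.e. "about a 1.5X enlargement"
```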
GigaPixel AI was effin' amazin'. I'm talking markedly better rendering of fine detail, both at strong edges and in subtle fine gradations, along with finer grain/noise (and less of it). All of this without artifacts or suppression of delicate tonalities! The results looked entirely natural and much better than Photoshop's upsampling. Below are sections from the Photoshop (figure 1) and GigaPixel (figure 2) renderings, at 100% if you click through to the standalone images. [Bear in mind that TOP's TypePad interface softens illustrations somewhat. —Ed.]
I looked at 17x22" prints from the non-upsampled, Photoshop-upsampled and GigaPixel AI-upsampled files, and there was a clear improvement with each step. In fact, the difference between Photoshop and GigaPixel was substantially greater than between no upsampling and Photoshop.
GigaPixel doesn't improve the prints from every single one of my photographs. Only about 75% of them. Sometimes the improvement is nothing more than finer grain/noise, where GigaPixel has subdivided blobby grains into smaller discrete ones.
At other times the improvements do look unbelievable. Take a look at figures 3 (Photoshop) and 4 (GigaPixel AI). Look at the improvement in both fine detail and noise, compared to the original file, which you're not even seeing because I can't figure out how to portray it at the same scale without upsampling. It's about what I'd expect going from a 20-megapixel camera to a 30–35 megapixel one.
GigaPixel AI is a standalone program. It will accept a variety of input formats and will output JPEG, TIFF, or PNG. For fine printing you're going to want the 16-bit TIFF output, of course. I use GigaPixel AI last in my printing workflow, after I've done all my local corrections and pixel fiddling. I duplicate the Photoshop file, save it as a TIFF, and run that through GigaPixel AI. That way I preserve my options for resizing to my printer's native resolution, no matter what size I decide I'm going to print at.
GigaPixel AI occasionally blows up, sometimes globally, sometimes just in one small part of the picture. Not often, but it happens. Machine-learning systems can get very weird notions in their silicon noggins. When it's just some small bit of the photograph that's gone off the rails, I resize the Photoshop file to the same dimensions as the GigaPixel TIFF and paste that TIFF into a new image layer on the file. That makes it easy to compare the renderings, just by switching that layer on and off. Sometimes I throw away the GigaPixel AI rendering, or I mask it so I can hide the parts where GigaPixel AI went wonky. This is not a frequent occurrence. Most of the time, GigaPixel AI wins with no fiddling.
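For what it's worth, here is a rough programmatic analogue of that layer-toggle check. It is not Ctein's Photoshop workflow, just a sketch of the same idea: resize the original to match the GigaPixel output, then map where the two renderings disagree most so you know where to look closely. The filenames are placeholders, and Pillow's 16-bit TIFF handling is rough, so treat this as a coarse sanity check only.

```python
import numpy as np
from PIL import Image

gp = Image.open("gigapixel_output.tif")                    # placeholder filenames
ps = Image.open("master.tif").resize(gp.size, Image.LANCZOS)

# Compare as 8-bit grayscale; good enough for locating trouble spots.
a = np.asarray(gp.convert("L"), dtype=np.float32)
b = np.asarray(ps.convert("L"), dtype=np.float32)

# Average the absolute difference over 256-pixel blocks and flag the worst ones.
bs = 256
h, w = a.shape
hb, wb = h // bs, w // bs
coarse = np.abs(a - b)[:hb * bs, :wb * bs].reshape(hb, bs, wb, bs).mean(axis=(1, 3))

worst = np.argsort(coarse, axis=None)[::-1][:5]
for by, bx in zip(*np.unravel_index(worst, coarse.shape)):
    print(f"check around x={bx * bs}, y={by * bs} (mean difference {coarse[by, bx]:.1f})")
```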
What's the downside? You may not be able to run GigaPixel AI on your current system. It has substantial system requirements, because it performs an insane number of calculations while it does its image analysis. It can commandeer your GPU, if you choose that option, and it'll still take minutes to render an image. I think we're talking millions of calculations per pixel, tens of teraflops total, to upsample a single photograph.
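Taking Ctein's own ballpark figures at face value, the arithmetic looks roughly like this; the 20-megapixel frame is my assumption, not his.

```python
pixels = 20e6            # assume a ~20 MP source frame
ops_per_pixel = 1e6      # "millions of calculations per pixel"
total_ops = pixels * ops_per_pixel
print(f"~{total_ops / 1e12:.0f} trillion operations")   # ~20 trillion, i.e. tens of tera-operations
```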
As is typical for Topaz Labs, GigaPixel AI has a 30-day, fully functional free trial, so you don't have to spend $99.99 to decide whether it's worth it to you.
Topaz Labs sends me the stuff for free, but there's no question that I'd pay whatever they charge for this program. It's a four-star wonder.
Ctein
UPDATE Wednesday afternoon: Ctein wrote a reply to several commenters. Because not all the comments he responded to are among the "Featured" ones, I published his reply with the rest of the comments in the Comments section. To get to the full Comments section of this or any post, click on "Comments ([number])" at the bottom of the footer of every post. —Ed.
Ctein, pronounced "kuh-TINE," rhymes with fine, is one of the most experienced and accomplished photo-writers alive. He was TOP's Technical Editor before leaving for a new career as a science fiction novelist. He has written two books of photo-tech, Digital Restoration from Start to Finish and Post-Exposure. This is his 344th column for TOP; older columns can be found under the "Ctein" Category in the right-hand sidebar.
(To see all the comments, click on the "Comments" link below.)
Featured Comments from:
Michael Elenko: "I was sold on GigaPixel AI somewhat out of desperation earlier this spring. I was curating a group exhibition and during the hanging, one print just didn't meet my standards for sharpness. I learned that the print originated from an iPhone 8, and that a friend of the artist attempted to upscale it to meet our print size requirements by using a Photoshop algorithm. I then got involved, tried other Photoshop algorithms as well as Lightroom's approach (which I usually favor), and was not satisfied. I then remembered reading about GigaPixel AI, downloaded a trial, and used it on the image file. The result was revelatory, and just in time for the exhibition opening. I've since purchased the product and am revisiting old-but-good work from 15 years ago."
Wait, what? I don't recall Ctein mentioning black-and-white infrared before. I would like to know more about THAT topic and whether it's IR film-based or all digital via a converted camera. I know it is, or at least was, anathema to Mike, referring to an article he wrote about it many years ago (probably the only article here with which I've disagreed). Ctein would have an amazing take on IR, I reckon!
Posted by: William Cook | Monday, 14 October 2019 at 09:37 AM
I'm wondering what settings you used for uprezzing in Photoshop. I find that the Preserve Fine Details 2.0 setting often rivals GigaPixel AI for larger upsizing (like 200%). I own GAI, and I keep experimenting with it. But it's much slower, even on a fast machine. And for my images (mostly urban landscapes) it hasn't yet convinced me.
Posted by: David Stock | Monday, 14 October 2019 at 01:10 PM
Ctein, I'm curious when you say upsampling in Photoshop, which upsampling: plain bicubic, or "Preserve Details"? For files from a Lumix GX7 I found that the Preserve Details option produced better results than bicubic, or than the Epson driver. However, when I upgraded (after 5.5 years) to a new camera, that changed. The Lumix G9 has 20 MP instead of 16, but more importantly it eliminates the low-pass filter and delivers a surprisingly bigger resolution increase than I expected. Also, the Preserve Details option suddenly didn't work as well—it essentially over-sharpened the files, which now looked better with bicubic. Also, the (much less than default) Smart Sharpen I had routinely been applying proved to be too much and needed to be cut back to about half the amount for any given size print. In any case, I'll be curious to try GigaPixel AI to see how much can be wrung out of M-4/3 files (if my five-year-old MacBook Pro with 16 GB can run the program).
Posted by: Carl Weese | Monday, 14 October 2019 at 02:30 PM
I trialed and then purchased Gigapixel AI. So, I have put my money where my mouth is. But I caution potential buyers: you've got to really pay attention to use this program and get what I would call good results.
First, Ctein is correct about your system. This program needs processing power, and lots of it. It takes a while on my computers to process an image: around 15 minutes. My G.A.S. these days centers around storage and computing power. My cameras are terrific.
Next, you need to check your results very carefully, every square centimeter of the image. In areas of random "detail" and lack thereof, GP AI can create some strange vermiculation-type patterning.
Also, dial way back on your sharpening, both during PP and your output sharpening. GP AI will generate a holy host of halos if you don't.
My test for this has been a work project: photographing on a copy stand the arranged contents of Duchamp's Green Box and White Box for an upcoming exhibition at my museum, the Hirshhorn. The images need to be about 48-ish inches on the long side (the Green Box image contents a bit less, but it was shot with a bunch of blank space; don't ask...). I shot it with my 645Z and a DFA 35 lens, the longest focal length I could use with the copy stand, which is about the biggest one made. In GP AI I used 1.5x and 2x. In a quick test print of the Green Box image the results were good, and gave us roughly 1:1 of the whole field.
But I did have to run multiple tests to get a good result. So, yes, worth the price if you are going to do something big, or want to print an older, lower-MP shot bigger. Just don't expect a miracle. Note that Ctein thinks it would work for 75% of his images, so 25% are a no-go.
Posted by: Tex Andrews | Monday, 14 October 2019 at 03:25 PM
2 corrections to my post: in paragraph 3, no "or" after "random"; in paragraph 4, insert "output" between "your" and "sharpening".
Posted by: Tex Andrews | Monday, 14 October 2019 at 03:29 PM
Ctein, I wonder if I could ask a specific question, and maybe the answer would be relevant to some other folks. I have a collection of images that were shot in Cuba back in 2001 using the "new" Nikon D1. Those files are 2.7 MP, about 300 PPI at 4x6 inches. For printing, what do you think is the best option? Do you just stick with 4x6 to 6x9 prints, or would a program like this make pleasing larger printing possible? Thanks.
Posted by: JOHN B GILLOOLY | Monday, 14 October 2019 at 05:07 PM
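For context, here is a little illustrative arithmetic on John's question. It assumes the D1's roughly 2,000 x 1,312-pixel frame and says nothing about whether GigaPixel AI would make the results pleasing; it only shows the native PPI at various print sizes and the upsample needed to reach 360 PPI.

```python
src_long, src_short = 2000, 1312   # Nikon D1 frame, roughly 2.7 MP

for long_in, short_in in [(6, 4), (9, 6), (12, 8), (18, 12)]:
    native_ppi = min(src_long / long_in, src_short / short_in)
    scale_360 = (long_in * 360) / src_long
    print(f"{long_in}x{short_in} in: ~{native_ppi:.0f} PPI native, "
          f"{scale_360:.1f}x upsample for 360 PPI output")
```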
Have you tried one of the best traditional interpolation programs, PhotoZoom Pro (currently at v8)?
https://www.benvista.com
It doesn't use any AI to paste in whatever fragment of image from its knowledge-base the software thinks is appropriate to bulk out an upscaled image. You always know what you're going to get, and the resulting image is based purely on the data in the image: no extra information is added. I regard resizing applications that add information over and above that found in the pixels of the original as being somehow fraudulent.
Posted by: me.yahoo.com/a/BpNafyNjzpPtO7Um4dE.LxxObL1NsA-- | Monday, 14 October 2019 at 08:29 PM
Hey Ctein,
You always drop these little bon mots that make me go, wait a minute! You said that "the times when I do need output sharpening are when I'm printing very small."
Can you define "very small"? 8x10? 4x6? Smaller? Would it be any time the natural resolution would be greater than 360dpi? I have a use for really small, really finely detailed inkjet prints, so I'm very curious.
Thank you for your time! And your interesting article.
Posted by: Trecento | Monday, 14 October 2019 at 08:42 PM
I would suggest that Ctein, and anyone who simply doesn't have the computer horsepower to run Topaz GP, take a look at Qimage Ultimate. This is Mike Chaney's dedicated printing program, which has been going for 20 years. As well as easy layout, etc., it has, for as long as I can remember, automatically resized any print size to the printer's native resolution. It also has its own custom-designed sharpening algorithms. There was a comparison on DPReview a while back of GP and Qimage, and although GP was slightly ahead, the differences were certainly not night and day. Mike has since revised his algorithms to improve them. Considering ease of use, speed, computer power, time, etc., Qimage might fit more people. It would be good to see Ctein's comparison.
Personally, I use Topaz Sharpen AI, but I continue to use Qimage for all of its other advantages with regard to printing images, as the difference in image quality is not enough to change, at least for me.
http://www.ddisoftware.com/qimage-u/
Posted by: Ian Seward | Tuesday, 15 October 2019 at 05:25 AM
Dear William,
I wrote three columns back in 2013 about my infrared work:
My IR-Converted Olympus Pen
My IR-Converted Olympus Pen II
Looking at Lenses in the Infra-Red
I believe (possibly false memory), in fact, that Mike included one of my infrared photographs in a print sale. At least, we talked about doing so. You are correct — he does detest infrared... But he likes a few of mine.
The past several years, approximately half my portfolio work has been black-and-white infrared. It's up on my website. (Well, absent the past two years — I'm getting caught up.)
~~~~
Dear David and Carl,
I tried all kinds of different settings in Photoshop. None of them were close to being as good as GigaPixel AI. Yes, it is slower. I go do something else, like work on email, while it's crunching.
~~~~
Dear Tex,
Yes, as I said, I don't do much in the way of output sharpening — like, essentially none. I've always hated it. Combining multiple sharpening/detail-enhancement routines is always going to be fraught with peril. Proceed at one's own risk.
Your remark about checking results carefully and closely is excellent advice. In fact, anyone who is printing "large" should be doing this routinely no matter what they are doing to their photographs. It is amazing what will slip by on the screen, even a large, high-resolution screen, that becomes blatantly obvious when printed out. I don't know how much paper and money I've saved by closely checking my images BEFORE sending them to the printer.
And, yes, possible local artifacts, which become less likely with each iteration of the software, but that is not the same as never happening. Goes to what I said about pasting GP AI's results into a layer and comparing that to the (similarly resized) original. Layer masks are your friend.
I think sometimes it will create a miracle, but not often. For miracles, you want Sharpen AI [smile].
A minor clarification — I've been using this program for quite a few months, so it's not that I think it would work for 75% of my images, it's that I KNOW it does. And I suppose if I eliminate the black-and-white infrared from the statistics, it's an improvement more like 90% of the time.
But, as you said, always proceed with caution and skepticism. Do not trust your software, for it is evil and will bite you in the patootie.
~~~~
Dear John,
I have no idea. Why don't you download the FREE demonstration version and try it out on a couple of photographs? Tell us what you find.
~~~~
Dear???
I tried PhotoZoom Pro 8. Not remotely in the same league in terms of image quality, detail recovery, accuracy and sharpening, or noise reduction. All I can say about it is that it's marginally better than Photoshop resampling. Emphasis on the marginal.
I think it charming that you feel that a program based on a heavy application of spline curves, which produced extremely unrealistic-looking fine detail in the test images I tried it on, is somehow less "fraudulent" than an AI-based program that produces entirely realistic-looking fine detail.
~~~~
Dear Trecento,
Very broad rule of thumb: when it's printing out at more than about 500 PPI, you ought to see nicer super-fine detail by doing a modest amount of Smart Sharpening — something like a half-pixel radius, 50%, maybe. But, really, trial and error would be your friend.
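If you want to experiment outside Photoshop, here is a minimal sketch of that sort of light unsharp-mask pass using Pillow. It is not Photoshop's Smart Sharpen, the filename is a placeholder, and the settings are only a starting point for the trial and error Ctein recommends.

```python
from PIL import Image, ImageFilter

# Works on 8-bit images; for a 16-bit master, do the equivalent in your editor.
img = Image.open("small_print_master.tif")   # placeholder filename

# Roughly in the spirit of "half-pixel radius, 50%": small radius, modest amount.
sharpened = img.filter(ImageFilter.UnsharpMask(radius=0.5, percent=50, threshold=0))
sharpened.save("small_print_master_sharpened.tif")
```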
~~~~
Dear Ian,
When Mike deigns to start programming for the Mac, which is the system I'm using, I will deign to give Qimage a look.
- pax \ Ctein
[ Please excuse any word-salad. Dragon Dictate in training! ]
======================================
-- Ctein's Online Gallery. http://ctein.com
-- Digital Restorations. http://photo-repair.com
======================================
Posted by: Ctein | Tuesday, 15 October 2019 at 03:33 PM
As I noted before, GigaPixel AI is the best up-resing program, by a bit, for many uses. However, it has some very significant failures. It is a total no-go for night or dusk images: total failure.
It also has a bizarre limit of 22,000 pixels in any direction for the output image. Topaz tech support said that's because it would take too long to process larger images. Really? Let me make that decision; I have a supercomputer that makes mincemeat of heavy computational tasks. I can produce a stitched 360 image bigger than that limit in about five seconds. If I need a bigger high-quality file, I can simply run it overnight if need be. This limit is why many other gigapixel-prone photographers have not purchased this product.
Process-intensive tasks are the norm for computational photography. Topaz, let the GPUs run!
Posted by: Robert Harshman | Tuesday, 15 October 2019 at 05:28 PM
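For a sense of scale, a 22,000-pixel cap works out to roughly these maximum long-side print dimensions. This is simple arithmetic, not anything from Topaz's documentation.

```python
max_px = 22_000
for ppi in (240, 300, 360):
    print(f"{ppi} PPI -> about {max_px / ppi:.0f} inches on the long side")
# 240 PPI -> ~92", 300 PPI -> ~73", 360 PPI -> ~61"
```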
At the rate I'm going, I'll take a look at the software described here in a few weeks. Right now, I am enjoying the Topaz sharpening tool discussed in Ctein's last post here; I just got around to downloading a trial version last night and must say I am very impressed. Wow.
Posted by: Ken | Tuesday, 15 October 2019 at 06:57 PM
Ctein, good to see you here again. Mike, thanks for having Ctein in.
Posted by: Chip McDaniel | Tuesday, 15 October 2019 at 08:16 PM
I'd be interested to hear Ctein's thoughts on focus-stacking software.
Posted by: senorito | Tuesday, 15 October 2019 at 09:36 PM
Gosh, I wish I had this tool some years ago, when it seemed that everyone needed a big-mongous image from me! (Speaking of Jeff Schewe, I used some of his recommendations to turn 8 MP image files into really terrific 12-foot museum vinyl murals back then!)
It strikes me that "GigaPixel AI" is not just interpolating image data in the same mathematical manner as its upsizing predecessors. I suspect that it's literally reconstructing the image almost from the ground up using the original file as reference.
Posted by: Kenneth Tanaka | Wednesday, 16 October 2019 at 10:13 AM
I run GigaPixel AI on an iMac Pro, and sometimes, depending on the image, it takes quite a bit of time, but the results are, for most of them, just great. Recently they came up with GigaPixel for video, which doesn't run on your own computer. Check it out:
https://videoai.topazlabs.com
Posted by: M. Guarini | Wednesday, 16 October 2019 at 03:42 PM
Dear Robert,
Hmmm, I went and tried it on a bunch of night/dusk photos. It seemed to be just fine with my Christmas light photographs (I paid special attention to the areas that were not brightly lit, more night-like). Then I tried it on a bunch of photographs I made up at Lake Tahoe's Emerald Bay at dusk and by full moonlight and it was terrible.
My wild and crazy guess would be that it was well-trained for the former and not for the latter, but I really don't know what's going on under the hood, so don't believe me.
~~~~
Dear senorito,
Oh, that's an easy one — Helicon Focus! George Post put me onto it. It is so much better than Photoshop's focus stacking that it's ridiculous. Runs a full order of magnitude faster and produces immeasurably better quality results. A stack that might take me a full day's work to produce a final print using Photoshop's tool takes me an hour or less with Helicon.
Scroll down through the first two sets of postings on this page: https://ctein.com/newer_work.htm, and look for the San Juan Botanical Garden photographs. Every one of those is a Helicon Focus stack.
~~~~
Dear Ken,
I wouldn't even try to hazard a guess what's going on. Machine-learning systems do not think like people do. And there are all sorts of very different machine-learning systems out there.
I could make guesses that would be plausible, and I would be pretty sure they'd be wrong.
- pax \ Ctein
[ Please excuse any word-salad. Dragon Dictate in training! ]
======================================
-- Ctein's Online Gallery. http://ctein.com
-- Digital Restorations. http://photo-repair.com
======================================
Posted by: Ctein | Wednesday, 16 October 2019 at 04:56 PM
Mike,
This has been a problem for at least a couple of years and I believe I’ve written to you about it before. When I click on the “comments (19)” link the article is loaded separately from your blog stream, but after the article there are only the featured comments and this box where I can write my own comment. I haven’t been able to read the non-featured comments for a couple of years, at least. I’m using Safari on a two year old iPad Pro running the latest version of iOS.
Posted by: Charlie Dunton | Thursday, 17 October 2019 at 03:47 PM
Ctein,
The other up-sizing program of note, for years the best until Topaz GigaPixel came along, is now called ON1 Resize 2019. It was the best for almost a decade. Based on fractal computations, it was a major game-changer when released. It's still a bit better at dusk or very dark images, but it too struggles to do those well.
Pretty much all of the images I'm trying to upsize are aerial city spaces, about 90% of them in NYC. We get clients who want 360s, then decide they want to print too. Big problem: too few pixels in a 360 to extract large print stills.
The most amazing transformation I've seen from GigaPixel is taking an image from a high-end 360 camera and upsizing it by 150%. The perceived increase in detail is mind-blowing.
I still stand by my position that Topaz Labs is crazy stupid to limit the output size because "it takes too long to process." Let me decide that; it's simply that simple.
Posted by: Robert Harshman | Thursday, 17 October 2019 at 07:11 PM