by Ctein
Photography, as information both cultural and personal, is on the brink of a profound change. Photography has never really been part of the information revolution in a truly integrated way. Computers (and the Web) are fundamentally text-oriented information systems. We can store photos digitally, we can manipulate individual photos, and we can compile them into assemblages, all electronically. But they're still discrete documents, whole unto themselves, and their information content really isn't available except when you cyber-pick them up and look at them. All the Web does is let you look at photos faster; you can't really search or combine their native visual content the way you can with textual information. The only way around that is indirect, time-consuming, clunky database and keyword entry by some hapless human being. It's inefficient, it's incomplete, and it's highly fallible.
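To make "searching native visual content" concrete: the idea is to compare images by fingerprints computed from the pixels themselves, with no hapless human keywording in the loop. Here's a toy sketch using average hashing — my illustration only, with made-up 3×3 "images," and nothing to do with how Seadragon or Photosynth actually work:

```python
# Toy content-based matching: average hashing (aHash).
# Real systems use far more robust features; this only shows the idea
# that images can be compared by content, with no keywords involved.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p >= mean else '0' for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Two "photos" of the same scene (one re-shot slightly brighter),
# and one photo of a different scene.
scene_a  = [[200, 200, 30], [200, 30, 30], [30, 30, 30]]
scene_a2 = [[210, 205, 40], [205, 35, 35], [40, 35, 30]]
scene_b  = [[30, 30, 200], [30, 200, 200], [200, 200, 200]]

ha, ha2, hb = (average_hash(s) for s in (scene_a, scene_a2, scene_b))
print(hamming(ha, ha2))  # → 0: same scene despite the exposure shift
print(hamming(ha, hb))   # → 9: different scene
```

The hash survives the brightness change because it thresholds each pixel against the image's own mean — exactly the kind of content-level comparison that keyword databases can't give you.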
On my computer, all my writing and correspondence is almost instantly accessible. I didn't have to build cross-references or databases; with a gigaflop-plus at my fingertips, I can dig up text and context on just about anything I've written in the past 15 years in a minute or so with some simple search queries, all merged into a single reference if I so desire.
I simply can't do any of that with photographs.
Soon I will be able to. Watch this video and be amazed.
I am in awe. I mean it, really serious awe. Seadragon and Photosynth are going to deeply and profoundly change the import of photography in the world. And the import of visual information on the Web.
Take note; this is not a pie-in-the-sky technology demonstration or vaporware. These are products well along on their way to roll-out. Google "Seadragon" and "Photosynth" and you'll get a bunch of useful links, including pointers to the Microsoft site where you can download some of the beta software and run it now (you'll need a high-performance graphics card and Windows XP SP2 or Vista).
I can't say it's all too cool for words, because I just wrote about it, but it comes pretty close. This stuff is mondo scary amazing. It's too bad it comes from the Evil Empire, but y'know if Microsoft would keep giving me this kind of stuff instead of trying to foist Windows on me, I think I just might be willing to let them take over the world.
Wow, I have to thank you for posting that link. I'm nearly speechless. I have to wonder though, is this somehow related to the new "Street View" on Google maps? I can't wait for the day when I no longer have to use a map to find a restaurant, I can just view the route ahead of time as if I'm walking it and memorize the landmarks!
Posted by: Noah H | Friday, 08 June 2007 at 08:43 AM
Hmmm. It seems like a nice image viewer, but, having watched the video and browsed the site, I'm not getting as excited as the hype clearly wants me to be. I mean, it's an image viewer with smooth zooming.
I think flickr and related sites with tagging and social-network based organization are a much bigger information revolution for photography.
Posted by: Matthew Miller | Friday, 08 June 2007 at 09:08 AM
It's a powerful idea, and it's going to drive the need for more powerful graphics cards and more RAM. I still don't use metadata or tagging to any great extent, so perhaps I'm not as impressed as some, but if it has the ability to make Google image searches more meaningful, I'm for it.
Posted by: Scott | Friday, 08 June 2007 at 09:33 AM
I think the posters so far are missing the potential of this kind of thing - seeing this just inside the existing boundaries. There is great potential here and in other developing areas to expand our visual photographic concepts beyond what we are used to and what we expect.
Ctein - combine it with Jeff Han's multi-touch screen and you can have even more fun...
http://www.ted.com/index.php/talks/view/id/65
Posted by: tim atherton | Friday, 08 June 2007 at 10:07 AM
I was just discussing Photosynth this morning at work. We store hundreds of thousands of commercial images for customers of ours, and having ways to navigate them easily is very important. I personally have over 30,000 photos on my computer and navigating them is a pain. Tagging and searching is great, but it's hard to get people to do well (myself included). Nothing beats an intuitive and lightning fast visual interface.
Now when you think about combining Seadragon/Photosynth and Microsoft's new surface computing technology, it's easy to get really excited about where this is going.
Check out this surface computing demo:
http://link.brightcove.com/services/link/bcpid271552687/bctid933742930
Posted by: Chris Norris | Friday, 08 June 2007 at 10:31 AM
Full disclosure: I work for the Borg, although not in this team. The point is that this is another layer on top of the social networking/folksonomy stuff -- that I can look at one of Ctein's photos, which might link me to someone else's photo of the same area in a different season, which might link me to an article in wikipedia on the species of tree in the photo.
Said another way -- you and I may not have used the same tags to describe a photo of the same area, but because photosynth recognizes that they are photos of the same thing, someone looking at my photo can get some of your tagging goodness, and vice versa.
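That tag-sharing idea can be sketched in a few lines. Assume we're handed pairwise "these depict the same scene" matches (the image matching itself is the hard part, and it's what Photosynth does; here it's just input), then pool everyone's tags across each matched group. The data and function names below are hypothetical, not any real Photosynth API:

```python
# Sketch of cross-photo tag propagation: photos matched as depicting the
# same scene pool their tags via a simple union-find over the match pairs.
from collections import defaultdict

def propagate_tags(tags, same_scene_pairs):
    """tags: {photo: set of tags}; same_scene_pairs: [(photo, photo)]."""
    parent = {p: p for p in tags}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]   # path halving
            p = parent[p]
        return p

    for a, b in same_scene_pairs:           # union matched photos
        parent[find(a)] = find(b)

    pooled = defaultdict(set)               # pool tags per matched group
    for p in tags:
        pooled[find(p)] |= tags[p]
    return {p: pooled[find(p)] for p in tags}

tags = {
    "ctein.jpg": {"aspens", "autumn"},
    "mine.jpg":  {"colorado", "trees"},
    "other.jpg": {"beach"},
}
merged = propagate_tags(tags, [("ctein.jpg", "mine.jpg")])
print(sorted(merged["mine.jpg"]))  # → ['aspens', 'autumn', 'colorado', 'trees']
```

Neither photographer used the other's words, but once the photos are recognized as the same scene, each one's tagging goodness reaches the other — while the unmatched beach photo keeps only its own tag.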
Posted by: David Adam Edelstein | Friday, 08 June 2007 at 11:20 AM
Good post.
I guess from the oohs and aahs at the conference that this is new technology.
I like the thought of reading the Guardian this way. I wonder whether one can search, though? As the content is 'images', could one search for a particular word or phrase?
On the question of accuracy, the mapping of Notre Dame from many images is one thing - but I would like to see mapping of 'things' about which we believe we have accurate 'maps' in our heads - such as famous faces.
It would be interesting to see what a composite of George W Bush is like for example. And what about 'things' that change over time - again, thinking of people's faces?
Perhaps one could slice groups of images by capture date and build a movie of them getting older?
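That slicing could be as simple as bucketing the matched images by capture date, with each bucket becoming one frame of the movie. A toy sketch, with made-up filenames and dates standing in for real capture metadata:

```python
# Toy "slice by capture date": sort photos by when they were taken,
# then bucket them by year so each bucket can become one movie frame.
from datetime import date

photos = [
    ("gwb_2001.jpg",  date(2001, 5, 1)),
    ("gwb_2004.jpg",  date(2004, 7, 9)),
    ("gwb_2004b.jpg", date(2004, 11, 2)),
    ("gwb_2007.jpg",  date(2007, 6, 8)),
]

frames = {}
for name, taken in sorted(photos, key=lambda p: p[1]):
    frames.setdefault(taken.year, []).append(name)

print(sorted(frames))   # → [2001, 2004, 2007]
print(frames[2004])     # → ['gwb_2004.jpg', 'gwb_2004b.jpg']
```

In practice the capture date would come from the images' EXIF metadata rather than a hand-built list.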
David
Posted by: David Bennett | Friday, 08 June 2007 at 12:04 PM
I don't think this is especially original, at least the trial Photosynth application. It's striking, as so many applications are these days. It could be a lot more striking, though hardly more original, if, for example, every view were a pair of displaced frames that could be watched through stereoscopic goggles; that would be impressive (though the concept is quite old).
Posted by: Max | Friday, 08 June 2007 at 01:04 PM
I have mixed feelings about this. I see the great potential in this when it comes to linking photos from a lot of different sources - the social networking / folksonomy angle. I also admire the speed of zooming and the way a 3D space can be constructed automagically.
However, something deep inside me reacts to the beautiful Piazza San Marco represented as a cloud of white dots. This, to me, is a sort of radicalized version of Flickr: as a tool for building communities and relationships it's nice, but from the point of view of aesthetics it sucks. The user interface is ugly, and the whole concept encourages snapshot photography: the number of pictures becomes the wow factor, more than the quality of each individual photo.
But that said, I'm still a bit fascinated...
Posted by: Lars K. Christensen | Friday, 08 June 2007 at 02:38 PM
It's a big deal, if it works. The problem with tagging is that there's no incentive for most people who post images to tag them: almost all of the benefit goes to the searcher rather than the tagger. So almost no images on the Internet are usefully tagged, and image search is too ineffective to be useful for much besides entertainment, outside specialized communities like Flickr.
Photosynth, if it works, blows all of that away. Flickr becomes obsolete. Maybe a lot of stock-photo sites become obsolete. High-quality photos on obscure personal websites and blogs become more valuable because now anyone can search for them.
It will be interesting to learn how effective this software really is. If it's effective, a lot of things we take for granted will change.
Even if it's not as good as the hype, it seems likely that someone will develop effective software of this type eventually.
Posted by: Jonathan | Friday, 08 June 2007 at 03:24 PM
Finally, a visual representation of the semantic-web idea that almost anyone can understand. It is quite simply impressive, even though it only hints at what else can be done.
Posted by: Jernej | Friday, 08 June 2007 at 04:05 PM
"Maybe a lot of stock-photo sites become obsolete. High-quality photos on obscure personal websites and blogs become more valuable because now anyone can search for them."
Or from a different angle, anyone can search for them and lift them. Photostealing is already enough of a problem on the web -- this is just going to make it worse.
Posted by: Clint | Friday, 08 June 2007 at 05:46 PM
"Or from a different angle, anyone can search for them and lift them. Photostealing is already enough of a problem on the web -- this is just going to make it worse."
Isn't that a sort of glass-half-empty, man-the-barricades, status quo way of looking at it?
The old models don't and won't work any more (they already don't, when Bruce Davidson loses a major assignment to a girl on Flickr), so why stick with them instead of finding new models?
Posted by: tim atherton | Friday, 08 June 2007 at 10:47 PM
Wait until what's on *that* video gets matched up with this:
http://link.brightcove.com/services/player/bcpid932579976?bclid=932553050&bctid=933742930
Posted by: Bill Millios | Saturday, 09 June 2007 at 01:24 PM
I wonder if this will become an excuse for our government to censor our photos however they please because a shot may contribute to an elucidation of a supposed security risk.
Sorry - your shot of Gramma must disappear because there is a secret military installation 1200 yards away over her left shoulder.
Posted by: gingerbaker | Saturday, 09 June 2007 at 02:45 PM
Hi,
Yes, that is pretty impressive! I saw a talk by Richard Hartley at ICPR in August 2006 who demoed the system (I think he even used the Notre Dame dataset)... it was an academic project then, so I wonder whether they bought it or put it together independently. I couldn't find any link between them, and they have some pretty smart people at corporate research.
Well, anyway, I think that puts a lot of pressure on the Google/Mac people, since there are decades of research by lots of groups behind this technology, and now MS seems to own it(?). If you know any details, that would be interesting!
Posted by: georg | Sunday, 10 June 2007 at 08:00 AM
Reminds me of the scene in Blade Runner where he's searching through a photograph, into other rooms and around corners.
Posted by: JimB | Sunday, 10 June 2007 at 09:21 AM
OMG....
Posted by: Scott Jones | Monday, 11 June 2007 at 09:05 PM